Hacker News

> The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

Again, I think we should give more consideration to "The Human Alignment Problem" in this context. The transformers in question are large, heavyweight models, and not really prone to "recursive self-improvement".

If the ML-AGI works out in a few years, who gets to enter the prompts?



Me.

... ... ...

Obviously "/s", obviously joking, but it's meant to highlight that quite a few parties would all answer "me" and truly mean it, often not in a positive way.


A DAO.



