Is OpenAI putting humanity in danger?
Day 139 / 366
Recently, many people have been resigning from OpenAI. This came as a shock to some, given that OpenAI is currently leading the AI race by a comfortable margin. Moreover, these resignations came just days after the release of their latest model, GPT-4o.
The biggest of these was Ilya Sutskever, OpenAI's chief scientist and one of the most influential figures in AI right now. People speculated that Ilya and many others had seen OpenAI achieve AGI internally, and that this is what scared them into leaving.
The truth may not be so far from this. When Jan Leike, the head of alignment at OpenAI, resigned, he explained his reasons in a Twitter thread.
Alignment is the work of making sure that AI systems do not cause harm to human beings. Jan revealed that OpenAI is not putting enough resources into this, and that the company is not sufficiently concerned with the potential harm AGI could cause to humanity. In his view, all it cares about right now is shipping shiny new products and making a profit.
After Jan, other ex-OpenAI employees have come out with similar comments as well.
Sam Altman, the CEO of OpenAI, has yet to say anything about this.
This could mean the downfall of either OpenAI or humanity, and I really hope it's the former.