So, isn't humanity playing a suicidal game today?

Experts note disturbing trends in the behavior of new AI models: they can lie, manipulate users, and try to avoid disconnection

YKeuPolitico

Journalists have noticed that top managers at AI companies are resigning over concerns about the safety of AI technologies.

Behind the scenes: According to Axios, at least 10 senior executives from leading artificial intelligence companies, including OpenAI, Google, and Anthropic, have resigned from their positions due to concerns about potential threats AI poses to humanity.

Experts note disturbing trends in the behavior of new AI models: they can lie, manipulate users, and try to avoid disconnection. At the same time, even developers do not fully understand the mechanisms of their own systems.

One industry expert compared the current situation to boarding an airplane with a 1-in-5 chance of crashing: an unacceptably high risk for mass use.

The massive exodus of experienced professionals from key AI companies points to serious internal concerns about the safety of technologies that are being actively introduced into the daily lives of millions of people.

A growing number of AI enthusiasts and promoters claim the technology is starting to think like humans, and they envision models that within a few years will act like us, or even outperform us. Elon Musk has estimated the risk that AI could destroy the world at 20%. He is known for provocative statements, but what if he is right?

How it works: There is a term shared by critics and optimists: p(doom). It refers to the probability that superintelligent AI will destroy humanity. Elon Musk estimated p(doom) at 20%. And he is not the only one with this opinion.

In a recent podcast with Lex Fridman, Google CEO Sundar Pichai, an AI architect and visionary, admitted: "I'm optimistic about the p(doom) scenarios, but... the underlying risk is actually quite high." At the same time, Pichai argued that the higher the risk gets, the more likely it is that humanity will rally to prevent a catastrophe. Fridman, a scientist and AI researcher himself, said that his own p(doom) is about 10%.
