Thousands of technologists around the world, including Musk, call for a pause on the development of more powerful AI


Recently, there has been news almost every day about ChatGPT and artificial intelligence. Tech panic and concerns about tech ethics are drawing more attention following the release of the "smarter" GPT-4.

The latest news is that on March 29 local time, the Future of Life Institute, a non-profit organization in the United States, issued an open letter titled "Pause Giant AI Experiments". In the letter, thousands of artificial intelligence experts and industry executives called on all AI laboratories to pause the development and training of AI systems more powerful than GPT-4 for at least half a year. They suggested that if such a pause cannot be enacted quickly, "governments should step in and institute a moratorium".

The potential risks of artificial intelligence to society and humanity have become a consensus among many technologists, including "godfather of artificial intelligence" Geoffrey Hinton, Tesla and Twitter CEO Elon Musk, and Turing Award winner Yoshua Bengio. To that end, they put their names to this open letter.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the open letter reads.

In fact, this is not the first time the Future of Life Institute has publicly called for vigilance about the development of artificial intelligence. The organization was founded in the United States in 2014 with a twofold mission: to promote research on "optimistic future scenarios" on the one hand, and to "reduce existential risks facing humanity" on the other. The latter has been its primary focus.

In 2015, physicist Stephen Hawking, Elon Musk, and other scientists, entrepreneurs, and investors connected to the field of artificial intelligence jointly issued an open letter warning that people must pay more attention to the safety of artificial intelligence and its social benefits.

At that time, AI did not yet exhibit the disturbing "intelligence" it does today. But since then, Musk has said he firmly believes that uncontrolled artificial intelligence "could be more dangerous than nuclear weapons."

Eight years later, amid an increasingly turbulent economy, the signatories of the new open letter ask: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?"

Europol also issued a warning on March 27 that artificial intelligence chatbots such as ChatGPT are likely to be abused by criminals: "The ability of large language models to detect and reproduce language patterns not only facilitates phishing and online fraud, but can also be used to impersonate the speaking style of a particular individual or group." Its Innovation Lab has organized workshops on what criminals might do with such tools, outlining potentially harmful uses.

The signatories hope that the perilous race will be "paused" so that a set of shared safety protocols for the design and development of advanced artificial intelligence can be jointly developed, subject to rigorous audit and oversight by independent outside experts. AI developers, they add, also need to work with policymakers to build robust AI governance systems.

At the very least, they argue, a regulatory body dedicated to AI should be created, one that is adequately resourced and capable.