A letter bearing Elon Musk's signature calling for a pause in AI research has ignited controversy.

It has emerged that some of the signatures on the letter were fake, and researchers whose work it cited have denounced how their research was used.



The controversy surrounding the letter demanding a pause in artificial intelligence research continues to grow, as it has been revealed that some of the signatories were fake, and many researchers cited in the letter have denounced its use of their work. The letter, which was co-signed by Elon Musk and thousands of others, called for a six-month halt on the development of AI systems that are "more powerful" than OpenAI's GPT-4. Notable figures such as Apple co-founder Steve Wozniak and cognitive scientist Gary Marcus lent their support, as did engineers from Amazon, DeepMind, Google, Meta, and Microsoft.

OpenAI, a company co-founded by Musk and now backed by Microsoft, developed GPT-4, which has the ability to engage in human-like conversations, compose songs, and summarize lengthy documents. The letter claimed that AI systems with "human-competitive intelligence" pose significant risks to humanity. However, researchers cited in the letter have criticized its use of their work, and some signatories have since backed out of their support.

The letter calling for a pause in AI research also proposed that during this break, AI labs and independent experts should work together to develop and implement shared safety protocols for advanced AI design and development, which should be rigorously audited and overseen by independent outside experts.

The Future of Life Institute, which coordinated the effort, cited research from 12 experts, including university academics and current and former employees of OpenAI, Google, and DeepMind. However, four of these experts have since expressed concern that their research was used to support the letter's claims.

Initially, the letter lacked protocols for verifying signatories, and names appeared on it belonging to people who had not actually signed, including Chinese President Xi Jinping and Meta's chief AI scientist Yann LeCun, who clarified on Twitter that he did not support it.

Critics have accused the Future of Life Institute, which is primarily funded by the Musk Foundation, of prioritizing hypothetical apocalyptic scenarios over more immediate concerns regarding AI, such as the potential for racist or sexist biases to be programmed into AI systems.

The letter cited 12 pieces of research, including "On the Dangers of Stochastic Parrots," a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google and is now chief ethical scientist at AI firm Hugging Face. Mitchell criticized the letter, stating that it was unclear what counted as "more powerful than GPT-4."

Mitchell's co-authors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter, with Bender labeling some of its claims as "unhinged." Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, took issue with her work being mentioned in the letter. Her research argued that the present-day use of AI systems could influence decision-making regarding climate change, nuclear war, and other existential threats.

In response to the criticism, FLI's president, Max Tegmark, said that both short-term and long-term risks of AI should be taken seriously. He stated that citing someone in the letter only meant that they were endorsing a specific sentence, not necessarily the entire letter.
