Elon Musk, Steve Wozniak, and 1,000+ Experts Call for a Temporary Halt on Potentially Dangerous AI Development
Elon Musk, Steve Wozniak, and over 1,000 experts have signed an open letter urging a temporary halt to the development of AI technology more powerful than OpenAI's GPT-4, citing profound risks to society and humanity.

More than 1,000 prominent figures, including Elon Musk and Steve Wozniak, have expressed concern over the unbridled progress of artificial intelligence (AI) technology. In response, they have released an open letter advocating a six-month pause in the development of AI systems more advanced than OpenAI's GPT-4, citing the significant threats such systems pose to society and humanity.
Addressed to AI developers, the 600-word letter raises the alarm about the current state of affairs in AI labs, which appear to be locked in a relentless pursuit of ever more powerful digital minds. These systems have reached a point where not even their creators can understand, predict, or reliably control them, escalating the potential risks involved.
Notably, Musk was a co-founder of OpenAI, which began as a nonprofit organization dedicated to ensuring AI benefits humankind; he resigned from the company's board in 2018. Today, OpenAI operates as a for-profit entity, and critics allege that its objectives have since diverged from the original mission, with a deep partnership with Microsoft, involving billions of dollars in investment, appearing to push OpenAI toward even riskier initiatives.
Musk has publicly questioned OpenAI's metamorphosis, and his concerns have been echoed by Mozilla, which recently unveiled its new startup, Mozilla.ai. The initiative aims to build an independent, open-source AI ecosystem that addresses some of the most pressing concerns around AI development.
According to the open letter, AI labs and independent experts should use the proposed six-month pause to jointly develop a set of shared safety protocols for advanced AI design and development. These protocols would be subject to rigorous audits and oversight by external, independent experts.
Meanwhile, in a separate development, the UK Government has unveiled a whitepaper outlining its 'pro-innovation' approach to AI regulation. The framework proposes measures to bolster safety and accountability but stops short of establishing a dedicated AI regulator, as the European Union (EU) has done.
Tim Wright, Partner and AI regulation specialist at law firm Fladgate, commented on the UK whitepaper, noting that only time will tell how effective the sector-by-sector approach proves. Either way, the UK's stance contrasts sharply with the EU's strategy of a detailed rulebook and liability regime overseen by a single AI regulator.
At the time of writing, the open letter calling for a timeout on potentially dangerous AI developments has garnered 1,123 signatures. Meanwhile, low-code and no-code platforms such as AppMaster are gaining appeal among those wary of AI's trajectory, as they enable businesses and individuals to harness the technology's benefits without the concerns associated with out-of-control AI advancements.


