The rapid advancement of artificial intelligence (AI) and its potential threats have prompted OpenAI's leadership to call for an international regulatory body to govern AI development, similar to the International Atomic Energy Agency (IAEA) for nuclear power. According to OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever, innovation in AI is occurring at such a rapid pace that existing regulatory authorities cannot effectively control the technology.
While proud of their achievements, OpenAI's leaders recognize that the AI technology behind their widely popular ChatGPT conversational agent presents both significant benefits and unique risks. They believe AI will require some degree of coordination among leading development groups to ensure a safe transition to superintelligence and smooth integration with society.
The proposal suggests forming an international organization analogous to the IAEA, which would oversee any superintelligence effort above a certain capability or resource threshold. The new regulatory body could inspect AI systems, mandate audits, verify compliance with safety standards, and impose restrictions on deployment and required security levels. While such an organization might not be able to directly intervene against a rogue actor, it could provide a framework for establishing and monitoring international standards and agreements.
As mentioned in OpenAI's post, one practical avenue for scrutiny within the AI industry could be tracking the compute power and energy consumption dedicated to research. While determining ethical AI usage might be difficult to assess directly, regulating resource allocation and auditing energy usage could offer insight into the scale and direction of the technology's development. The founders also raise the possibility of exempting smaller companies from these regulations to avoid stifling innovation.
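To make the idea of a compute threshold concrete, the sketch below shows how a regulator might flag training runs for review. The specific threshold value, the function names, and the use of the common ~6 × N × D rule of thumb for estimating training FLOPs are illustrative assumptions, not details from OpenAI's proposal.

```python
# Hypothetical sketch: flagging AI training runs that cross a compute
# threshold for regulatory review. Threshold and estimation method are
# illustrative assumptions, not part of OpenAI's actual proposal.

# Illustrative threshold: 1e25 floating-point operations per training run.
REVIEW_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(num_parameters: int, num_tokens: int) -> float:
    """Rough training-compute estimate using the common ~6*N*D
    approximation (N = model parameters, D = training tokens)."""
    return 6.0 * num_parameters * num_tokens


def requires_review(num_parameters: int, num_tokens: int) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return estimate_training_flops(num_parameters, num_tokens) >= REVIEW_THRESHOLD_FLOPS


# A frontier-scale run (1e12 parameters on 1e13 tokens) is flagged,
# while a small run is exempt, echoing the exemption for smaller players.
print(requires_review(10**12, 10**13))  # True
print(requires_review(10**8, 10**10))   # False
```

The appeal of such a metric is that compute and energy usage are externally measurable, unlike intent or "ethical usage," which makes them a plausible anchor for audits.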
AI researcher and critic Timnit Gebru also emphasized the need for external regulation in an interview with The Guardian, stating that companies are unlikely to self-regulate unless there is external pressure to do so. This underscores the case for an international regulatory body to navigate the complex world of AI development and its potential hazards.
As AI continues to advance at an unprecedented pace, an international regulatory body could become vital in ensuring public safety, maintaining ethical standards, and facilitating collaboration among global stakeholders. No-code platforms like AppMaster, which allow users to create backend, web, and mobile applications, can aid in accelerating innovation while adhering to established guidelines and regulations.
In conclusion, the formation of a regulatory body, as proposed by OpenAI's leadership, could provide essential checks on AI research and development, leading to a safer and more responsible future for this rapidly evolving technology.