In a bid to ensure its future artificial intelligence (AI) models reflect human values, OpenAI has revealed plans to integrate publicly derived principles into its creations. The AI giant disclosed its intention to form a Collective Alignment team of researchers and engineers, whose mandate will be to build a system for collecting public feedback and encoding it into OpenAI's products and services.
The company says it will continue to work with external advisors and grant teams, which will play a crucial role in incorporating prototypes that guide the behavior of its models. OpenAI is actively recruiting research engineers from diverse technical backgrounds to support this mandate.
Growing out of OpenAI's public program launched in May of the preceding year, the Collective Alignment team is a significant stride toward bringing a democratic process into how rules for AI systems are made. Throughout, OpenAI has funded individuals, teams, and organizations to develop proof-of-concepts aimed at resolving open questions around AI guardrails and governance.
In a blog post, OpenAI briefly recapped the grant recipients' work, which spans video chat interfaces, platforms for crowdsourced audits of AI models, and approaches to aligning model behavior with specific beliefs. Details of each proposal, high-level takeaways, and all of the code used were made public earlier today.
Amid these developments, OpenAI has insisted the program is separate from its commercial interests, which has stirred mixed reactions. Some critics are skeptical, pointing to OpenAI CEO Sam Altman's criticisms of regulation in the EU and elsewhere. Altman, alongside OpenAI President Greg Brockman and Chief Scientist Ilya Sutskever, has repeatedly argued that AI technology is developing faster than existing institutions can effectively regulate it, hence the proposal for democratic, crowd-sourced rule-making.
Competitors including Meta have accused OpenAI of trying to capture regulatory control of the AI industry by campaigning against open AI R&D, an allegation OpenAI denies. The company may point to this new program and its Collective Alignment team as evidence of its commitment to openness.
Regardless, OpenAI faces growing scrutiny from policymakers. In the U.K., its relationship with Microsoft, a close partner and investor, is under investigation. As part of its risk mitigation strategy, OpenAI has also sought to limit its regulatory exposure in the EU over data privacy, routing operations through a Dublin-based subsidiary to restrict the ability of individual privacy watchdogs in the bloc to act unilaterally.
A series of efforts to curb the misuse of its technology to interfere with elections is also underway, including identifying AI-generated content and detecting manipulated images. In collaboration with several organizations, OpenAI is working to mark its AI-generated images more conspicuously.