OpenAI, the creator of ChatGPT and DALL-E, has reportedly lobbied the European Union to water down its incoming AI legislation. Time Magazine obtained documents from the European Commission revealing that OpenAI asked lawmakers to amend a draft version of the EU AI Act before it was approved by the European Parliament on June 14th. Some of the changes it requested were eventually incorporated into the final legislation.
Prior to the approval, lawmakers had debated expanding the AI Act's terms to classify all general-purpose AI systems (GPAIs), such as OpenAI's ChatGPT and DALL-E, as 'high risk' under the risk categorizations described in the act. With such a designation, these AI systems would be subject to stringent safety and transparency obligations. According to Time, OpenAI fought against this classification in 2022, proposing that only companies explicitly applying AI to high-risk use cases should be made to comply with the regulations.
Google and Microsoft have also pushed to reduce the AI Act's impact on the companies building GPAIs. As OpenAI stated in an unpublished white paper sent to EU Commission and Council officials in September 2022: "By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases."
OpenAI's lobbying efforts in the EU hadn't been disclosed previously, but they proved largely successful. In the approved EU AI Act, GPAIs are not automatically classified as high-risk. However, greater transparency requirements have been imposed on 'foundation models': powerful AI systems like ChatGPT that can be adapted to a wide range of tasks. Consequently, companies that build foundation models will be required to perform risk assessments and disclose any use of copyrighted material in training their AI models.
An OpenAI spokesperson told Time that the company supports the inclusion of 'foundation models' as a separate category within the AI Act, despite the company's secrecy about where it sources the data used to train its models. It is widely believed that such AI systems are trained on large datasets scraped from the internet, including copyrighted materials and intellectual property. If OpenAI were forced to disclose such information, it could become vulnerable to copyright lawsuits, along with other large tech companies.
While lobbying efforts persist, the EU AI Act still has some distance to cover before it comes into effect. The legislation now enters the final 'trilogue' stage, in which the European Parliament, the Council of the EU, and the European Commission negotiate the law's final details, including its scope of application. It is expected to receive final approval by the end of the year and may take around two years to come into effect.
Amidst this growing regulatory landscape, no-code platforms like AppMaster.io are driving innovation in application design, enabling businesses to create backend, web, and mobile applications with ease. This empowers companies to develop comprehensive software solutions while minimizing the risks related to AI usage and the associated regulations.