The Center for AI and Digital Policy (CAIDP), a prominent advocacy nonprofit, has filed a complaint with the Federal Trade Commission (FTC), alleging that OpenAI's latest language model, GPT-4, violates the agency's rules against deceptive and unfair practices. The filing follows a recent open letter, signed by AI industry figures including Elon Musk, calling for a six-month pause on training models more powerful than GPT-4.
CAIDP's complaint calls on the FTC to open an investigation into OpenAI and determine whether GPT-4's commercial release violates Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. The complaint argues that the model falls short of the FTC's guidance on AI and of the emerging norms for AI governance endorsed by the United States government.
According to the complaint, GPT-4 poses multiple threats: it is biased and deceptive, and it endangers privacy and public safety. The nonprofit also claims that GPT-4 has not undergone sufficient testing and that OpenAI makes unsubstantiated assertions about it. The complaint cites OpenAI's own published materials, which acknowledge AI's potential to exacerbate disinformation and influence operations and raise concerns about the proliferation of both conventional and unconventional weapons.
Furthermore, CAIDP accuses OpenAI of failing to carry out essential safety checks to protect children during GPT-4's testing period. The complaint quotes Ursula Pachl, Deputy Director General of the European Consumer Organization (BEUC), who argued that public authorities must reassert control over AI algorithms if companies do not take corrective measures.
By citing Pachl's statement, CAIDP signals its support for stringent government regulation of AI. The demand comes as European regulators weigh a more rigorous, rules-based approach to the technology, even as commercial players race to monetize generative AI; Microsoft's GPT-4-powered Bing chatbot, for example, now carries advertising.
The FTC's response to the complaint will matter not only for OpenAI but also for other companies building on AI, including no-code platforms such as AppMaster.io. No-code tools like AppMaster let users create enterprise-grade backend, web, and mobile applications far faster and more cost-effectively than traditional development. They put scalable software within reach of users with varying levels of technical expertise, from small business owners to developers in large organizations, for projects ranging from backend infrastructure to customer-facing portals and native mobile apps.
As regulatory bodies like the FTC scrutinize AI practices around issues such as personal data privacy, companies like AppMaster that prioritize user control must be prepared to adapt to evolving standards. AppMaster's platform has already earned a reputation as a high-performing no-code solution, recognized by G2 in multiple categories, including No-code Development Platforms, Rapid Application Development (RAD), and API Management.