Rising AI Regulation: What Businesses Need to Know and How to Prepare
As the EU finalizes its AI Act and the global AI regulatory landscape evolves, businesses must prepare for stricter AI regulations impacting their operations.

The landscape of artificial intelligence (AI) and machine learning (ML) is poised for a significant transformation, as regulatory frameworks emerge to provide clear guidelines on the development and implementation of AI technologies. As the European Union (EU) finalizes its AI Act and generative AI continues to rapidly evolve, businesses worldwide should prepare for stricter AI regulations that will impact their operations, products, and services.
To better understand what AI regulation may look like for companies in the near future, we can examine the key features of the EU AI Act, the possible effects of the global expansion of AI regulations, and the strategies that organizations should adopt to prepare for these changing times.
The EU AI Act and Its Global Implications
Scheduled for a parliamentary vote by the end of March 2023, the EU AI Act is expected to establish a global standard for AI regulation, much like the EU's General Data Protection Regulation (GDPR) did in 2018. If the timeline is adhered to, the AI Act could be adopted by the end of the year.
Though it is a European regulation, the impact of the AI Act is likely to extend far beyond the EU. The so-called 'Brussels effect' will compel organizations operating on an international scale to conform to the legislation, while US-based and other non-EU companies will likely find it in their best interest to abide by its stipulations. Recent moves, such as Canada's proposed Artificial Intelligence and Data Act and New York City's regulation of automated employment decision tools, further signal this trend towards adopting AI regulations beyond the EU's territory.
AI System Risk Categories Under the AI Act
The AI Act proposes three risk categories for AI systems, each accompanied by its own set of guidelines and consequences:
- Unacceptable Risk: AI systems in this category will be banned. They include manipulative systems that can cause harm, real-time biometric identification systems used in public spaces for law enforcement, and all forms of social scoring.
- High Risk: This category covers AI systems such as models that scan and rank job applicants; these systems will be subject to specific legal requirements.
- Limited and Minimal Risk: Many of the AI applications currently used by businesses (including chatbots and AI-powered inventory management tools) fall under this category and will largely remain unregulated. However, customer-facing limited-risk applications will require disclosure that AI is being used.
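For teams taking stock of their own systems, these tiers can be captured in a simple internal inventory. The sketch below is purely illustrative, not an official taxonomy: every identifier is hypothetical, and it encodes only the two rules summarized above; classifying a real system still requires legal review.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The AI Act's risk tiers as summarized above (illustrative only)."""
    UNACCEPTABLE = "unacceptable"              # banned outright
    HIGH = "high"                              # subject to specific legal requirements
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # largely unregulated


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    purpose: str
    tier: RiskTier
    customer_facing: bool

    def requires_ai_disclosure(self) -> bool:
        # Per the summary above, customer-facing limited-risk systems
        # must disclose that AI is being used.
        return self.customer_facing and self.tier is RiskTier.LIMITED_OR_MINIMAL


chatbot = AISystemRecord("support-chatbot", "customer support",
                         RiskTier.LIMITED_OR_MINIMAL, customer_facing=True)
print(chatbot.requires_ai_disclosure())  # True
```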
AI Regulation: What to Expect
As the AI Act is still under draft and its global effects are undetermined, the exact nature of AI regulation for organizations remains uncertain. However, its impact will likely depend on the industry, the type of model being developed, and the risk category it belongs to.
Regulation may entail third-party scrutiny, with auditors stress-testing AI models against the intended target population. These tests would assess factors such as model performance, margins of error, and disclosure of the model's nature and usage.
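The Act does not prescribe a test procedure, but a margin of error on a performance metric is commonly estimated from an evaluation sample drawn from the target population. The sketch below is one illustrative approach, assuming a held-out set of per-example pass/fail results; the function name and the data are hypothetical.

```python
import random


def bootstrap_accuracy_interval(correct_flags, n_resamples=1000, alpha=0.05, seed=0):
    """Estimate accuracy and a (1 - alpha) bootstrap interval, i.e. a margin of error.

    correct_flags: list of 1/0 values, one per evaluation example drawn
    from the intended target population.
    """
    rng = random.Random(seed)
    n = len(correct_flags)
    point = sum(correct_flags) / n
    # Resample the evaluation set with replacement and recompute accuracy.
    resampled = sorted(
        sum(rng.choices(correct_flags, k=n)) / n for _ in range(n_resamples)
    )
    low = resampled[int((alpha / 2) * n_resamples)]
    high = resampled[int((1 - alpha / 2) * n_resamples) - 1]
    return point, (low, high)


# Hypothetical evaluation results: 1 = model correct, 0 = model wrong.
flags = [1] * 870 + [0] * 130
acc, (low, high) = bootstrap_accuracy_interval(flags)
print(f"accuracy = {acc:.3f}, 95% interval = [{low:.3f}, {high:.3f}]")
```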
For organizations with high-risk AI systems, the AI Act has already provided a list of requirements, including risk-management systems, data governance and management, technical documentation, record keeping, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessment, registration with EU-member-state governments, and post-market monitoring systems. In addition, AI industry reliability testing (similar to periodic roadworthiness inspections for automobiles) is expected to become more widespread.
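Several of these requirements, record keeping and post-market monitoring in particular, come down to capturing what a model saw and what it decided. The sketch below shows one illustrative way to structure such an audit record; the Act does not mandate any particular format, and all identifiers and fields here are hypothetical.

```python
import json
import time
import uuid


def log_prediction(model_id: str, model_version: str, inputs: dict, output,
                   log_file: str = "predictions.jsonl"):
    """Append one prediction to an append-only audit log (JSON Lines).

    A tamper-evident store and a retention policy would be needed in
    practice; this only shows the kind of record worth keeping.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")


# Hypothetical high-risk system: a CV-screening model logging one decision.
log_prediction("cv-screener", "1.4.2", {"years_experience": 7}, {"score": 0.81})
```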
Preparing for AI Regulations
AI leaders who prioritize trust and risk mitigation when developing ML models are more likely to succeed in the face of new AI regulations. To ensure readiness for stricter AI regulations, organizations should consider the following steps:
- Research and educate teams on potential regulations and their impacts on your company now and in the future.
- Audit existing and planned models to determine their risk categories and the associated regulations that will most affect them.
- Develop and adopt a framework for designing responsible AI solutions.
- Think through the AI risk mitigation strategy for both existing and future models, accounting for unexpected actions.
- Establish an AI governance and reporting strategy, ensuring multiple checks before a model goes live (a minimal sketch of such a gate follows this list).
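To make that last step concrete, a release gate can refuse to promote a model until every governance check passes. This sketch is hypothetical, assuming checks are registered as simple callables; in practice the results would come from documentation reviews, testing pipelines, and sign-off systems.

```python
from typing import Callable

# A check answers one yes/no governance question for a model version.
ReleaseCheck = Callable[[], bool]


def release_gate(model_name: str, checks: dict[str, ReleaseCheck]) -> bool:
    """Run every governance check; block the release if any fails."""
    failures = [name for name, check in checks.items() if not check()]
    if failures:
        print(f"{model_name}: blocked, failed checks: {failures}")
        return False
    print(f"{model_name}: all governance checks passed")
    return True


# Hypothetical checks; real ones would query actual systems of record.
approved = release_gate(
    "cv-screener v1.4.2",
    {
        "risk_tier_assigned": lambda: True,
        "technical_documentation_complete": lambda: True,
        "human_oversight_plan_signed_off": lambda: False,  # e.g. still pending
    },
)
```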
With the AI Act and forthcoming regulations signaling a new era for AI design, ethical and fair AI is no longer just a 'nice to have' but a 'must have.' By proactively preparing for these changes, organizations can embrace the world of AI regulation and leverage the full potential of this rapidly evolving technology. Moreover, businesses can utilize powerful no-code platforms like AppMaster to expedite their AI developments while ensuring compliance with emerging regulations.


