Charting New Horizons: The Impact of the EU AI Act on Business
The year 2023 was marked by exhilarating advancements in generative AI (Gen AI). Businesses across the globe experimented with innovative Gen AI applications, unveiling a world of possibilities. However, as we step into 2024, the focus is shifting from mere experimentation to realizing the true value of Gen AI. Businesses are now keen on operationalizing critical AI applications that balance costs against tangible results.
The potential of Gen AI to transform industries is immense, with IDC research estimating that GenAI’s worldwide economic impact will be close to $10 trillion by 2033. IDC’s October 2023 Global AI (Including GenAI) survey indicated that the three most important business outcomes from GenAI are increased operational efficiency, cost savings, and improved employee productivity.1
The EU Parliament voted on and adopted the AI Act on 13 March 2024.
Yet, amid these promising prospects, the global AI regulatory landscape remains intricate and continuously evolving. Enter the EU AI Act, a landmark piece of legislation that has ushered in a new era for AI governance. Navigating this digital regulatory frontier is essential for senior executives integrating AI into business operations. The EU AI Act aims to foster a future where AI’s innovative potential is harnessed responsibly, ensuring a balance between technological progress, ethical considerations, and human rights protection in line with European values and rules.
What is the EU AI Act?
The EU AI Act is a comprehensive law governing the development and use of artificial intelligence within the European Union (EU). This legislation adopts a risk-based approach to regulation, applying different rules to AI systems based on the threats they pose to human health, safety, and rights. The Act’s scope extends far beyond the EU, impacting companies and organizations worldwide that deploy AI systems within the EU or affect individuals in the region.
As the world’s first all-encompassing regulatory framework for AI applications, the EU AI Act prohibits certain AI uses while promoting rigorous safety and transparency standards for others. It outlines numerous requirements and obligations for all actors in the AI value chain, including providers, users, importers, distributors, manufacturers, and authorized representatives.
Timeline: From 2024 to 2030
2024: The EU AI Act officially enters into force, and a grace period for organizations to audit and classify their AI systems begins. Early adopters who align their operations with the Act’s provisions start to emerge, setting industry standards. The first obligations to apply include the ban on AI systems posing unacceptable risks.
2025-2026: Strict enforcement begins for providers of GenAI models. The focus shifts towards monitoring compliance and addressing early challenges in the implementation of the Act. Organizations are expected to be fully compliant, with regulators in each member state actively assessing compliance and enforcing penalties for violations.
2027-2028: The landscape of AI in the EU sees a transformation, with a significant reduction in high-risk AI violations and an increase in consumer trust. Ethical AI becomes a competitive advantage in the market.
2029-2030: The EU evaluates the impact of the Act and considers updates to address technological advancements. Organizations continue to innovate within the framework, contributing to a thriving ecosystem of safe and ethical AI. Obligations also take effect for certain AI systems that are components of large-scale IT systems established under EU law in the areas of freedom, security, and justice, such as the Schengen Information System.
By integrating compliance into the very fabric of their AI strategy, leaders can not only navigate the complexities of the EU AI Act but also harness its potential to foster innovation, build trust, and achieve a sustainable competitive edge in the era of artificial intelligence.
Why should businesses care about the EU AI Act?
The introduction of the EU AI Act brings several pressing questions, especially in the relatively nascent field of artificial intelligence. Business leaders must consider the Act’s scope, key considerations for AI utilization, and the potential implications for their operations. Noncompliance penalties can be steep, reaching up to EUR 35,000,000 or 7% of a company’s annual worldwide revenue, whichever is higher.2
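To put that exposure in concrete terms, the short Python sketch below shows how the “whichever is higher” rule scales with company size. The revenue figures are purely hypothetical, and the calculation only illustrates the upper bound of the fine.

```python
# Maximum fine for the most serious violations under the EU AI Act:
# up to EUR 35,000,000 or 7% of annual worldwide revenue, whichever is higher.
FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_penalty_eur(annual_worldwide_revenue_eur: float) -> float:
    """Return the upper bound of the fine for a given (hypothetical) revenue."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_worldwide_revenue_eur)

# Hypothetical examples: a mid-sized firm and a large multinational.
for revenue in (200_000_000, 5_000_000_000):
    print(f"Revenue EUR {revenue:,}: fine of up to EUR {max_penalty_eur(revenue):,.0f}")
```

For a large multinational, the 7% share rather than the fixed EUR 35 million cap sets the ceiling, which is why larger organizations face proportionally greater exposure.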
Moving from prototyping to implementation
To be regulation-ready, enterprises must align with the EU AI Act’s primary goal of making AI development and usage safer and more transparent. While the Act includes a phased implementation period, proactive preparation and compliance are crucial. Key requirements of the Act include:
- Risk management system and risk assessment of AI systems (see the illustrative sketch after this list)
- Data management, data governance, and documentation
- Quality management system
- Technical documentation of AI systems
- Transparency and communication with AI system users
- Human monitoring and control of AI systems
- Accuracy, robustness, and cybersecurity
- Notification of the AI system to the relevant EU authorities
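Several of these requirements, particularly risk assessment, technical documentation, and human oversight, lend themselves to a structured internal inventory. The following Python sketch is purely illustrative and not an official compliance artifact; the field names, and the four risk-tier labels commonly used to describe the Act’s risk-based approach, are choices made here for readability.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Risk categories commonly used to describe the Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    business_purpose: str
    provider: str                   # internal team or external supplier
    risk_tier: RiskTier
    technical_documentation: str    # link to architecture and model documentation
    human_oversight_owner: str      # person accountable for monitoring the system
    data_sources: list[str] = field(default_factory=list)

# Hypothetical example entry.
chatbot = AISystemRecord(
    name="customer-support-chatbot",
    business_purpose="Answer routine customer queries",
    provider="External GenAI vendor",
    risk_tier=RiskTier.LIMITED,
    technical_documentation="https://intranet.example.com/docs/chatbot",
    human_oversight_owner="Head of Customer Service",
    data_sources=["CRM tickets", "public product documentation"],
)
print(f"{chatbot.name}: {chatbot.risk_tier.value} risk")
```

Even a lightweight record like this gives risk classification, documentation links, and oversight ownership a single home, which simplifies the audit and classification work the grace period is meant for.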
Who falls under the scope of the AI Act?
One of the foremost questions for business leaders is, “Does this apply to my organization?” The AI Act casts a wide net, encompassing various entities, from startups to multinational corporations, that meet specific criteria.
Providers of AI systems: companies developing AI technologies, regardless of size or industry, are directly affected. This includes firms specializing in machine learning algorithms, facial recognition software, and other AI solutions for internal use or market distribution.
Users of AI systems: not only must AI creators comply, but users of AI systems are also within the Act’s scope. Businesses deploying AI tools for customer service, automation, data analysis, and decision-making must ensure these systems adhere to regulations concerning data handling, transparency, and accountability.
Distributors and importers: businesses involved in distributing or importing AI technology in the EU must comply with specific obligations related to marketing, sales, and the overall distribution chain of AI systems. Ensuring these technologies are compliant before reaching the European market is essential.
Third-party service providers: service providers offering AI-powered solutions, even if they do not directly develop the technology, must ensure their offerings comply with the Act. This includes companies integrating AI systems into their service portfolios for operational enhancement or customer experience improvement.
Recommendations for businesses
In light of the EU AI Act 2024, businesses operating within or in connection with the European Union must adopt a proactive compliance approach. Here are actionable steps to consider:
- Conduct an AI gap analysis: review your current and planned use of AI against the Act’s requirements, perform a risk assessment, and prioritize high-risk areas to ensure compliance with the EU AI Act (a minimal sketch of such a gap check follows this list).
- Identify your AI systems: create an overview of your AI systems and where AI is used in existing systems provided by external suppliers.
- Engage with legal experts: consulting with legal professionals specializing in EU regulations can clarify the Act’s implications for your specific business context.
- Invest in ethical AI practices: establish or strengthen your ethical AI guidelines to align with the Act’s standards, including transparent data usage, bias mitigation, and privacy assurance.
- Implement robust data governance: the Act emphasizes data quality and security. Evaluate your data governance policies to ensure they meet or exceed the EU AI Act’s stipulations.
- Foster an AI-literate workforce: education and training programs can prepare your team for the shift towards regulated AI use, emphasizing ethical considerations and compliance.
- Monitor regulatory updates: the regulatory landscape around AI will continue to evolve. Stay informed on any amendments to the Act or new guidelines issued by EU authorities.
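As a complement to the inventory sketch above, the gap analysis in the first recommendation can start as a simple checklist of the requirement areas listed earlier in this article, marked as met or not met for each system. The Python snippet below is a minimal, hypothetical illustration; a real assessment will of course require legal and domain expertise.

```python
# Requirement areas taken from the list earlier in this article.
REQUIREMENT_AREAS = [
    "Risk management and risk assessment",
    "Data and data governance",
    "Quality management system",
    "Technical documentation",
    "Transparency towards users",
    "Human oversight",
    "Accuracy, robustness, and cybersecurity",
]

def gap_report(status: dict[str, bool]) -> list[str]:
    """Return the requirement areas not yet marked as covered."""
    return [area for area in REQUIREMENT_AREAS if not status.get(area, False)]

# Hypothetical self-assessment for a single high-risk system.
status = {
    "Risk management and risk assessment": True,
    "Data and data governance": False,
    "Technical documentation": True,
    "Human oversight": False,
}
for gap in gap_report(status):
    print(f"Gap for credit-scoring-model: {gap}")
```

Starting from even this rough picture makes it easier to prioritize the high-risk gaps before strict enforcement begins.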
Conclusion
Navigating the EU AI Act 2024 requires careful planning and a commitment to ethical AI practices. By taking these recommendations to heart, businesses can comply with new regulations and position themselves as leaders in responsible AI technology use. This proactive approach will safeguard against regulatory risks, enhance corporate reputation, and build trust among customers and stakeholders in an increasingly AI-driven world.
Citations:
1 The Truth About Successful Generative AI, David Schubmehl, Kathy Lange, IBM, April 2024: https://www.ibm.com/downloads/cas/KEWKGW9E
2 Commission welcomes political agreement on Artificial Intelligence Act, European Commission, December 09, 2023: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473