Article Summary: Finding Your Way Through AI’s Regulatory Jungle

Recommendation

As the EU moves to enact new AI regulations, the rest of the world is slowly following suit. Glean useful insights from Boston Consulting Group's Steven Mills and Kirsten Rulf on how best to navigate the complex landscape of AI regulation without impeding your ability to innovate or jeopardizing the public good. Most companies that will implement AI models are far from ready, Rulf and Mills explain, because they are waiting to see what the new regulations require before taking action. Learn what steps you can take today to prepare to implement AI responsibly while maintaining your competitive edge.

Take-Aways

  • AI companies must find immediate solutions to ensure compliance with emerging regulations.
  • To enact AI policy that supports innovation, policy-makers must get feedback from industry.
  • Many companies are unprepared to leverage the innovative potential of AI – take steps to change that.

Summary

AI companies must find immediate solutions to ensure compliance with emerging regulations.

Governments in Europe are scrambling to regulate AI, following heightened public concern about its disruptive potential, yet they must find ways to do so without impeding innovation. The EU's AI Act lays out a risk framework for new AI use cases, creating four risk categories: unacceptable risk, high risk, limited risk and little or no risk. The EU will prohibit AI systems that pose unacceptable degrees of risk. "High-risk" use cases are those that could cause emotional, physical or financial harm to users (for example, influencing a user's ability to access credit, health care or employment). The AI Act will require transparency, disclosure, certification and post-deployment documentation for any high-risk use case. Companies that fail to adhere to the new requirements face fines of up to 6% of their global annual revenue.

“Regulatory compliance will require solutions that are integrated into a company’s broader AI technology stack.”

After negotiations among the European Commission, the European Parliament and Council member states, followed by a vote on a final draft (possibly by mid-fall 2023), the Act will become law. There is likely to be a one- to two-year grace period before the law takes effect, and it will work in tandem with existing legislation, including the Data Act, the Digital Services Act and the Cybersecurity Act. In the United States, regulation is moving more slowly, but laws have already been passed at the state and municipal level in California, New York City, Illinois and elsewhere, while federal agencies such as the Federal Trade Commission and the Consumer Financial Protection Bureau are applying a patchwork of existing regulations to AI use cases. AI companies and those implementing AI systems must find ways to navigate a rapidly changing and complex regulatory ecosystem.

To enact AI policy that supports innovation, policy-makers must get feedback from industry.

Poorly written policy could impede the beneficial uses of AI. For example, a health care company wanting to use generative AI to prepare exit memos summarizing patient treatment and post-release care – a service that would benefit patients – may find regulations classifying the activity as "high risk," making the service too costly to provide. Likewise, international companies that respond to new AI regulations by layering every restriction from every jurisdiction into a single global strategy will end up with something far more restrictive than necessary or intended. Instead, they should create localized strategies, determining which regulations apply in each jurisdiction they operate in and ensuring compliance there.

“The challenge for governments is how to regulate AI without stifling innovation.”

To prevent regulation from blocking the positive potential of AI, policy-makers should continue to gather perspectives from the big tech companies at the forefront of developing foundation models. They also need to consider the needs of "the 99% of companies who will be implementing AI," giving them a voice in shaping the new regulations. If policy-makers fail to do this and create overly onerous requirements, ordinary non-tech companies will find it impossible to deploy AI.

Many companies are unprepared to leverage the innovative potential of AI – take steps to change that.

While many tech companies – especially those in the United States – deeply understand emerging AI technologies and have responsible AI teams in place, the vast majority of companies that will implement AI models are unprepared to do so. If you're in this position, don't wait for new regulations to take effect before acting – start preparing to harness the potential of AI models now, for example by putting sound documentation processes for your AI systems in place.

“It’s the other 99% of companies – those that will be implementing AI models – that need to take steps to prepare. Too many are waiting until new AI regulations actually go into effect.”

Don't wait to start setting up your responsible AI program – the process will take about three years. Aspire to build the framework in an agile, holistic way, encompassing strategy, governance, processes and tools. The faster you move, the bigger your competitive advantage: you'll have an adaptable framework whose long-term benefits outweigh the initial costs.

About the Authors

Steven Mills is a managing director, partner and the chief AI ethics officer at Boston Consulting Group Gamma in Washington, DC. Kirsten Rulf is a partner and associate director at Boston Consulting Group in Berlin.