Privacy Law

The Transatlantic AI Tightrope: Navigating the EU AI Act for U.S. Attorneys

June 2024

By Christy Hsu

Mark Webber

As a seasoned tech lawyer with a quarter-century of experience, including a significant tenure in Silicon Valley, Mark Webber has closely monitored the evolution of technology and privacy laws. His initial foray into this field coincided with the introduction of the UK Data Protection Act 1998. That experience gave him a foundational understanding of technology and privacy that has been essential in his subsequent work, especially in the burgeoning field of artificial intelligence (AI).

In April, Christy Hsu, a member of the California Lawyers Association’s Privacy Law Section, interviewed Mark Webber to unpack the complexities of the European Union Artificial Intelligence Act and discuss its implications for U.S. lawyers whose clients might be operating in or deploying AI to the European market.

Mark, as a seasoned tech lawyer, you have witnessed the evolution of technology and privacy laws. Can you explain the origins and principal objectives of the European Union’s Artificial Intelligence Act, particularly its implications for companies operating in or deploying AI to the European market?

The European Union Artificial Intelligence Act (AI Act) originates from a broader EU initiative to guide and control the development and deployment of AI systems within its jurisdiction. The AI Act aims to create a regulatory framework that prevents harmful AI applications while encouraging technological innovation in a safe and ethical manner. Specifically, it introduces a risk-based regulatory approach, categorizing AI systems by their potential threats to safety, privacy, and fundamental rights.

For U.S. attorneys, the implications are significant. Their clients, whether directly operating in the EU or impacting EU citizens through AI systems, must comply with this framework. The AI Act categorizes AI systems into risk tiers, and each tier comes with specific obligations and regulatory scrutiny. Understanding these categories is crucial for legal counsel to navigate compliance, manage risks, and advise on strategic deployment of AI technologies in the European market.

Given the AI Act’s pyramid of risks model, what are the potential challenges and obligations for companies under this legislation, especially those categorized under high-risk and prohibited AI systems?

The pyramid of risks model at the heart of the AI Act creates a structured framework for regulating AI systems. At the pinnacle are prohibited AI practices—those considered too harmful to be allowed, such as AI that could manipulate human behavior or exploit vulnerabilities in specific groups. Directly below this are the high-risk categories, which include AI systems used in critical areas like healthcare, policing, or critical infrastructure. These systems are subject to rigorous compliance requirements, including thorough documentation, high standards of data accuracy, and robust human oversight to mitigate risks.

The obligations for companies operating within these categories are substantial. They include ensuring that AI systems are transparent, traceable, and underpinned by secure and minimal data use. For high-risk AI, the AI Act requires extensive testing and certification processes, regular compliance checks, and adherence to strict ethical guidelines. Navigating these requirements demands deep technical knowledge and strategic legal insight to balance innovation with compliance.
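To make the tiering concrete, here is a minimal sketch in Python, with hypothetical names and deliberately abbreviated obligation lists, of how an internal triage tool might model the Act’s risk tiers. It is an illustration only, not a legal checklist:

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g., manipulating behavior, exploiting vulnerable groups
    HIGH_RISK = "high_risk"        # e.g., healthcare, policing, critical infrastructure
    LIMITED_RISK = "limited_risk"  # transparency duties (e.g., disclosing a chatbot is AI)
    MINIMAL_RISK = "minimal_risk"  # largely unregulated under the Act

# Abbreviated, illustrative obligations per tier -- not legal advice.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not deploy in the EU"],
    RiskTier.HIGH_RISK: ["technical documentation", "data governance and accuracy",
                         "human oversight", "testing and conformity assessment"],
    RiskTier.LIMITED_RISK: ["notify users they are interacting with AI"],
    RiskTier.MINIMAL_RISK: ["voluntary codes of conduct"],
}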

Considering the detailed preparation necessitated by the AI Act’s complexity and phased implementation timeline, what initial steps should organizations take to assess their compliance needs?

Organizations must first engage in a comprehensive assessment to determine whether the AI systems they utilize or plan to implement fall under the AI Act’s scope. This starts with understanding the AI Act’s detailed provisions, particularly the categorization of AI systems by risk level. An organization should designate a knowledgeable individual or team to spearhead this initiative, ensuring they are fully versed in the AI Act’s requirements and implications.

The next step involves a thorough inventory and classification of all AI systems in use or development. This classification not only determines which regulations apply but also sets the stage for a compliance roadmap. Early identification of potential high-risk or prohibited AI applications allows organizations to plan for necessary adjustments or redesigns, potentially involving extensive testing and certification. This proactive approach is essential to manage compliance effectively, given the potential complexity and time required for full adherence to the AI Act’s standards.
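As a rough illustration of that inventory-and-classification step, the sketch below (Python, with hypothetical field names and an invented, non-exhaustive high-risk list) shows the kind of record and first-pass triage an organization might maintain. Any real classification would, of course, need legal review:

from dataclasses import dataclass, field

# Invented, non-exhaustive examples of areas the Act treats as high-risk.
HIGH_RISK_AREAS = {"healthcare", "policing", "critical infrastructure",
                   "hiring", "education", "credit scoring"}

@dataclass
class AISystemRecord:
    name: str
    purpose: str                     # what the system does and for whom
    deployed_in_eu: bool             # or materially affects people in the EU
    use_case_areas: list[str] = field(default_factory=list)
    classification: str = "unclassified"

def triage(record: AISystemRecord) -> str:
    """Naive first pass: flag the obvious buckets and the open questions."""
    if not record.deployed_in_eu:
        return "likely out of scope -- document the analysis anyway"
    if set(record.use_case_areas) & HIGH_RISK_AREAS:
        return "potentially high-risk -- full compliance workstream"
    return "limited/minimal risk -- transparency and monitoring"

For instance, triaging a record with use_case_areas=["hiring"] and deployed_in_eu=True would flag a potential high-risk system for deeper review.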

With the AI Act rolling out stringent compliance requirements for AI systems, preparing can feel like training for a marathon—both daunting and necessary. How do you propose we gear up and motivate our team to manage these complex regulations effectively? What strategies would be most effective in ensuring that our AI initiatives are not only compliant but also innovative within these new legal frameworks?

Getting to grips with new rules can be a bit like learning to dance—you might step on a few toes before you nail that routine! Understanding and embracing the upcoming obligations is key. While some requirements of the AI Act might seem distant, prepping early is essential as it could eat up a good chunk of time. Initially, it would be smart for someone on your team to dive deep into the AI Act’s demands, risk categories, and what compliance looks like. Wondering if the AI shenanigans your business is up to might fall under this new umbrella? It is usually easy to spot the no-nos, but figuring out whether your current or future AI setups are “high-risk” could be a bit trickier. It is a common myth that all AI activities are automatically in the AI Act’s spotlight—nope, not the case! A thorough AI inventory followed by a classification spree is what I’d recommend. This step will really highlight the workload ahead.

Then, it is all about a four-step shuffle: Assess, Identify, Govern, and Deploy. You will need to choreograph a roadmap for compliance and risk management, keeping it flexible to twist and turn as your AI initiatives evolve. Identify your company’s role—are you the choreographer (developing and training AI) or are you just grooving to someone else’s tune (deploying AI trained by others)? If you are creating the moves, there is more on your plate; if not, it is more about managing the dance partners (a.k.a. the supply chain). Either way, setting up a governance framework, defining roles, and sketching out a roadmap to tick off compliance steps is crucial.
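One way to picture that four-step shuffle is as a simple roadmap structure. The Python sketch below uses hypothetical step contents, not an exhaustive checklist, and also captures the provider-versus-deployer split:

# Illustrative roadmap for the Assess / Identify / Govern / Deploy shuffle.
ROADMAP = {
    "assess":   ["inventory AI systems", "classify by risk tier", "confirm AI Act scope"],
    "identify": ["determine role: provider (builds/trains) vs. deployer (uses)",
                 "map supply-chain dependencies"],
    "govern":   ["stand up a governance framework", "assign owners",
                 "define documentation and oversight policies"],
    "deploy":   ["run compliance checks before launch",
                 "re-assess as use cases evolve"],
}

def next_steps(role: str) -> list[str]:
    """Providers shoulder more of the load than deployers under the Act."""
    extra = (["testing and certification", "technical documentation"]
             if role == "provider"
             else ["vendor due diligence", "contractual assurances"])
    return ROADMAP["assess"] + ROADMAP["identify"] + extra + ROADMAP["govern"] + ROADMAP["deploy"]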

As for the motivation conundrum, let’s face it—this is going to add some extra steps to an already bustling workday. But here is a twist: much of this process is solid best practice, even if the AI Act does not directly apply. It is wise for any business to establish policies and safeguards as part of an AI risk management routine. Whether you are pitching AI to others or using it responsibly, these steps are essential. Who knows? The training and exploration involved might just spark some enthusiasm!

In the chess game of global business, where the AI Act shapes moves on the board, how do you craft legal strategies that extract the most value from the AI Act’s provisions for strategic advantage? Could you share an example where your insight into EU regulations as a European practitioner significantly boosted a client’s strategic stance within this complex legal maze?

Tailoring legal advice under the AI Act involves a nuanced understanding of both the legislation and the specific business operations of the client. For firms operating across EU jurisdictions or globally, strategic positioning involves leveraging the regulatory requirements to their advantage. This could mean using compliance with stringent EU standards to demonstrate high levels of corporate governance and data ethics, which can be a significant market differentiator.

Legal advice here is not just about keeping up—it is about staying ahead. Advising clients under this framework also involves scenario planning and strategic foresight—anticipating potential shifts in the regulatory landscape and preparing clients to pivot or adapt their AI strategies accordingly. For example, aligning an AI deployment strategy with the EU’s high standards for data protection and ethical considerations can not only facilitate smoother market entry but also enhance the client’s reputation and trust with European consumers and regulators.

AI has become a strategic linchpin, and the nuances of AI law cast different shadows depending on whether a company crafts, integrates, or simply uses another’s AI solutions. It’s a landscape where many businesses juggle multiple AI applications, often starting with a single use case before the technology expands its reach unexpectedly—this kind of rapid expansion and scope creep is where the peril lies. In this environment, fostering an agile approach is crucial to remain in sync with evolving needs. Here, the AI Act emerges as a beacon, sparking crucial dialogues. Even for businesses outside its immediate scope, the mere process of querying their compliance and potential obligations under the AI Act is beneficial. This proactive stance—assessing what’s in play, documenting findings, and exploring implications—charts a course towards understanding risks and embracing accountability. It is a blueprint that smart companies can apply globally, not just in Europe. New regulations and the safe use of AI are universal concerns, making these rules a springboard for broader governance initiatives. They prompt businesses, consumers, and employees to engage and question—a vibrant, healthy process.

As we ride the wave of AI evolution and the ever-tightening grip of regulation, what future shifts in AI legislation do you see on the horizon? How can companies, especially sprightly startups, gear up for these changes to stay both compliant and competitive?

I must admit, I am a bit of a skeptic when it comes to the future landscape of AI rules on the global stage. Some folks ponder whether we even needed new AI regulation, since there was already a heap of laws, like those on data, keeping AI in check. These rules are meant to curb the naughty bits of AI, but let’s be honest—some shenanigans are so tempting that they might happen regardless. When you toss “high-risk” into the mix, the law demands assessments that could slow down both adoption and innovation. I have seen clients opt to bypass the EU altogether to dodge these hurdles, which might sound wise but also robs a hefty market of some tech magic, potentially letting the EU lag in the global tech race. Not every jurisdiction is following the EU’s lead, as seen in the UK’s more chilled, non-statutory, pro-innovation approach. It took a “wait and see” stance, allowing regulators to adapt as AI unfolds. But even the UK is inching towards beefing up regulatory roles and eyeing international cooperation to tackle genuine AI risks and misuse.

The big challenge I see brewing is the explosion of AI regulations. The AI hype is not just fluff; governments worldwide are scrambling to balance the risks with the perks of innovation. The tech titans might weave through these regulations with ease, but the little guys and startups might find it a daunting maze. If we end up with a patchwork of diverse rules, steering AI development could get tricky. Just like with global data laws, companies might need to craft a bespoke compliance framework. AI firms focused on in-house development might breathe easier, but those in the sales arena are getting yanked in all directions by client demands. In a world where it is often the customers, not governments, setting the standards, the energy that could fuel best practices gets sucked into contract negotiations.

Keeping your ear to the ground and eyes wide open is key. Monitoring these shifting sands is tough, especially when the early regulations, hastily assembled, may soon be outdated by the swift pace of AI advances. Regardless of where a business operates, the EU’s rules aren’t that unique. Companies should gear up to fully grasp their AI and take accountability for it. This means ramping up transparency, tracking new standards, and understanding the innards of your AI systems. It is better to bake this into the development phase rather than scramble to catch up later. Encourage a culture of meticulous documentation of AI training and pull various stakeholders into the oversight loop. The days of being oblivious to AI risks are over. Every company needs a structured approach to question, control, and invest continuously in AI ethics and governance.

