
AI Rebalances Power, Making Ethical Lines Hard to Find

Author: Evan Enzer

Dr. David Danks is a Professor of Data Science and Philosophy at the University of California, San Diego. He has authored numerous academic pieces on ethics, society, humanity, and AI. Professor Danks also advises the White House and the University of California on AI policy.

In April, Professor Danks met with Evan Enzer, a member of the California Lawyers Association’s Privacy Law Section, to discuss AI and ethics in the context of the EU AI Act. 


Professor Danks, you’re a formally trained philosopher, and we’ve both seen a lot of ethicists become expert AI advisors. Why do you think that is the case?

First, thank you, Evan and the California Lawyers Association, for inviting me to have this conversation.

I think that in many subjects, we know the ethical course of action. People should have clean water and clean air. We can fight about the trade-offs of environmental regulation, but in some sense, we already know the proper outcomes. In AI regulation, we don’t know what success looks like, and we don’t yet agree on what an ethical AI-augmented society should be. That’s where formally trained philosophers can contribute to the discussion on AI regulation. We spend a lot of time thinking about what it means to have a good society.

Could you explain how law adopts ethical models and how ethics might influence AI regulation?

For example, administrative law presupposes a kind of legal realism. We assume there is some policy that the legislature intends to enact, and the administrative apparatus figures out a way to implement it. Likewise, civil law is about how we relate to one another and settle disputes. It tries to provide answers about what it means to have a good society.

Now, the connection with AI regulation is that AI can reshape our lives in novel ways that shift historic power balances. One of the most obvious parallels is the changes around Fourth Amendment law. Do police need a search warrant to see the stored GPS locations of a cell phone? That wasn’t something we ever had to think about until recently. If the police wanted to surveil somebody, they had to be willing to invest a lot of time and energy. Now, they just need to make a phone call. Likewise, AI has the potential to dramatically reshape much of what we’ve taken for granted by automating what have always been resource and time-intensive tasks. We’ve got to rethink how we move forward with that.

Let’s jump into practical applications by talking about the EU. Why is the EU concerned about AI and trying to regulate it with the AI Act? How is it different from what we are doing in the US?

My reading is that the EU has had a much more rights-based approach to regulation than the US. The EU looks at AI and says, “This technology can systematically undermine people’s rights.”

Additionally, in the US, we predominantly regulate AI sectorally. The AI Act is sector-general. It says that if you are doing something high-risk, regardless of sector, you will be subject to various limiting conditions. The virtue of the EU approach is that nothing can slip through the cracks. The downside is that AI looks so different from one sector to another that the same solutions might not make sense in every situation.

Can you think of any examples of AI already violating a person’s rights under US or EU law?

We have ample evidence of cases where these systems have caused additional harm on top of what was already caused by human fallibility. Most famously, early on, we saw that the use of these systems for criminal recidivism prediction and bail awards was clearly very discriminatory.

Having said that, we also have plenty of cases where these systems have done better than humans. Sometimes, the problem is that AI systems are better than humans, but humans overrule them in ways that reintroduce biases and discrimination. AI can actually help detect and correct human bias. You can build a system that mirrors human judgments, and if its results are incredibly biased, that tells you the underlying human judgments were biased. So, AI becomes an investigative tool to bring discrimination to light.

The most important thing is to be aware of AI’s risks. Almost all the bad AI use cases have been examples of developers who, after the harms came to light, said, “Wow, that never occurred to us.”

Is the EU overlooking any ethical questions in the AI Act, or could it make different moral decisions?

Under the AI Act, all legal responsibility falls on the last link in the chain. If I build a model and some other company fine-tunes it, I have almost no legal responsibility.

So, for an audience of lawyers, the AI Act is potentially cutting the chain of proximate cause too early?

Exactly. And that could change. There’s a certain amount of regulation by litigation. There will be lawsuits that try to push responsibility onto upstream developers.

The other big concern is that in the last round of negotiations, the EU created an exemption for free and open-source software. We don’t want a company to be able to do something terrible and then declare a part of the code to be open source to avoid legal liability.

Before we wrap up, are there consensus red lines we shouldn’t cross regarding AI?

It’s been hard to construct red lines because AI has a bit of the old-school dual-use problem that we saw between nuclear power and nuclear weapons. AI has that same problem on steroids because it has many, many uses, not just two.

I don’t know if there is any consensus. Philosophers are a contrary bunch, but most would be against technologies that enable discrimination or bias without recourse. I think there would also be an agreement not to utilize systems that violate fundamental human rights.

To the extent that we agree on what those rights include, it would be a red line to violate them outside of extenuating circumstances. For example, there are plenty of philosophers and people who are very vocal about the idea that AI should never make life-and-death decisions. There are also plenty of people who say that, if done correctly, AI could make more ethical life-and-death decisions than humans. One example might be the difference between building an AI that kills indiscriminately and one that we deploy in the limited context of a legitimate war.

Thank you for joining me today, David.

Thank you, Evan.
