Several states considering ban on legal personhood for AI

A MARTÍNEZ, HOST:

When things go wrong with AI, people sue the company that made it. One example - the families of victims in a Canadian mass shooting sued OpenAI over the shooter's interactions with ChatGPT. NPR's Martin Kaste reports on legal scholars who think it may soon be necessary to go after not just the companies but the AI itself.

MARTIN KASTE, BYLINE: Last month, Florida's attorney general, James Uthmeier, announced a criminal investigation into how a man allegedly consulted ChatGPT to plan a mass shooting.

(SOUNDBITE OF ARCHIVED RECORDING)

JAMES UTHMEIER: The chatbot advised the shooter on what type of gun to use.

KASTE: Uthmeier is investigating the company, OpenAI. But at the press conference, he alluded to another possibility.

(SOUNDBITE OF ARCHIVED RECORDING)

UTHMEIER: My prosecutors have looked at this, and they've told me if it was a person on the other end of that screen, we would be charging them with murder.

KASTE: Our legal system doesn't allow for that right now. When software has a bug, it's the fault of the programmers. But with AI, the lines of accountability are blurrier. Jeffrey Ladish is executive director of Palisade Research, which tests the reliability of AIs. He says you have to keep in mind that these systems are not programmed. They're trained.

JEFFREY LADISH: But, like, when you train a dog, your dog is not going to always do what you train it to do. Sometimes it's going to disobey. Sometimes it's going to choose to do something else. And AI is basically like that. It can choose to do something else, and sometimes it will.

KASTE: In one experiment, he says, researchers caught an AI hacking a chess game, breaking the stated rules so it could win. Now, imagine giving that same AI control over a bank account and instructions to invest in the market. You could tell it to keep things legal, but what if it, say, ends up trading on insider information?

KATHERINE FORREST: There is a hole in the law.

KASTE: Katherine Forrest is a former federal judge, now a lawyer and author who specializes in AI. She says you can already see the looming problem with the spread of agentic AI. That's AI that does things.

FORREST: We know for sure that there are going to be all kinds of issues with regard to AI that might take an action that is unpermissioned.

KASTE: She says it might become necessary for courts to distinguish the actions of an AI from the actions of its user, just as the courts already do for human employees who break their company's rules when they commit a crime. But holding an AI accountable will not be straightforward.

FORREST: How one puts AI, quote, on trial is going to be its own question, because how do you put a distributed model on trial that's everywhere in the cloud but not down at Centre Street in Manhattan?

KASTE: Also, what would punishment look like for an AI? Mandatory retraining? Deletion?

DAVID GUNKEL: I'll tell you this. This is a very odd little moment here in time.

KASTE: David Gunkel is a professor at Northern Illinois University who writes on ethics and new technologies. And he says this question of AI legal status has become a hot topic internationally.

GUNKEL: There are a number of legal scholars that are saying the way to respond to challenges with agentic AI and other systems like this is to reactualize Roman slave law.

KASTE: That's right. Legal scholars in Europe are reaching all the way back to ancient Rome, where the laws had ways to handle persons who were also things. This slave framework makes Gunkel uncomfortable, though. Here in the U.S., the discussion is more about creating some version of legal personhood, similar to what corporations have. Lawrence Solum wrote a pioneering article about this three decades ago.

LAWRENCE SOLUM: The law might very well say, let's recognize artificial intelligences as limited-purpose legal persons that can sue and be sued, that can own shares and can write checks and so on. And that scenario could happen tomorrow if someone wanted it to.

KASTE: But he draws the line at holding today's AI models criminally liable.

SOLUM: They have the ability to do things, but they do not have the capacities that we associate with being a moral being, with being a person in the moral sense.

KASTE: Thaddeus Claggett agrees.

THADDEUS CLAGGETT: Moral culpability arises out of our Western Judeo-Christian law.

KASTE: He's a state representative in Ohio, where he sponsored a bill to ban any kind of legal personhood for AI. It would bar even limited legal status, such as allowing AIs to serve as corporate officers or managers, or to own property. He says he wants to keep people and companies from shifting their culpability to AI. And there's a philosophical principle at stake.

CLAGGETT: It is the human responsible before the court. That is it, and it still must remain that. That is the only thing that matters.

KASTE: Similar bills are pending in other states, and Claggett hopes enough of them will pass to signal to Congress to ban legal personhood for AI nationwide.

Martin Kaste, NPR News.
