Commentary

The Challenge In Regulating AI

A few weeks ago, MediaPost’s Wendy Davis wrote a commentary on the Federal Trade Commission’s investigation of OpenAI. Of primary concern to the FTC was ChatGPT’s tendency to hallucinate. I found this out for myself when ChatGPT told some whoppers about who I was and what I’ve done in the past.

Davis wrote, “The inquiry comes as a growing chorus of voices -- including lawmakers, consumer advocates and at least one business group -- are pushing for regulations governing artificial intelligence. OpenAI has also been hit with lawsuits over copyright infringement, privacy and defamation.”

This highlights a core problem with trying to legislate AI: the U.S. is taking its existing laws and trying to apply them to a disruptive and unpredictable technology. Laws, by their nature, have to be specific, which means you have to be able to anticipate the circumstances in which they'd be applied. But how do you write laws for something unpredictable? All you can do is regulate what you know. And when it comes to predicting the future, legislators tend to be a pretty unimaginative bunch.

In the intro to a Legal Rebels podcast on the American Bar Association's website, Victor Li included this quote: "At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation. However, there are existing laws and regulations that touch upon certain aspects of AI, such as privacy, security and anti-discrimination."

The ironic thing was that the quote came from ChatGPT itself. But in this case, ChatGPT got it mostly right. The FTC is trying to use the laws at its disposal to corral OpenAI by playing a game of legal whack-a-mole: hammering things like privacy, intellectual property rights, defamation, deception and discrimination as they pop their heads up.

But that’s only addressing the problems the FTC can see. It’s like repainting the deck railings on the Titanic the day before it hit the iceberg. It’s not what you know that’s going to get you, it’s what you don’t know.

If you’re attacking ChatGPT’s tendency to fabricate reality, you’re probably tilting at the wrong windmill. This is a transitory bug. OpenAI benefits in no way from ChatGPT’s tendency to hallucinate. The company would much rather have a large language model that is usually truthful and accurate. You can bet they’re working on it. By the time the ponderous wheels of the U.S. legislative system get turned around and rolling in the right direction, chances are the bug will be fixed and there won’t really be anything to legislate against.

What we need before we start talking about legislation is something more fundamental. We need an established principle, a framework of understanding from which laws can be created as situations arise.

This is not the first time we’ve faced a technology that came packed with potential unintended consequences. In February 1975, some 140 people gathered at a conference center on California’s Monterey Peninsula to attempt to put a leash on genetic manipulation, particularly recombinant DNA engineering.

This group, made up mainly of biologists with a smattering of lawyers and physicians, established principle-based guidelines that took their name from the conference grounds where they met: the Asilomar Conference guidelines.

The guidelines were based on the level of risk involved in proposed experiments. The higher the risk, the greater the required precautions.

These guidelines were flexible enough to adapt as the science of genetic engineering evolved. They were one of the first applications of something called "the precautionary principle," which is just what it sounds like: if the future is uncertain, go forward slowly and cautiously.

While the U.S. is late to the AI legislation party, the European Union has been taking the lead. And if you look at the EU's first attempt at AI regulation, drafted in 2021, you'll see it has the precautionary principle written all over it. Like the Asilomar guidelines, it sets different rules for different risk levels. While U.S. attempts at legislation are mired in spotty specifics, the EU is establishing a universal framework that can adapt to the unexpected.

This is particularly important with AI, because it's an entirely different ballgame than genetic engineering was. Those leading the charge are for-profit companies, not scientists working in a lab.

OpenAI is intended as a platform that others will build on. It will move quickly, and new issues will pop up constantly. Unless regulators are incredibly nimble and quick to plug loopholes, they will constantly be playing catch-up.
