Rotman Executive Summary

Taming the machine: Why regulating AI feels impossible (but we have to try anyways)

Episode Summary

"If AI didn’t offer such massive opportunities... we’d likely regulate it out of existence." On the latest episode of the Executive Summary, professor Dan Trefler explores the double-edged sword of artificial intelligence: Are the risks worth the rewards? Is bureaucratic red tape the solution — or just another hurdle? And how can the average citizen help fight the "great regulatory" battle?

Episode Notes

"If AI didn’t offer such massive opportunities... we’d likely regulate it out of existence." On the latest episode of the Executive Summary, professor Dan Trefler explores the double-edged sword of artificial intelligence: Are the risks worth the rewards? Is bureaucratic red tape the solution — or just another hurdle? And how can the average citizen help fight the "great regulatory" battle? 

Show notes:

[0:00] In 2023, tech leaders and academics signed an open letter calling for a pause on AI development until government regulation caught up… spoiler alert: it didn’t.

[0:48] Five years ago, it would have been impossible to imagine where AI development was going to be today…what will we see in the next five years? 

[1:36] Meet Dan Trefler, a professor of economics and policy at the Rotman School of Management. 

[2:29] Regulating “Artificial Intelligence” is impossible. 

[3:50] What’s the 2025 state of affairs when it comes to regulating uses of AI? 

[4:29] Dan sees one region of the world regulating the technology’s uses about as well as anyone can.

[7:12] What is the competition problem? 

[7:48] What is the coordination problem? 

[8:29] What happens when we have competition and coordination working together seamlessly? 

[9:46] So why can’t AI regulations follow the same successful model as car regulations? 

[10:19] What’s the interpretability problem? 

[11:18] California’s failed attempt at regulating AI companies is the perfect microcosm of the challenges we face. 

[12:45] Where is the last place governments should regulate? 

[13:49] To get a handle on things now, Dan wants us to focus on (1) extreme risks; 

[14:28] (2) learning from other successful regulatory bodies like the FDA;

[14:49] and (3) exploring regulatory incentives that encourage positive uses of the technology.

[15:33] And citizens can help wage the great AI regulatory battle with their own personal choices. 

[16:03] “I'm asking people to be much more forward looking than we normally tend to be. I want them to start anticipating risks which don't exist yet, because when they do come, as we've seen with past changes in AI, they will come in such a flurry that we won't be able to shovel our way out of our own homes. So let's start thinking hard about regulating things on a precautionary principle, not because they've happened, but because they might happen.”

Episode Transcription

Megan Haynes: In the late winter of 2023, 1,000 members of the tech and academic communities banded together and called for a six-month pause on all AI development to give the governments of the world a chance to create some regulatory guardrails. 

Artificial intelligence development was moving too fast, the signatories said, and companies should work with regulators to create “a robust AI governance system.” While AI might do absolutely wonderful things, the risks of this technology are simply too great to ignore. 

Fast forward six months… and not much changed. There were no new laws that reined in AI development. No regulations requiring tech companies to be transparent about their algorithms or testing. No fresh guardrails to protect people against bad actors. The technology continued to advance at a rapid pace, and regulation just couldn’t keep up.

Dan Trefler: It's hard to imagine that in 2017 the algorithm for large language models did not exist. Did not exist. You can't possibly imagine all the things that have created about $5 trillion of market cap in the last five years. Fast forward another five years, please don't ask me to say where AI is going to be. I have no idea. It might be doing only amazing things, like we just saw the Nobel Prize in chemistry for AI contributions to proteomics, which is mapping out all of the proteins in our body. The consequences for human health are staggeringly wonderful. But it could also mean bad actors or autonomous AIs doing absolutely horrific things to us too.

I'm Dan Trefler proudly of the Rotman School of Management at the University of Toronto. I'm a researcher working on international trade, but for many, many years now, I've been working with my colleagues at the Creative Destruction Lab trying to understand how we can apply AI to a large number of business applications.

MH: Dan has been repeatedly called on to talk to politicians and global leaders about the challenge of regulating AI. And while he believes governments and businesses need to act quickly to create safeguards around the technology, he’s not exactly optimistic. So what does it mean to regulate artificial intelligence, can it be done, and why does everyone need to care? 

Welcome to the Executive Summary, I’m Megan Haynes, editor of the Rotman Insights Hub. 

Musical interlude.   

DT:  I'm very, very frequently called upon to give a talk about how do we regulate AI internationally or even domestically? And my response is, I don't even know what you're talking about.

MH: That may sound like a daunting statement at the beginning of an entire episode about AI regulation, but it is a good place for us to start the conversation. Artificial intelligence is essentially just a way of processing lots and lots of data. It’s an algorithm and a computing process. 

DT: But it's not actually a use. It's not drug development. It's not improving podcasts. It's not any one thing. It's many things. So if you tell me I want to talk about regulating AI, without telling me specifically what use case are you engaged with, I have no idea how to answer that question.

MH:  Big picture, it’d be pretty much impossible to regulate “AI.” We need instead to think about the rules that govern how we use the technology. Want to put a new drug to market? How was machine learning used in the testing phase? Got a new social media platform? What are the rules around sharing the data that underpins the AI algorithm? Essentially, how do we shape the rules and directives around how different industries, processes and people use AI, and what are the outcomes we hope to achieve? So with that in mind, there is a spectrum around how we’re currently, in 2025, regulating AI uses. On one end…

DT: If we're in the United States, the answer is actually very, very simple and clear. We're doing nothing.

MH:  On the other end…

DT: The Chinese are super laser focused on just a very small number of uses. 

MH: China has a number of regulations that restrict or shape development and use of the technology within its borders. This includes everything from regulations that are designed to promote the use of AI at different levels of government, to mandates that all generative AI models - even those developed privately - uphold China’s socialist values.

DT:  In between doing absolutely nothing and being laser focused on things which are by and for the Communist Party of China sits another continent, which is doing it about as well as you possibly could. That's not to say perfectly, right? But as well as you could, and that continent is the European Union, and they have adopted a method which is called risk-based analysis.

MH: This is similar to how the EU approaches other regulations, and asks: once this thing, be it product or process, goes out into the world, what’s the absolute worst thing that could happen if it goes belly up?

DT:  And then there's sort of like a green, amber, red system. And if it poses potentially serious negative consequences — we're not saying that it will, but if it's possible that it'll pose these very large negative consequences — it needs to be regulated and possibly stopped.

MH: For example, the EU highlights unacceptable risks — such as the cognitive behavioural manipulation of vulnerable groups, like kids, or biometric identification. AI for these uses is banned, with few exceptions. There are also high-risk uses, like the integration of AI into products that fall under the union’s product safety legislation — like toys, airplanes and medical devices; or its use in specific areas like the operation of critical infrastructure. In these cases, the EU scrutinizes the tech closely — if the risks to people and infrastructure are low, it gets the green light.

If there are some concerns, maybe it gets okayed with some caveats or requirements for regular reporting. If there’s a chance the technology could pose a lot of harm, it likely won’t be approved.

It’s based on the idea that, say, an AI used to streamline air traffic control is likely to have a much more catastrophic impact if it fails than, say, an AI used in a streaming platform — and the EU treats them differently.

DT: All of that is based on a principle which the Europeans are very fond of, and I wish we in North American space were as fond of, it's called the precautionary principle, which means, the probability of this happening is pretty low, but if it does happen, the consequences are hugely negative. So on balance, we're going to do something about it. We're not going to wait for something hugely bad to happen, even if it's an unlikely event.

Musical interlude

MH: So why is it so challenging to rein in or regulate AI? Dan sees two separate problems that are really hampering our ability to put guardrails in place.  

DT: One is what I call the competition problem, and the other is what I call the coordination problem. The competition problem is, think about when Google invented the large language model and they say to themselves, we're going to sit on this technology until we really are confident that this can be released into the wild. Sam Altman wakes up one morning and says, I don't know where I put my glasses, but I'm seeing dollar signs in front of my eyes. Let's release this thing and see what happens.

MH: As organizations develop these new, potentially very lucrative technologies, the risk that another company will beat them to market means everyone is more likely to rush to get their products out the door, even if they don’t have complete faith in their products’ safety.

When competition is fierce, there’s less upside to being cautious, and more reason to pressure governments to leave the space unchecked by regulatory hurdles. On the other side, you have coordination, which is basically a government or governments working together to come up with a framework to regulate. 

DT: Coordination is something where I feel like we could make more progress. So what's the coordination problem? If a country or many countries got together and said, look, there are some basic principles about the regulation of AI that we all have to follow, then we can put some constraints on that competitive process. That is where the real action lies.

MH: And when you have competition and coordination working together — it can be great for the entire industry. 

DT: I want to cast my eyes back to the late 1970s when American cars were, quite frankly, rust buckets that were supposed to last for a couple of years, no seat belts. Failed crash test dummies. They were sardine cans in which we were meant to die. The Japanese come along — competition — and say, “Hey, we can build safer cars. We can build more long lasting cars. We can build cars that are more lightweight and conserve on fuel, and we can build these on much simplified frameworks that allow us to reduce cost dramatically.” Every single one of those competitive pressures came home to roost in America, and we changed the way we do cars. Competition is good.

MH: Historians found that these competitive factors were coupled with a pretty strong government response, which imposed new safety and environmental regulations on manufacturers in a pretty heavy-handed manner. 

DT: With strong product liability laws in place, companies have a strong incentive to produce cars that are not only safe, but that are safer than their previous cars, whether it's ICE or whether it's the current generation of autonomously controlled EVs.

MH: So, why can’t the same thing happen in AI? Well, let’s start with the obvious issue: 

DT: What happens in Canada will be led by the U.S., I think that's pretty clear to me. And the U.S. is politically dysfunctional.

MH: The political divide makes it incredibly difficult for Congress to pass any laws. Dan points to the internet as a perfect example of this coordination black hole. Since 1996, the U.S. has only successfully passed two laws restricting the internet — and those were to ban child pornography and ban the hosting of prostitution websites; laws that just barely squeaked through. Add to that the interpretability problem, and you’ve got a mess.

DT: We know that regulating AI is hard because of the interpretability problem. We don't know what it's thinking in its head. If we don't know what it's thinking in its head, it causes problems legally - how do I defend myself against a legal case? But it also means we don't know what reasoning it's going through. If we don't know what reasoning it's going through, we don't know what its goals are.

MH: If we don’t know its goals, we struggle to create laws that limit the pursuit of those goals. But Dan suggests this problem could partially be addressed through testing. 

DT:  We shock the model, and then ask: what came back? Well, that tells us one of two things. If what came back is some extraordinarily racist statement, for example, then we know this has to be shut down. Or if what came back was something that informs us about how the model is working, then it helps us with interpretability.

MH: But when the state of California tried to require that companies transparently test their AIs, alongside requiring companies to have kill switches and mitigate extreme risks, the Democratic governor vetoed the bill after the tech industry objected. In that same veto, he somewhat ironically said the state “cannot afford to wait for a major catastrophe to occur before taking action.” So, which is it?

DT: These companies claim that they're already doing the testing. Usually between one and three per cent of their R&D budget goes to testing. It's not just a regulatory burden. It also helps them understand their models. Even something as minor as that, we couldn't coordinate on, so we ended up without the law.

MH: California is almost the perfect microcosm of the challenge of competition and coordination. 

DT:  The industry claims they wanted it, and yet did not universally get behind it when push came to shove. So the industry, really frankly, doesn't want it. I find that deeply disturbing, and it's unclear to me that we couldn't do better than that.

Musical interlude

MH: So considering how rapidly the technology is changing, and how challenging the space is to regulate, should we all just adopt the U.S.’s head-in-the-sand, do-nothing strategy?

DT: I'm extremely sympathetic for two reasons. The last place you want to regulate is in a very vibrant, innovative startup ecology. Leave these people alone, let them innovate to their hearts’ content. That is absolutely the best thing you can do as a government. The trouble here is the things that I've already alluded to: there are some very, very serious downsides. I really do not want to regulate innovative entrepreneurs in the city out of business. On the other hand, I don't want to discover that our next set of elections is determined in Beijing, or that this creature that we never knew existed, which is an autonomous AI agent, has suddenly decided that it's time to, you know, figure out what the nuclear codes are.

If AI did not present us with such massive opportunities, we would not be having this conversation. We would regulate it out of existence. There is this real tension. 

MH:  So what’s the best path forward? Well, first off, Dan really, really wants North America to adopt a risk-based approach to evaluating AI.

DT: We want to focus on the extreme risks at this point, let’s focus on the extreme risks. Part of that regulation is to recognize the risks that companies like TikTok and BYD impose in terms of siphoning off data from us and the security issues involved. 

MH: So governments could impose restrictions around how foreign companies collect, analyze and use data from their citizens. Governments should look at whether AI is helping companies limit economic competition or create monopolies, i.e., antitrust concerns. And, Dan also says we should look to other regulators — like the FDA, for example — which have a track record of regulatory successes in their respective spaces, as models for how to approach AI.

DT:  Start thinking about what regulatory organizations we are proud of and ask: can we bring AI, at least partly, into that framework?

MH: And, governments also need to remember that laws prohibiting things aren’t the only ways to regulate industries. 

DT: There, I think the carrot of regulation is very useful. Should we give subsidies to companies that are developing educational software that solves very personalized needs, like needs of autistic children?

MH: Or perhaps it’s making AI available to people of all economic walks of life, providing discounts and subsidies for people in blue-collar jobs to play around with and get familiar with generative AIs. Or maybe it’s using AI to help big robots and humans work together more safely – something that can’t really happen now in big manufacturing centres. Essentially, how can government encourage these really positive uses of the technology? Finally, Dan reminds listeners that they themselves play a really important role in shaping how AI moves forward. 

DT: It's going to be a very difficult problem, but you as an individual, every time you're aware that you may be being lied to, every time you're aware that somebody's trying to make money off of you, and you prevent that from happening, you're fighting the great regulatory battle, and I would encourage you to continue doing that.

MH: Ultimately, he hopes that citizens, businesses and government leaders take the risks AI poses seriously. 

DT:  I'm asking people to be much more forward looking than we normally tend to be. I want them to start anticipating risks which don't exist yet, because when they do come, as we've seen with past changes in AI, they will come in such a flurry that we won't be able to shovel our way out of our own homes. So let's start thinking hard about regulating things on a precautionary principle, not because they've happened, but because they might happen.

Musical outro

MH: This has been Rotman Executive Summary, a podcast bringing you the latest insights and innovative thinking from Canada's leading business school. Special thanks to Professor Dan Trefler. 

Join us next month as we chat with associate professor Victor Couture about what’s really causing city congestion. This episode was written and produced by Megan Haynes. It was recorded by Dan Mazzotta, and edited by Avery Moore Kloss. For more innovative thinking, head over to the Rotman Insights Hub, and subscribe to this podcast on Spotify, Apple or Amazon. Thanks for tuning in.