Insight from the UAE’s minister for A.I. on the tensions between the technology’s regulation and rollout



On a recent episode of Fortune’s Leadership Next podcast, co-host Alan Murray discusses the challenges of regulating artificial intelligence with Omar Al Olama, the United Arab Emirates’ first minister of state for artificial intelligence. As governments roll out the use of A.I., Al Olama says it must be “done responsibly: responsibly for current generations, responsibly for future generations.”

Also on the episode, Murray and co-host Michal Lev-Ram, who joins for the pre- and post-interview chats, play Fortune editor-in-chief Alyson Shontell’s recent interview with Arati Prabhakar, the director of the White House Office of Science and Technology Policy. The two also discuss A.I. regulation.

Listen to the episode or read the full transcript below.

Transcript

Alan Murray: Leadership Next is powered by the folks at Deloitte who, like me, are exploring the changing rules of business leadership and how CEOs are navigating this change.

Welcome to Leadership Next, the podcast about the changing rules of business leadership. I’m Alan Murray.

Michal Lev-Ram: And I’m Michal Lev-Ram. So, Alan, it is crazy to realize that it’s been actually more than a year, I think, since ChatGPT was introduced, or almost exactly a year. This technology is so fast moving. But I want to know, has it changed anything for you? Is your life any different since the emergence of this?

Murray: Well, my life is different, because every conversation with a business leader that I have, no matter what it’s supposed to be about, ends up talking about A.I. I mean, I just got back from two weeks of traveling to our conference in Abu Dhabi, and then our conference in Macau. The Macau conference was about design. But the most fascinating conversations were about A.I. Michal, there was the head of one of the big architectural firms there who was showing a tool that he now uses on a regular basis where he can draw, sketch a few lines on a piece of paper and say, I want a building that looks something like this, and put it into a generative A.I. machine that turns out a fully articulated picture of a building. I mean, it’s just amazing. I think the world has changed, mine included.

Lev-Ram: Yeah. No, it’s crazy. I feel like we’re busier than ever, but maybe it’s because we’re spending so much time talking about A.I. But another thing that’s changed, obviously, or that is changing as we speak, is the regulatory landscape. Lots going on on that front, really on a global level. And this actually gets us to today’s guest, because one country in the Middle East that’s taking a different approach to this problem is the United Arab Emirates, where we just had a big event. They want to make sure the government—and by extension, I suppose, the country itself—is preparing to embrace and to appropriately regulate A.I.

Murray: I’m not sure there’s any other country in the world that has been as forward leaning on A.I. as the United Arab Emirates has been. They’ve created the first university dedicated to A.I. We had the head of the university meet with us when we were there in Abu Dhabi. They’ve created a company that’s gotten some press lately, G42, that’s dedicated to A.I. And most significantly, for this podcast, they have the first minister [of state] for artificial intelligence. His name is Omar Al Olama. He’s 33 years old. Really interesting, smart, fast-thinking guy. And he’s who we’re talking to in this episode.

Lev-Ram: Well, he’s even younger than Sam Altman, if you can believe it. So he fits right in age-wise. I know the UAE is hoping that this role can help shape regulation around A.I. as we’ve been discussing, of course. But before His Excellency sat down to talk to you, Alan, he was on stage at our Fortune Global Forum summit, speaking with our colleague Geoff Colvin. And I loved his explanation of why he believes it’s so important to get A.I. regulation right.

[music]

Omar Al Olama: The Middle East was far ahead of the rest of the world during the golden age of the Middle East, from the year 813 to the year 1550. The reason for that advancement was technology.

Geoff Colvin: Right, right.

Al Olama: And some historians thought that the Middle East was at least 500 years ahead of the rest of the world, the reversal of civilizations. Today, the fact is very different. We are backwards. And if you ask yourself, why is that? It’s because we overregulated the technology, which was the printing press. It was adopted everywhere on Earth. The Middle East banned it for 200 years—to be specific, 189 years. Now there is a comparison that’s quite interesting here. Why was it banned? The calligraphers came to the sultan and they said to him, We’re going to lose our jobs, do something to protect us. So job loss protection. Very similar to A.I. The religious scholars said, People are going to print fake versions of the Qur’an and corrupt society. Misinformation, secondly, [just like] artificial intelligence. And the third, the top advisors of the sultan said, We actually do not know what this technology is going to do. Let us ban it, see what happens to other societies and then reconsider. Very similar to artificial intelligence. Fear of the unknown, fear of the ambiguity.

[music]

Murray: It’s really interesting, Michal. One other thing I want to share with our listeners about His Excellency Omar Al Olama. He was only 27 years old when he was appointed to be minister of state for A.I. I spent some time with him when I was there last summer. He’s a fascinating guy. You know, he’s got a little R&D area right below his office where he’s working on some projects of his own and is really determined. You know, the thing I asked him was, how can a little country like the United Arab Emirates hope to play in something like A.I., where you expect most of the big developments to be happening in the superpowers, whether it’s the United States or China? And he said he believes that by, first, focusing on it strategically and, second, developing talent and inviting talent from all over the world to come to the UAE, they can really make a difference.

Lev-Ram: Well, now that we both feel so much older than he is, let’s let our listeners hear your conversation with the world’s first Minister of State for Artificial Intelligence, His Excellency Omar Al Olama.

[music]

Murray: First of all, tell us what the job is. What does the minister of state for artificial intelligence of the UAE do?

Al Olama: So the minister of state for artificial intelligence for the UAE is the person who oversees the deployment of artificial intelligence across government departments, who coordinates these efforts across the different bodies to ensure that we are not constantly learning from zero. We’re always trying to adapt and improve on the previous case study from each other government department. The most important thing about this role is sometimes we deploy artificial intelligence in a certain vertical, and the spillover effect might actually affect people in other verticals or might affect people in the long term, negatively or positively. So you need to ensure that there is someone who has a broad view, who understands what this deployment would mean, and this will ensure that it is done responsibly: responsibly for current generations, responsibly for future generations. There is no ignorance in the decision-making process, because ignorance, I think, is a big issue that we need to deal with as governments with this technology.

Murray: That’s a big burden on you. So basically your job is to make sure that all the people in government who are making decisions that could be affected by artificial intelligence know what they’re doing.

Al Olama: Absolutely.

Murray: Are you there yet?

Al Olama: So what we did was we launched a program with the University of Oxford where we said we want people that are not going to sugarcoat the problems of the current state of affairs in the UAE and actually deal with the people that are working on it in a very professional manner. Over 400 government officials, director level and above, have gone through an eight-month course with Kellogg College and the University of Oxford, and they’ve become our A.I. experts within the government that are leading the charge. What’s also happening is they’re becoming the top advisors and aides for the ministers and for the leadership.

Murray: So what is it about A.I. that made this ministerial position so important? I assume there wasn’t a minister of state for social media or a minister of state for genetics or other technologies. Why is A.I. something that requires such high-level government attention?

Al Olama: If you look historically, many technological advancements have created government positions. So when we used to depend [inaudible] on coal for our energy demands, we’ve not had a minister of energy, any of us. When energy production became very important for economic activity and prosperity, ministers of energy were put in place to ensure that electricity was being produced in the right manner, being distributed in the right manner as well, and a disruption to the economy was minimal or negligible at best. Now the same thing has happened in aviation, the same thing has happened with ICT and telecommunications. Every time there’s a technology that touches every facet of life and has a huge impact on government, there’s a ministerial position that’s created for it. We need to ensure that as a government we are prepared, rather than being laggards that wait for others to prepare and then follow them.

Murray: Yeah, but what’s become clear to me over the last few days is that the UAE not only wants to be an educated user of this technology, it also wants to help develop this technology. And I want to ask you about that, because most of the big developments in A.I. (it takes a lot of compute power, it takes a lot of concentrated brain power) have happened in a few concentrated locations: the West Coast of the U.S., parts of China where some work is going on, maybe some in Toronto. What makes you think that the UAE can play in this game?

Al Olama: So I’d like to quote The Economist. They had an article that came out a few days ago, and the article said that the future of A.I. is going to be determined by three countries: the U.S., China and the UAE. If The Economist thinks of the UAE as a player that’s in the top three category, yeah, it just proves that we are moving and punching way above our weight.

Murray: But explain that. I mean, how can you possibly? The U.S. and China are the two largest economies in the world. The UAE is relatively small by comparison. How can you dance with the elephants? What gives you that?

Al Olama: That’s a great point, actually. I think in terms of compute per capita, what the UAE has access to is incomparable to anywhere else on Earth. So we have access to a lot of compute. Second is talent, which is right now moving to the UAE from east and west. The reason why talent is moving is because visa restrictions around the world are making people reconsider. And especially if you look at people with really good technical abilities in mathematics or physics and other important STEM fields, they’re choosing the UAE: from India, even some people from Latin America, from the Middle East, from Africa and other places as well. The third is the infrastructure of the UAE: cutting-edge connectivity, even the infrastructure to be able to deploy and leverage the data that is available here. And then finally, if you look at the datasets that are available in the UAE, they really are the best datasets when it comes to artificial intelligence. First, we have 200 nationalities living in the UAE, so it’s the least biased, because the demographics and the breakdown are very unique. So it is becoming a hub for the development of A.I. talent. Think about unicorns: the highest number of unicorns in the region, in the Middle East, is actually in the UAE. If you look at the biggest number of digital economy companies, they’re in the UAE, as well as high-quality talent. So just a number that I’d like to share. If we look at coding talent, and I think coding talent is different to A.I. talent, but look at that for a second here: we had 33,000 professionals working in the UAE in this domain. That was the number in 2021. Do you know what the number is today? Over 120,000.

Murray: Wow. So a fourfold increase in two years.

Al Olama: In two-and-a-half years.

Murray: And that’s in part because of policies designed to attract them here. I spent a few hours at the Mohamed bin Zayed University of Artificial Intelligence. You’re the first minister for artificial intelligence; that’s the first university solely dedicated to artificial intelligence. Talk about how that plays into the ecosystem you’re talking about.

Al Olama: So the Mohamed bin Zayed University of Artificial Intelligence, which I sit on the board of, is a focused university that’s looking at quality over quantity. You’re not going to see it graduate a thousand people, for example. The people that graduate from this university are the top of the top; they’re looking at postgraduate degrees, so master’s and PhD-level graduates. And what makes them very different is applied research rather than just theoretical research. What’s important is you have access to computers at the university, you have access to world-class talent, and you’re able to find colleagues from different places on Earth that are going to start a business with you, that are going to help improve your research and push you forward.

[music starts]

Murray: Jason Girzadas, the CEO of Deloitte US, is the sponsor of this podcast and joins me today. Welcome, Jason.

Jason Girzadas: Thank you, Alan. It’s great to be here.

Murray: Jason, everyone in business is talking about A.I. It clearly has the potential to dramatically disrupt almost every industry, but a lot of companies are struggling. What are some of the barriers that companies are facing in creating business value with A.I.?

Girzadas: Yeah, Alan, A.I. is on every client’s agenda. I think every CEO and board interaction and conversation that I’m a part of proves the fact that the promise of A.I. is widely held, and the hope is far and deep that it creates business value. But there’s challenges, to be sure. What we’ve seen is that the probability of success increases dramatically with strong executive sponsorship and leadership. There has to be a portfolio of investments around A.I., as well as a link between business ownership and technology leadership, to see the value of A.I.-related investments over time. We’re optimistic and confident that the value will result, but it will be a portfolio with both short-term opportunities for automation improvements around productivity and cost takeout, and then medium- and longer-term opportunities for business model innovation that are truly transformational. So this is a classic case where it won’t be a single approach that realizes value for A.I.

Murray: It sounds like you take it a step at a time.

Girzadas: I think a step at a time and also a portfolio recognizing that some investments will have short-term benefit where you can see immediate use cases creating financial and business impact, but longer-term opportunities to really invent different customer experiences, different business models, and ultimately create longer-term benefit that we can’t even fully appreciate at this point in time.

Murray: Jason, thanks for your perspective and thanks for sponsoring Leadership Next.

Girzadas: Thank you.

[music stops]

Murray: So I want to talk a little bit about the defensive part of this. People worry about A.I. You’ve already referred to bias. They worry about data protection, intellectual property protection. With generative A.I. we have this danger of hallucination and polluting the information ecosystem. Every government in the world is thinking about how do we regulate in order to minimize the downsides? You’re thinking about that. I know you’re talking to other countries. What’s the right answer? I mean, it’s a dangerous effort to try and regulate a technology that’s developing so rapidly. What’s the right approach for minimizing the downsides of A.I.?

Al Olama: So I was on a panel yesterday and they asked me, from one to ten, with ten being most optimistic and one being the most pessimistic, where do you stand? And I said, I stand at number five. And, this is my view, if any government official says that they are six or above or four or below, you should be concerned. Because as government officials, we need to be in the middle. Dead center. We need to not be blinded by the glitz and glamour of A.I. and the promises it’s going to bring, and at the same time not be so pessimistic that we overregulate and actually harm society by not letting them utilize this technology. So we don’t lose control of the balance.

I think there are a lot of challenges. The challenges are unique to the geographies. So depending on where you are, depending on the current job market and its breakdown in your country, the risks are going to be different. Depending as well on whether you’re a receiver of the technology or a developer of the technology, the challenges are different. And then finally, looking at the maturity of the technology as a whole is another issue that people completely ignore. People think that if the technology is, let’s say, progressing in this city, it’s going to continue to progress in that city. But the thing that we’ve learned from history is this is never the case. Things change. Certain developments, for example, excite people more. So we hit certain boundaries and it will continue to happen. What really needs to happen is more dialogue. That’s why the UAE is involved in every single dialogue, whether it’s on A.I. governance or on A.I. cooperation, with everyone. We think that there needs to be a global dialogue on this. And that’s why I sit on the UN Secretary-General’s High Level Advisory Board on Artificial Intelligence, as well as the World Economic Forum’s A.I. governance body, and many other bodies around the world.

Murray: There needs to be a global dialogue, but there also has to be some degree of global integration or agreement. Right? A.I. can’t be held within borders. It’s not going to recognize borders.

Al Olama: So I agree with that. I think if you look at A.I. and climate change, they’re very similar in that no one country can tackle the issue on its own. So you can be the most green, net-positive country on Earth; if someone else pollutes, you will be harmed. If it goes wrong somewhere, it’s going to cross borders and it’s going to come to you. Second, it’s a race against time. So the more we delay action with climate, the more the problem becomes bigger and harder to deal with. The same is true for artificial intelligence. And the third is it is an issue where we have finite expertise dealing with it. So even with climate, you have climate experts, a small number of people, on board or working on it. We have the corporates trying to do their bit, but if you actually think about it, there is so much data that we cannot actually crunch using the best standards of the world in real time. So we need to leverage technology. I think with A.I. we need to leverage technology to be able to govern the technology. We don’t have conventional governance mechanisms to regulate a technology that is completely different.

Murray: Yeah. Omar Al Olama, a fascinating conversation. Thank you for taking some time to talk with us.

Al Olama: Thank you for having me.

[music]

Lev-Ram: Alan, as I was listening to your interview, it made me think about how the U.S. government is approaching these same issues. And while we don’t have a minister of A.I., we do have Arati Prabhakar. She’s director of the White House Office of Science and Technology Policy and is President Biden’s key advisor on A.I.-related topics.

Murray: Yeah, that’s right, Michal. Before she took this role, Dr. Prabhakar spent some time at the Defense Department. She also spent many years in the private sector as an executive in Silicon Valley and as a venture capitalist. When she was appointed to the position in 2022, she received, get this, bipartisan support. That doesn’t happen often in Congress.

Lev-Ram: Wow. Wow. It doesn’t matter what she says. That’s already really impressive. We can, you know, unify over some things, apparently. Earlier this year, Fortune’s editor-in-chief, Alyson Shontell, had the chance to interview Arati. I thought it would be interesting to play just a really short snippet of that conversation, to hear just a little bit about how she’s thinking about other countries deploying A.I. and also her take on the tension between regulation and innovation. And she should know because, like we said, she’s been on both sides. So this is Fortune’s Alyson Shontell and Arati Prabhakar.

Alyson Shontell: So you have a very hard challenge ahead of you trying to wrap your arms around all of this, plan for something that we’ve never experienced before. Even if we’re able to create regulation here in the United States, how are you thinking about multinational? Like what kind of multinational agreements do we need to have in place? How worried are you about what kind of A.I. other places like China might build? How are you thinking about all that?

Arati Prabhakar: Yeah, the first place to start here is to just recognize that when you have this kind of powerful capability, every nation, every company, everyone on the planet is trying to use A.I. to create a future that reflects their values. I think we’re all pretty clear, we don’t want to live in a future that’s driven by technology that’s been shaped by authoritarian regimes. And because, look, the ingredients for this, data and computing and algorithms, are widely available around the world, it is absolutely a global race to get this right and to do this in a way that really reflects the values of democratic nations. So that means moving out here in the United States, but doing that in a way that’s lined up and deeply coordinated with our like-minded allies around the world, and that’s an integral part of the approach.

Shontell: And how are you thinking about regulation versus innovation when it comes to businesses building A.I., like the Anthropics? What kind of regulations should CEOs be aware of that you’re thinking through?

Prabhakar: Yeah, again, I think the place to start is just that so many of the harms that everyone’s identified from artificial intelligence are malfeasance for which we already have laws and regulations, and managing the ways that applications of A.I. can go wrong is a critical place to start. And so people should definitely expect, just as an example, the Equal Employment Opportunity Commission, the Consumer Financial Protection Bureau, the Department of Justice and the Federal Trade Commission all came out a couple of months ago with this statement just asserting that they are still enforcing against violations for which we have laws and regulations on the books, and that the fact that you’re using artificial intelligence technology does not get you off the hook. I think that’s a great example of the kind of beefing up of regulations to manage the harms of A.I. issues. I don’t see a fundamental tradeoff between regulation and innovation if you get regulation right. And that’s exactly what we are working on right now. That’s what’s actually going to unleash the power of this innovation, because if you step back and you look at where A.I. is today, there is great excitement, but there is also great trepidation. And it’s not until people really come to trust this technology that we’re going to get the full power of what A.I. can bring, and regulation is about putting the control points in place so that we can. It’s sort of like having brakes on a car. It actually lets you go faster once you know that you’re in control. And that’s really where I think we have to get to with this.

Shontell: You know, I would say the government has a reputation for wanting to do a lot of good, but also sometimes being a little slow. And this is the fastest-moving technology we have ever seen. I think when it comes to government regulation and social media, it wasn’t fast enough or wasn’t iterative enough. Even with the internet, not fast enough, not iterative enough. How are you thinking about it this time? How do we not make the same mistakes? How can you remove barriers to move as fast as you’re going to need to move to put what we need in place to be protected?

Prabhakar: Yeah, you know, Alyson, I’ve spent half of my professional life in the public sector and half in the private sector. And in my private sector life, of course, we need to move quickly when technology is moving. But the thing is, when I’m in a company, I definitely want regulatory clarity, but I actually don’t want regulations that change and twitch as fast as the technology moves. And so I think it’s really important to get clear regulations in place in ways that can be robust enough, but also adaptive enough as the technology changes. And I’ll come back to where I started, because I think being clear about the values that we’re driving to is essential, precisely because the fast pace of change makes for some very choppy waters right now. And when you’re in choppy waters, that’s when you have to be really clear about what it is you’re steering to. So staying focused on the idea of systems that have to be safe, systems that have to prove that they are effective and accurate and that they work in the ways that were intended. Being clear that bias cannot happen, that we can’t use systems until those issues have been wrung out. I think those are going to be the guideposts, so that as the technology changes we continue to steer to those core values.

[music]

Murray: So, Michal, this technology is moving so fast that it’s not going to be easy for any government agency, for anybody in government to get ahead of it. And the last thing you want them to do is do something that’s going to slow down the potential positive impacts. But I think everyone feels like they have to at least try.

Lev-Ram: Absolutely. And I think, you know, speaking from what I’m seeing in Silicon Valley, there are some real concerns about stifling innovation. You know, I think everybody who’s in the space wants things to keep moving forward and obviously realizes the need to be competitive, not just within the industry but between countries. So this is really important stuff. At the same time, you know, I’ve heard some tech leaders out here, Aaron Levie, for example, from Box, say that Biden’s executive order on A.I. is the gold standard for how to regulate this stuff. So who knows? Like anything else, people have different opinions and we’ll get to hear a lot more of them in the coming year, I’m sure.

Murray: Yeah and Michal, you and I’ve talked about this. The thing that distinguishes this generative A.I. from previous waves of technology is that if you look at the internet era, if you look at things like the iPhone and the iPad, they started with consumers and they took a few years to kind of find their way into business. But this round of technology has exploded in business right off the bat. And the good news in that is we’re going to have plenty to talk about next year on the next season of Leadership Next.

Lev-Ram: Absolutely. We will see everyone back here next week for our final episode of the season. Thank you for listening.

[Music starts.]

Murray: Leadership Next is edited by Nicole Vergara.

Lev-Ram: Our executive producer is Megan Arnold.

Murray: Our theme is by Jason Snell.

Lev-Ram: Leadership Next is a production of Fortune Media.

Murray: Leadership Next episodes are produced by Fortune’s editorial team. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte or its personnel. Nor does Deloitte advocate or endorse any individuals or entities featured on the episodes.


