Interview Peter Kyle, the UK’s new Secretary of State for Science, Innovation and Technology, has been in America this week promoting British expertise in AI and other areas. He took the time to sit down with journalists on Friday to explain his plans.

On Thursday, Kyle met with tech giants in Seattle and he has now brought his message down south. He’s also promoting the UK’s AI Safety Institute, which is opening an offshoot in San Francisco staffed by British and American techies later this year.

Reporters: Welcome back to the US. How has your reception here been?

Kyle: I was here in February when I was a shadow Secretary of State preparing a program for government. I met many companies in Seattle and here, but that was a listening exercise. Every company back then spoke about the instability they saw in the UK – economically and politically. There were lots of conversations about energy supply, the price of electricity, connection to the grid, regulatory challenges, and issues of planning.

Those are issues which I then fed into our program for government. And now, three and a half months in, I’ve come back and said, “Look what we’ve done.” There is stability in our politics. There is stability in the way we’re managing the economy. We stood on a platform of a decade of national renewal, so we are signaling 10 years into the future on economic and social policy and the way that we’re managing the country.

A bill has been drafted. It will be introduced to Parliament soon, but already powers have been given to Secretaries of State to intervene and expedite investments into the UK’s infrastructure. Labs for life sciences and datacenters are now national planning priorities and will be expedited.

Reporters: What has the effect of Brexit been in terms of Americans’ perception of Britain?

Kyle: Brexit has only been mentioned once on this trip, and that was at a dinner yesterday by one of the people around the table.

Look at the Investment Summit we had last week: it raised £63 billion ($82 billion) worth of pledged investment into the UK, and £24.3 billion of that was directly AI related. Add on another £10 billion which is life sciences related, and you see that more than half of the total is related to the economy of the future.

What was pledged last week is more than double what the previous government got in its last Investment Summit, and as much was pledged in AI investment this year as the previous Summit raised in total. That shows that Britain is connected to the industries of the future and is open for business in the key areas that really matter in the global economy.

Reporters: California’s governor recently vetoed the state’s own AI bill. How do the UK and other governments view California as an influencer on AI policy?

Kyle: California does have an outsized influence. And California is a sizable economy in its own right, not just legislatively, but actually just the singularity of its powerful companies. The fact that I’m here for the second time this year, the fact that I’m going to be a regular visitor here, shows not just the power of these individual companies but the collective power of Seattle and Silicon Valley and San Francisco.

It shows the respect that governments should be showing companies that innovate on the scale that some of these companies are. I don’t want to be a Secretary of State that sits in their office thinking, “I can control things by legislating and regulating from Westminster,” because those days have gone when it comes to this area. We need to have a far more relationship-based approach to engaging with big tech.

What I’m doing in Britain is putting the Safety Institute onto a statutory footing, but I’m enshrining the voluntary code that has already been agreed by all of the frontier AI companies. I have tasked every regulator in the UK to do an assessment about how AI will impact the sectors that they regulate, and they must come up with plans to show that they are adapting to the potential impact into the future.

Why am I doing this? First, I don’t want to disrupt the regulatory environment and stifle innovation. But secondly, I want to create an environment where fast-evolving technologies can have a soft landing into societies.

We need to safely exploit all the potential of AI. We have to be very cognizant that this isn’t just Britain and America and Europe and some allied, like-minded, democratic Asian countries. If we don’t win this, if we don’t stay at the cutting edge of innovation, China will get there.

Reporters: Are you worried that regulation could jeopardize the relationships you have with the AI labs at present?

Kyle: I think that the relationship is getting deeper and more trusting. And I think the benefits of understanding safety at a very deep level, and for those companies to be able to adapt accordingly, are speaking for themselves. So it’s not disruptive to what we’re doing.

I think everybody understands this fundamental political challenge that we have and a fundamental sort of existential challenge that they have. If there is a model that makes its way into the public, and it leads to a widespread societal harm or damage to national security, someone’s gonna have to hold the can for that. I don’t think any government would survive the next election if harm emerged in society [from AI].

Reporters: We’re two weeks away from the presidential election. Do you have thoughts about how AI policy could differ between Kamala Harris and Donald Trump?

Kyle: As a government, we deeply respect the choices that the American people will make, and we’ll work with whatever administration emerges after the election. ®
