
'Age of Danger' explores potential risks because AI doesn't understand rules of war

LEILA FADEL, HOST:

The journalist Thom Shanker spent decades covering American wars and national security. He wrote for The New York Times. Now he has stepped away. And he tells our co-host, Steve Inskeep, that he's thinking about threats in the not-too-distant future.

THOM SHANKER: There are a lot of very scary things out there. And we make the case that for 20 years, this government focused on counterterrorism with a zoom-like focus. And for those 20 years, we ignored lots of rising threats. And they are now upon us. And we are really unprepared. The system is unprepared. The public is unprepared. We haven't thought about some of these things.

STEVE INSKEEP, HOST:

Shanker co-authored a book with Andrew Hoehn called "Age Of Danger." It's a catalogue of threats that might keep people up at night if only they knew. He says national security professionals warn about diseases designed to destroy American crops. They think about low-lying naval bases that may be underwater in a few decades thanks to climate change. They think about ways to counter the advanced weaponry of China. Shanker does not advocate a bigger military budget to counter these threats, but he does argue the government needs to make smarter use of the resources it has. He says a prime example is the danger of computers run by artificial intelligence.

SHANKER: Most of the public discussion of AI so far has been about, will it write my kid's homework? That's bad. Will it put law clerks out of a job? That's bad. Will it tell me to break up with my dog? That's bad. Will it compose symphonies - I don't know - if they're good symphonies. So those are real-world problems, Steve. But when you get into the national security space, it gets very, very scary what AI can do in autonomous weaponry that operates without a human in the kill chain.

INSKEEP: When you say autonomous weaponry, what do we mean, like a tank with no person that drives itself, finds its own target and shoots it?

SHANKER: It's already happening out there now. There's a military axiom that says speed kills. If you see first, if you assess first, if you decide first, if you act first, you have an incredible advantage. And this is already part of American military hardware, like the Patriot anti-missile batteries that we've given to Ukraine. With incoming missiles, you really don't have time for a human to get his iPad out and work out trajectories and all that. So they're programmed to respond without a human doing very much. It's called eyes on, hands off.

INSKEEP: Does a human still pull the trigger or press the button in that case?

SHANKER: Certainly can. Absolutely. Absolutely. But sometimes, if all of the data coming in indicates truly it's an adversary missile, it will respond. And here's where it gets scary. As weapons get faster, like hypersonics, when they can attack at network speed, like cyberattacks, humans simply cannot be involved in that. So you have to program, you have to put your best human intellectual power into these machines and hope that they respond accordingly. But as we know in the real world, humans make mistakes. Hospitals get blown up. Innocents get killed. How do we prevent that human error from going into a program that allows a machine to defend us at network speed, far faster than a human can?

INSKEEP: I'm thinking about the way the United States and Russia - or in another context, perhaps, the United States and China - have their militaries aimed at each other and prepared to respond proportionally to each other. In a worst-case scenario, a nuclear attack might be answered by a nuclear attack. Is it possible that through these incredibly fast computers, we could get into a cycle where our computers are shooting at each other and escalating a war within minutes or seconds?

SHANKER: That's not where we are now. But that, of course, is the concern not only of real-world strategists, but of screenplay writers, like "Dr. Strangelove," those sorts of things.

INSKEEP: I was going to ask you if you had seen "Dr. Strangelove." Clearly, you have.

SHANKER: You should ask me how many times I've seen "Dr. Strangelove."

INSKEEP: Let's describe - I don't think we're giving away too much - the machine that turns out to be the big reveal in "Dr. Strangelove." What is the doomsday machine?

SHANKER: Well, the Kremlin leader has ordered a machine created so that if the Soviet Union is ever attacked, the entire Soviet arsenal would be unleashed on the adversary. And in some ways, you can make the case that is a deterrent because no matter who attacks with one missile or 1,000, the response will be overwhelming.

(SOUNDBITE OF FILM, "DR. STRANGELOVE OR: HOW I LEARNED TO STOP WORRYING AND LOVE THE BOMB")

PETER SELLERS: (As Dr. Strangelove) Because of the automated and irrevocable decision-making process, which rules out human meddling, the doomsday machine is terrifying and simple to understand and completely credible and convincing.

GEORGE C SCOTT: (As General Turgidson) Gee, I wish we had one of them doomsday machines, Stainsey.

SHANKER: But the joke of the movie is they were going to announce it on the Soviet leader's birthday the following week. So the world doesn't know that this deterrent system is set up. And basically, Armageddon is assured.

INSKEEP: What's going to happen is there's going to be a random attack.

SHANKER: And the machine will respond, as programmed by humans. And the challenge today is, right now, most of the missiles fly over the pole. We have pretty good warning time. But as the Chinese in particular experiment with hypersonic weapons, we might not have the warning time. And there might someday be an argument to design systems that would respond autonomously to such a sneak hypersonic attack.

INSKEEP: When I think about the historic connections between the Pentagon, defense contractors and Silicon Valley and all the computing power that's in Silicon Valley, I would like to imagine that the United States is on top of this problem. Are they on top of this problem?

SHANKER: Some of the best minds are on top of it. And Andy Hoehn and I spoke to a number of people in the private sector and a number of people in the public sector, in government. And they really are aware of the problem. They're asking questions like, how do we design artificial intelligence that has limits, that understands the laws of war, that understands the rules of retaliation, that won't assign itself a mission that the humans don't like? But even people like Eric Schmidt, you know, the former CEO of Google, who's spending a lot of time and money in this exact space, spoke to us on the record. He's extremely worried about these questions.

INSKEEP: It seems to me there are two interrelated problems. One is that an adversary like China gets ahead of the United States and can defeat the United States. But the other is that some effort by the United States gets out of control and we destroy ourselves.

SHANKER: That is a concern. And that could be your next screenplay. And the problem is you're raising a problem, Steve, that nobody has an answer for. I mean, how does one design AI with real intelligence and compassion and rationality, because at the end of the day, it's just ones and zeros?

INSKEEP: Thom Shanker is co-author of the new book "Age Of Danger." Thanks so much.

SHANKER: It was an honor to be here, Steve. Thank you so much for having me.

(SOUNDBITE OF SONG, "WE'LL MEET AGAIN")

VERA LYNN: (Singing) We'll meet again, don't know where.

Transcript provided by NPR, Copyright NPR.