War is a Laboratory for AI

On September 26th, 1983, Lieutenant Colonel Stanislav Petrov was in charge of the USSR's nuclear strike detection command center, at the height of the Cold War, when tensions between the USSR and the US were running high.

A few weeks earlier, the Soviets had shot down a South Korean passenger jet that had flown straight into Soviet airspace. One day in September, the computers in Petrov's bunker went on high alert. Five intercontinental ballistic missiles were headed straight towards the USSR...

The official procedure was clear: at the first indication of a US nuclear strike, the USSR would launch a counterstrike on US cities. The computers were unequivocal, the missiles were coming. But Petrov waited. He didn't inform his superiors. He knew tensions were high between the two countries, but it just didn't make sense to him that the US would strike in that particular way.

So he waited. The missiles didn't arrive, and when the incident was investigated, the Soviets discovered that a rare alignment of sunlight on high-altitude clouds and the satellites' orbits had triggered a false alarm. Petrov's hesitance saved the world from nuclear war, but would an AI-based system have made the same call?

Right now, militaries around the globe are investing heavily in AI weapons and drones. From Ukraine to Gaza, weapon systems with increasing levels of autonomous behavior are being used to kill people and destroy infrastructure. The development of fully autonomous weapons shows little sign of slowing down. So what does that mean for the future of warfare? What safeguards can we put up around these systems?

And is this runaway trend towards autonomous warfare inevitable or will nations come together and choose a different path? Today we're going to sit down with Paul Scharre to try to answer some of these questions. Paul is the author of two books on autonomous weapons. He's a former army ranger, and he helped the Department of Defense write a lot of its policy on the use of AI in weaponry. So this is a critical conversation and we're pleased to have an expert like Paul help us get a sense for how AI will change the way wars are fought. Thank you, Paul, for coming on the show.

Paul Scharre: Thank you. Thanks for having me.

Tristan Harris: So I want to start by talking about a recent trip that you made to Ukraine, which has become something of a laboratory for AI weapons. What did you see there?

Paul Scharre: I was in Ukraine a few weeks ago and met with government officials and people from the Ukrainian defense industry, anywhere from large state-owned defense companies all the way to a whole host of small startups. And the level of innovation in the technology and tactics in Ukraine is really unprecedented. The big development right now in Ukraine is autonomous terminal guidance. A lot of these drones are remotely controlled, they're piloted by a person, but because there are a lot of drones, there's also a lot of jamming going on, where people are jamming the communications link, because if the drone is remotely piloted, well, once you jam that communications link, then the drone's pretty useless. And so people are adding in more autonomy, particularly for the last mile, once a person has chosen a target, to complete that attack. But that is a stepping stone towards, in the future, more autonomous weapons.

Daniel Barcay: So all of a sudden there's this rapid advancement that we've seen in Ukraine, this proliferation of innovation around autonomous weaponry. Why do you think we're seeing that right now? What's special about right now? Is it about the conflict or is it about where we are in the technological development?

Paul Scharre: Yeah, I mean, that's a great question. I think it's both. So 10 years ago we just couldn't see the types of things that we're seeing in Ukraine now, but also war is a real accelerant of innovation. We're now over two years into this war, it's settled down into a long grinding war of attrition between Russia and Ukraine. And Ukraine doesn't have the people or the industrial production to go toe-to-toe with Russia and trade sort of person-for-person in this kind of war. So they've got to find ways to be clever, be innovative, and that's driving it as well. And I think it's just worth remembering that war is thankfully rare. And so most of the time in peacetime, militaries are coming up with things that they think will be valuable, but they don't get immediate feedback on what is going to work or is not going to work. They don't find out oftentimes until they fight a war. And so now when you have a longer war like we're seeing in Ukraine, on both sides, you can get really rapid feedback loops that can accelerate innovation very quickly.

Tristan Harris: So let's pause here and define some terms for our audience. We have not covered this topic before. What exactly do we mean when we talk about autonomous weaponry? What's the difference between remotely operated, semi-autonomous, fully autonomous, human in the loop, human on the loop? Give us a little bit of a tour of categories so we know what we're dealing with here.

Paul Scharre: Yeah, no, great question. These terms get thrown around a lot and they're not always used the same way. So most drones today are remotely piloted or remotely operated. A lot of times that looks like a person actually just manually maneuvering the drone and piloting it the way they would pilot an aircraft if they were on board the aircraft. Sometimes that remote operation is a little more removed. Some of the more advanced drones, like a Global Hawk for example, a very large, very expensive US military drone that flies up at 60,000 feet, very high altitude, are flown with a keyboard and a mouse, but they're still directed by humans where to go and what to do. We're starting to move towards more autonomy in different functions for drones, whether it's navigation or automated takeoff and landing, for example.

I would say this is analogous to what we're seeing in cars, where a lot of new cars today have a lot of autonomous features for specific types of driving functions, automatic braking, self-parking, intelligent cruise control, automatic lane keeping, and you're sort of bit by bit starting to incrementally take over some different functions of driving. Now for cars, there's a clear vision on the horizon of a point in time in the future, a fully autonomous car that won't even have a steering wheel. For militaries, at least for weapons, that vision of the future is one of an autonomous weapon that would still be built by humans and launched by humans to go out into the battle space to perform some task. But then, once launched, it would be on its own and, using its programming or some onboard AI, you know, some machine learning algorithm that it's been trained on, it would identify targets all on its own and then carry out the attack all by itself.

We're not quite there yet. There have been a couple of one-off examples historically, but certainly we don't see autonomous weapons in widespread use. But there are a lot of advancements that are taking us in that direction, and that seems to be the arc of the technology right now. And some of the terms you used, semi-autonomous for example, would be a weapon that has many of those functions, but a human is still choosing the target. And sometimes people use the term supervised autonomous, or a human on the loop, to mean a circumstance where the weapon could complete that engagement on its own, but a human could supervise it and could intervene if things go wrong. Just like if you had somebody sitting in the driver's seat of a Tesla on autopilot and they're, you know, supposed to be, at least in theory, hands on the wheel being attentive, they could jump in if something goes wrong. Those are all different possibilities for autonomy as well in weapon systems.

Daniel Barcay: Adding to that, there seem to be two different kinds of narratives around weapon autonomy. One narrative is this precision narrative that says if these things are guided by a kind of calculus, then you don't have some of the things that make war awful: people who decompensate emotionally and start targeting the wrong people, you know, long nights of low sleep and sleep-induced poor performance that results in people dying. The narrative on the other side is that this kind of cold, calculating, human-not-in-the-loop approach might lead to the normalization of casualties and the normalization of violence in a way that is unchecked by some of the human instincts to avoid it. What do you think about these two narratives?

Paul Scharre: Well, I think you captured very well the two arguments that are out there, sort of in favor of autonomous weapons or opposed to them. Of course there's been a movement of a number of different humanitarian groups in several countries opposed to autonomous weapons and calling for a preemptive ban on them before we see them built and used in a widespread fashion. But others have said that they could be more precise and more humane over time. I think there's actually validity to both of those, and it's possible to envision a future where both of those things become true: that there could be some conflicts, when you have militaries that care a lot about the rule of law and avoiding civilian casualties, where in some settings autonomous weapons might be more beneficial. They might be not just more militarily effective, but also more effective in avoiding civilian casualties.

There might be other settings where they become a slippery slope towards people broadening the aperture of who's targeted leading to civilian casualties, or they could lead to accidents, to situations where maybe it's not an intentional use of an autonomous weapon in a bad way. It's not a war crime, but the weapon makes a mistake and certainly, depending on the weapon and how it's used, there could be a lot of civilian casualties. So I think that actually we could end up in a future where both of those visions become true. And then the question is how do we approach this technology in a way that's thoughtful in terms of how we use it and how we govern it and regulate it to avoid some of the worst harms?

Daniel Barcay: It may be worth taking a beat here just to back up and say that the incentives, the reasons why militaries are engaging in the development of autonomous weaponry, aren't even necessarily related to the precision of the strike. The incentives for militaries seem multifactorial, from easier logistics to faster counter-response. Can you talk for a second about why it is that militaries are rushing headlong into this technology?

Paul Scharre: Sure. So maybe it's worth unpacking here the difference between, for example, autonomous weapons, so a weapon system that itself would go out and attack targets on its own, versus the use of autonomy or automation or AI more broadly across the military space. And we're seeing that militaries around the world are very interested in AI and automation for a whole wide variety of tasks, for maintenance and logistics and personnel management, for the same reasons that a lot of industries are, because it could improve efficiencies and save money and reduce personnel requirements and make things more effective. And, you know, most of what militaries do is not actually the fighting. Militaries talk about tooth to tail, sort of the tooth being the fighting component of the military and the tail being everything else. Usually it's, you know, seven or nine or 10 times as many people and dollars spent on all of the support functions.

And there's enormous opportunity for AI in those things. And some of those things, like personnel applications, raise the same kinds of concerns about bias in hiring and promotion that you might get in other fields as well. In the autonomous weapon space, there's a lot of value in adding AI. Could you make more precise decisions if you have image classifiers that help identify objects? Sure, there's value in that. I guess one question is, what's the value in taking away the human? Because we know that despite all of the amazing things that AI can do, there are still lots of ways in which humans add a lot of value, particularly in terms of understanding context and in novel situations where the AI system may not perform as well.

And there are two big reasons to take the human away. One would be speed: if there's a circumstance where you need immediate reaction time, just like the value of automation in automatic braking in a car, for example, there are going to be these places in warfare where split-second reaction times are really valuable. And the other one is if the communications link with the controller of a drone is lost, as we started talking about. But I do think there's a lot of value in humans, and militaries are going to find that they're going to want to keep humans in the loop whenever they can, whenever that's feasible for them.

Tristan Harris: Paul, this is a great moment to bring in a story from when you were an army ranger in Afghanistan. You mentioned in your first book there was an incident involving a shepherd girl, which shaped a lot of your thinking about human decision making and the importance of context. Could you tell us that story?

Paul Scharre: Sure, absolutely. So there was an incident when I was an army ranger. I was on a sniper team and we were up on the Afghanistan-Pakistan border. We'd infiltrated at night and we were setting up a hide site where we were going to watch for insurgents coming across the border. This turns out to be, as an aside, an insane task because the Afghanistan-Pakistan border is massive and unmarked and mountainous, so it's very much like, sort of, a drop in an ocean. But in any case, that was the mission. So we hiked up this mountain at night, and when the sun came up, we were very exposed and there was not a lot of vegetation in the area. There were about eight of us piled behind a couple of rocks. Very quickly this farmer came out into his fields and he spotted us. So we knew people were coming to get us, and we hunkered down.

And what we did not expect is what they did next, which was they sent a little girl to scout out our position. She was maybe five or six, she had a couple of goats in tow, I think as cover so that she was ostensibly herding goats, but it was pretty clear that she was there to watch us. She was not super sneaky, to be honest. So she walked this long slow circle around us and she stared at us and we stared back at her, and we heard what we later realized was the tripping of a radio that she had on her, and she was reporting back information about us. So we watched her for a while and then she left, and after that some Taliban fighters did come to attack us, so we took care of them. And then the gunfight that ensued sort of brought out the whole valley, so we had to leave.

But afterwards we were talking about how we would deal with a similar situation if we came across somebody and we didn't know what they were, they looked like maybe a goat herder, but we didn't know if maybe they had a radio or something. Well, nobody suggested the idea of shooting this little girl. Like, that wasn't a topic that was raised, and that certainly would not have been consistent with the values that I was raised with or what we were taught in the army. But what's interesting is that under the law of war, that would've been legal. The law of war doesn't set an age for combatants. So by scouting for the enemy, she was participating in hostilities the same way as if she'd been an eighteen-year-old male doing that same task. So if you programmed a robot to perfectly comply with the law of war, it would've shot this little girl.

Now, I think that would be morally wrong, even if there could be some legal justification for it. But it does raise the question, how would you program a robot to know the difference between what is legal and what is right? How would you even write down those rules, and how would it know to understand the context of what it's seeing? It just drives home for me the significance of these kinds of decisions. You know, in this instance, the fate of the war did not turn on that moment, but it certainly meant a lot to that little girl, and to us, to make sure that we were doing the right thing. And machines may not always know what that is.

Daniel Barcay: I really want to take a moment to zoom in here, because what you're pointing out is right on, which is the ambivalence that we all carry around taking some of these decisions and making them procedural. And, you know, for people outside the military, forget about autonomy for a second, even talking about concepts like acceptable levels of collateral damage, or acceptable levels of people killed who you don't want to kill, is hard. In a way this isn't specific to the military. Like, doctors will talk about what is the acceptable level of death in patients from a specific intervention.

And so it's hard to talk about war without talking about how difficult it is to take some of these incredibly deep human intuitions and human moral moments and encode them into our society. And just as one quick aside, when I was in college, the trolley problem was this philosophy experiment around, you know, trying to decide the boundaries of different meta-ethical theories. It was useless. Now fast-forward 20 years, and we're having to program this into our autonomous vehicles to decide who to kill in a case where an accident is unavoidable. And so this isn't just about war, it's about how much do we cede control to things that are programmed, and how capacious can those programs be around our morals and our ethics?

Tristan Harris: Also, can we come up with the vocabulary of philosophical distinctions as fast as we need them? I mean, one of the things you write about, Paul, is that our previous laws of war don't account for drones and new kinds of automated submarines. And basically the laws and the categories that we've been guided by thus far are constantly getting outdated and undermined by technology inventing millions of new categories underneath them. And so what that forces us to do is look at the spirit of those laws and then reinterpret them. But kind of the meta challenge that we talk about in our work at Center for Humane Technology is that our 18th century institutions aren't able to articulate the new distinctions as fast as the technology requires of them. To put it in Nick Bostrom's terms, AI is like philosophy on a deadline: we have these urgent philosophical questions, and now we have a deadline to actually answer them, because we are instrumenting our society with more AI.

Daniel Barcay: So with all that in place, I want to ask you: what are some of the actual philosophical questions around war that, as an expert in the automation of warfare and the automation of violence, you think we need to be figuring out?

Paul Scharre: Yeah, I mean, it's a great question. There are several, and I think some of the challenges look very similar to what people are facing in other industries and professions. Certainly there's a class of problems that is sort of, okay, we're having to task a machine to perform something that humans used to do, and now the rules that were implicit for humans, we have to write down, as you explained for vehicles, for example. In some cases, maybe those rules weren't written down just because we trusted human judgment to figure it out. In some cases, maybe in the case of drivers, human reflexes aren't even good enough to be making really deliberate, conscious decisions in the middle of a crash, but now we're going to have to write down those rules. So that's one set of challenges, and that exists certainly in the military space as well.

There's a sort of, you know, additional problem of just trying to figure out what are the tasks that we should be automating, and in what context? And of course one of the challenges there is that that line's going to continue moving over time as the machines keep improving. And I think, to some of the things that you were saying, humans are much better at understanding context for decisions. So one of the ways that I think about this, at least in the military space, is if there's a type of task where there is a clear metric for better performance, and either we have good data on what good performance looks like or we can generate that data, that's probably the kind of task where we can train a machine to do it, whether it's landing an aircraft on an aircraft carrier or some other kind of skill. Aiming a rifle properly is a great example of this. Like, if you choose a target, we want the bullet to hit the target, and missing the target is bad, whether the machine is doing it or the person.

Now, those are the kinds of things where we probably want to lean into automation once it's reliable enough and we can get there. But there are a lot of things where there isn't a clear right answer, and it depends a lot on context, what we want to think of as judgment. So for example, you know, if you have an image from an infrared camera, or a video at night of a person, and we don't know what the person's holding, are they holding a rifle in their hands or a rake in their hands, there's a right or wrong answer to that. We could probably train image classifiers to do a better job than humans if we have enough data. But then go to the next question, which is: is that person an enemy combatant?

Well, that's actually a lot trickier, and that might depend a lot on context: what were they doing a minute ago or 10 minutes ago? What's their historical network, or what are the sort of circumstances they're in? It could be that they're holding an innocuous object like a shovel, but they're digging a hole for a roadside bomb and they are participating in hostilities, and that's really hard for machines right now. And so those are the kinds of things where I think we're going to need human judgment for the foreseeable future, and those are the things that we want to hold onto. So I think those are the kinds of problems that are going to be challenging as we try to figure out where we are comfortable using this technology.

Daniel Barcay: A lot of this hinges on your sort of innate view of the reliability of a system like this, right? Because on one hand, if you treat the machine as kind of a clunky thing that has okay friend-or-foe recognition, but doesn't have the subtlety that a human has on the battlefield to make these split-second decisions, that's a very pro-humanistic view, and of course we should wait. On the other hand, you have this sort of dystopian version of this where you have poorly trained eighteen-year-olds who are under-slept, who have emotionally decompensated in the field, making decisions that perhaps a machine should have overseen. And so I at least have this profound ambivalence over this question, and it depends not only on the precision and how good we think these machines are at making these decisions, but also on what ethical principles they get created under and how those get eroded.

Paul Scharre: Well, and I think you make a great point that's really important, particularly in the wartime context, when we think about things like autonomous weapons. To put it in perspective, what is the baseline for human performance? It's not always great, right? Humans commit war crimes, humans make mistakes, humans do terrible, terrible things. And so sometimes when I hear discussions about autonomous weapons, I'll hear people sort of putting what humans do up on some pedestal, as though it's this pristine way of people fighting, where, you know, back in the day, people would look each other in the eye and appreciate their humanity before, you know, killing each other with battle-axes or swords or something.

And it's like, that's a really unrealistic depiction of what's going on. So we need to be realistic about what that baseline is, so then we can ask, okay, as the technology's coming along, will it be improving things? The flip side of that is I'll often hear, sometimes in the autonomous weapons debates, people sort of painting this vision of people using technology in the most perfect way, where everyone's careful and thoughtful, and the reality is, when we look around the world, we do see a lot of atrocities and civilian casualties. And in some cases, if countries aren't trying to be careful, technology is not going to help. It actually might make things worse.

Tristan Harris: These are big complex questions. You've had experience in the military and worked at the Pentagon. Are these kinds of conversations happening inside the US defense establishment?

Paul Scharre: So look, I'll admit that I'm biased here. I've been out of the Pentagon for a decade now, but I helped lead the working group that drafted the Pentagon's first policy on the role of autonomy in weapons way back in 2012. So they were, you know, fairly ahead of the curve on a lot of these issues in terms of thinking through these challenges. The current policy that's in place, which was updated last year, is fairly flexible, in that it lays out some categories of things that the military has done in the past and has good familiarity with, that are fine to do. And for anything that's sort of new, it creates a process for bringing together people from different parts of the military community, lawyers, policy professionals, military leaders, engineers, to think through some of these challenges that we're discussing when they have a practical weapon system and people are saying, well, can we build this thing? Can we deploy this thing? Is it safe? Is it going to be appropriate?

That's mostly what the Pentagon policy does: it creates a process for doing that. I think it is something that they're being really thoughtful about. I think that the robust debate that we have publicly about military AI really helps press the Pentagon to be thoughtful. And one of the best things, ironically enough, and the military wasn't happy about this at the time, was Google's decision not to continue to work on Project Maven, which wasn't about autonomous weapons. It was just about being involved in AI to support the military overall.

Tristan Harris: Just a note here for listeners: Project Maven is the name of the Pentagon's umbrella initiative to bring Google's AI and machine learning into their targeting systems. It started back in 2017, and it sparked a lot of internal discontent within Google as well as a very public staff protest letter.

Paul Scharre: It forced the military, I think for the first time, to think about, oh, we really need to be able to articulate to the broader scientific and technical community in America how we are going to approach this technology. And that led then to the DoD's AI ethics principles, which they developed in partnership with the broader, sort of, civilian tech community, getting feedback from them. The Defense Department has continued to refine their policies on AI since then, getting more granular, right? And this is a challenge with all of these things: how do you go from these lofty ethics principles to something practical that actually shapes what you're doing? But I think they're doing that.

Lately a lot of these AI concepts are just becoming more real. The Secretary of the Air Force, Frank Kendall, recently, as a bit of a stunt, flew in an F-16 fighter jet that was being piloted by an AI agent doing simulated dogfighting, but not in a simulator, out in the real world, flying a jet around. So that's the state of the technology now; it's coming along pretty quickly. I'm comfortable with where the US military is. I'm a lot less comfortable with where competitors like China and Russia are, where we don't have the same degree of transparency. AI technology is very global, and we don't really know what those countries are doing, and I certainly don't have the same level of confidence in their ethical approach to this technology.

Tristan Harris: So looking out at especially the recent conflicts and Russia's use of autonomous weapons in Ukraine, which is increasingly the sort of laboratory for, you know, innovating and iterating all these different techniques and strategies, what worries you about how other states, like China and Russia, or non-state actors are going to use these autonomous weapons?

Paul Scharre: Yeah, so I mean, look, AI is a very global technology. It's very democratized, very widely available, and we're seeing that in a lot of the innovation in not just the Russia-Ukraine war, but also in other conflicts, in Nagorno-Karabakh and Libya, and ISIS had a small drone army a few years ago that they were using in Syria and Iraq. So there's no question that we're going to see lots of countries, not just advanced militaries, and non-state groups using this technology. I think what worries me is that not everyone is going to be thoughtful about avoiding civilian casualties, about complying with the law of war. There have just been some recent reports about Russia using chemical weapons in Ukraine. It's not up for debate whether chemical weapons are legal in war; there's a global ban on chemical weapons. There are still some occasional uses by rogue dictators, Saddam Hussein had used chemical weapons, and Bashar al-Assad in Syria.

So, you know, that doesn't give you a lot of comfort that they're going to approach this technology in a way that's compliant with the law of war. And, you know, similarly with China, there's not a lot of clarity about how China is approaching the technology. Now, in the conversations that I have with Chinese scholars on this issue, there's a notable difference, in that with the US military, I hear a lot of discussions about the law and ethics and morals, and people maybe aren't sure about what to do in the future and what right looks like, but that's very much the frame in which they're approaching this technology, that we need to ensure that we're being legal and moral and ethical about it.

I don't hear any of those things when I talk to Chinese counterparts. They are worried about control and they are worried about keeping humans in control. So it's not as simple as they're going to just automate everything. They're very intensely worried about political control and making sure that their political leadership has tight control over military operations, but the law doesn't have the same salience within the Chinese military. And so that does concern me in terms of where we see the technology going forward.

Daniel Barcay: Building on that, so far we've sort of been talking about the one-sided ethics of, is it okay to shoot the little girl? Is it okay to do that? But one of the scary parts of this is when both sides begin using automation and the tempo begins to intrinsically outpace humans' ability to control it, because the decisions are made so quickly. Can you talk a little bit about what you call hyperwar, this sort of scaling of warfare into these inhumane timescales?

Paul Scharre: Yeah, I mean, this is, I think, the big worry in the long run. You're right, these are not simply decisions that one military makes in a vacuum. It's a competitive environment, and ultimately militaries want to field forces that are going to win on a battlefield, and if they lose a war, the consequences can be catastrophic for that nation. You know, certainly we see in Ukraine, for example, that that country is fighting for its existence against Russia. And so one of the concerns is that you begin to see this compression of decision cycles, of the targeting cycle where people are identifying targets and making a decision, what is sometimes called the OODA loop in warfare, the observe, orient, decide, act loop, where people are sort of understanding the battle space and then making a decision and then acting on it. You know, for one person, AI can accelerate components of that and actually buy a human more time to make decisions, right?

So if you can compress parts of that loop that are easy for automation to do, you can expand more space for humans, if you are the only one doing this. But when your competitor is doing it, they're accelerating their time cycles too. And now you get into this dynamic where everyone's just having to make decisions in split seconds. We've seen this in stock trading. This is not a theoretical concept. We've seen this whole domain of high-frequency trading emerge, where algorithms are making trades in milliseconds, at superhuman speeds, where humans could never be in the loop for those kinds of trades.

And then we've seen accidents like flash crashes as a result of that, in part because of high-frequency trading and other factors too, just these sort of weird interactions among algorithms, because of course you're not going to share with your competitor exactly how your algorithm works, whether you're in finance or in warfare. What's concerning to me is that the way financial regulators have dealt with this problem is they've installed circuit breakers to take stocks offline if the price moves too quickly, but that doesn't exist in warfare. Right? There's no referee to call time out in war if things start to get out of control. So how do you then maintain human control over war when war is being fought at superhuman speeds?

Tristan Harris: I think this is just the heart of the conversation when you push this whole conversation to its extreme. I mean, each military wants to speed up its OODA loop, its observe, orient, decide and act loop. John Boyd from the Air Force came up with this concept, and basically you're only as good as how quickly and accurately you're able to update your OODA loop. And as militaries build in autonomy with the incentive of tightening that decision-making chain, tightening their logistics chain, tightening their targeting chain, tightening their execution chain, they have that incentive. And the more that they do that, the more their competitors do that. And even if they believe, or are paranoid, that their competitors might do that, that's why, even though we say we don't want these weapons to be built, we can't guarantee the other guy's not going to build them. And so we keep accelerating and building them ourselves.

And it struck me in thinking about this that the concept of mutually assured destruction in nuclear war was a critical concept to create essentially something that would inhibit this runaway escalation, because we basically said as soon as one nuke goes off, it's going to create an exchange, a nuclear exchange, that will basically create this omni lose-lose scenario. And what worries, I think, so many people about drones and autonomous weapons is the idea that it's kind of unclear what would happen.

And the phrase that came to my mind when reading your work, Paul, was mutually assured loss of control: that were we to sort of hit go on, okay, we think we're being attacked by China, hit go, and then all the autonomous systems just go, well, then they're going to set their systems to just fully go, and both parties are going to get into a runaway escalatory loop, and there isn't going to be any control. And I'm just trying to think about what are the concepts that we need to prevent what we all don't want to happen, which is this kind of runaway, omni lose-lose scenario that's more ambiguous with smaller-scale weaponry that's autonomous, versus the large-scale nuclear situation.

Paul Scharre: Yeah, I mean, I think that's exactly the right question, and that's the challenge that we face. I don't think it's a today problem, but it's coming as we see militaries add more and more AI and automation. Some Chinese scholars have hypothesized this idea of a singularity on the battlefield in the future, where the pace of AI-driven action exceeds humans' ability to respond, and you effectively have this situation that you're describing, where militaries have to turn over the keys to machines in order to remain effective. But then how do you maintain control of a war? How do you end wars if the war is being fought at superhuman speeds? And then, if there are accidents, or if these systems begin to escalate in ways that maybe you don't want them to, you could have a limited war that begins to spiral out of control.

Tristan Harris: It seems like one of the paradoxes here is that, you know, human judgment is fallible, talk about the eighteen-year-olds who haven't slept and who are on the battlefield, and all of the mistakes that are going to get made in that environment. But then there are also the cases where human judgment is sort of the thing that frankly has saved us, because we wouldn't be here but for the fact that that human judgment happened. And, you know, I think about autonomous weapons being used by totalitarian states or dictatorships, where if you think about police officers or national guard who are ordered to fire on their own citizens, there's something about the native human moral intuition: these are my fellow countrymen, I'm not going to fire on my own fellow human beings. How should we be thinking about that?

Paul Scharre: Well, I think it's a very real concern, and it's one that people have raised in terms of thinking about autonomous weapons. Now, we've been talking mostly about autonomous weapons in a wartime context, but this domestic policing context that you raise is also very significant, because we can see historical examples, like the fall of the Eastern Bloc at the end of the Cold War, where that sort of ability for soldiers to lay down their weapons, to say, I'm not going to fire on my fellow citizens, is often the last check on a dictator's repressive power.

And when you take those humans away and it's robots effectively, and it may not look like humanoid robots, it could be robotic vehicles or stationary guns that are controlled by autonomy, or even just remotely controlled but, by virtue of technology, by a much smaller number of people, then you sort of take away that ability for ordinary people to say, I'm not going to do this, and you concentrate ever more power in the hands of a small number of people, a dictator and those surrounding him. So I think that's a big concern, and that, you know, sort of suggests that maybe some kind of regulation of this technology is going to be beneficial, so that we can avoid that kind of future.

Daniel Barcay: Let's talk about nukes. You talk a lot about the parallels with nuclear weaponry and the creeping automation around decision-making associated with nuclear systems, including Russia's dead hand system to launch counterstrikes. Can you talk a little bit about automation within the nuclear context?

Paul Scharre: Yeah, so one of the interesting things about this is when you start thinking about, okay, where is it appropriate in the military to use AI and automation, I think the first place that people go is, well, we shouldn't use it for nuclear weapons. That seems like an easy one. We shouldn't do that. Now, the crazy thing is we actually have a fair amount of automation in nuclear command and control already, and we have had for decades, throughout the Cold War, in both the US and the Soviet Union, to help speed up elements of processing. So for example, if the president were to make a decision to use nuclear weapons, there are elements of automation in carrying those orders out to people to make sure that they're executed correctly. Now, a lot of that's human driven, but there are going to be places where militaries do start to use AI that touch on things like intelligence collection or early warning, or parts of automation in executing decisions, for example.

But that's a place where we want to get it right. If you can use the technology in ways that help make sure the information people are getting is more accurate, well, that's good, that's valuable. We want to do that. If we can reduce the number of false alarms that come in, for example, that's valuable. But this is a place where AI's unreliability is a real concern. Now, the United States Defense Department has an explicit policy that humans will always be in the loop for any decisions pertaining to the decision to use nuclear weapons, or to executing the president's decision to use nuclear weapons. And that's, I think, really foundational.

The UK, the United Kingdom, has come out with a similar policy, but we've not seen that from all nuclear-armed states, and we haven't heard anything from Russia and China, for example, or other nuclear powers. But as we're thinking about how to approach this, it seems like a low bar to set, that we can agree, okay, humans should be in the loop here. And I think it would be important to set that expectation internationally, that people are going to be responsible for how they use this technology as it relates to nuclear weapons.

Tristan Harris: Those commitments from the US and the UK only came recently, right, in 2022?

Paul Scharre: That's right. That came out in the US Nuclear Posture Review in 2022, and in roughly the same timeframe from the UK as well.

Tristan Harris: It's been reported that Russia actually wants to automate the entire kill chain with nukes. Is that right?

Paul Scharre: So Russia has, well, they've done a bunch of things that from a US defense analyst's standpoint generally seem kind of crazy. One of them is that during the Cold War, the Soviet Union had built, in the eighties, a semi-automated dead hand system called Perimeter. And the way this worked was it would have a series of sensors across the Soviet Union that were detecting seismic activity, light flashes, other things intended to detect a nuclear detonation on Soviet soil. Now, once the system was activated, once someone had turned it on, if it detected these nuclear detonations, it would wait a predetermined amount of time for some kind of signal from higher authorities. If there was no signal, presumably because Soviet command had been wiped out, it would transfer launch authority from Soviet high command to a relatively junior officer in a protected bunker.

Now, there was still a human in the loop, but it would basically bypass the normal chain of command. Even crazier, the Soviets never told the Americans about this. It never came out until after the Cold War, sort of violating the Dr. Strangelove rule: if you make a doomsday device, tell the other people you've made a doomsday device. And the wild thing is, this whole thing seems very risky, so why would you do it? It had a certain logic to it. The logic was that one of the challenges in nuclear stability is that if you get warning that someone is launching missiles at you, you can have this use-or-lose dilemma of having a very short time, you know, maybe 10 or 15 minutes, to make a decision to launch your missiles before your missiles get wiped out or your command gets wiped out, and they wanted to reduce that pressure on themselves.

So, in theory, they could turn on Perimeter and say, you know what? Even if the Americans get us in a first strike, Perimeter will retaliate and we'll get them back. So there's a certain logic to it, but like a lot of things in the nuclear world, the logic is also a little bit nuts. And, you know, according to some reports, the Russian military has said that the system is still operational and has been upgraded since then. We don't know a lot of details about it, but it is certainly an indication that the Russians are likely to think about risk in this space in a very different way than, say, the US military would.

Daniel Barcay: So with nuclear weapons, we've entered a phase of a perhaps uneasy, but seemingly stable, detente. When we talk about moving towards autonomous, AI-enabled weaponry, we at the Center for Humane Technology think a lot about the way that incentives end up shaping the outcomes you get, and you have a set of recommendations about how to shift those incentives around autonomous weaponry to make sure that we arrive, or hopefully arrive, at a stable deterrent regime. What are those recommendations, and how do they shift those incentives?

Paul Scharre: I think there are a couple of things. One is, you know, we need to have rules. If we look historically at rules that militaries have been able to agree to and then hold to in practice in warfare, which is challenging, sometimes there are treaties where then you get to war and nobody follows the treaty anyway, so that's maybe not the best case study to build your example on, the ones that have been successful follow a couple of clear patterns. So one is that the rules are very clear, and it's known whether you're crossing the line or not.

Rules that are ambiguous or gray are not helpful and are often violated in war. Militaries also have to be able to comply with these rules in practice; political leaders have imperfect control over their military forces, and so that's also important. So for example, in the early days of World War II, Britain and Germany both refrained from bombing populated areas when they conducted their aerial bombing campaigns, and in fact Hitler put out a rule that the Luftwaffe was not to bomb populated areas in Britain, only industrial targets for the war, not because Hitler was a good person, but because he was afraid of the British Air Force and he was worried about retaliation.

This broke down when one night German bombers got lost over London and bombed central London by mistake, and Churchill retaliated with a bombing of Berlin, and afterwards Hitler declared that they would bomb London, and the London Blitz was the result. So militaries have to be able to actually comply with the rules they're trying to follow, and the sort of cost-benefit calculus for militaries, of what they're giving up, needs to be in their favor. That's part of the reason why we've been so successful, with some exceptions, as I mentioned earlier, but generally successful, with countries walking away from chemical and biological weapons today, in part because they're not that useful on the battlefield. And we've seen this in practice as militaries have used them, particularly against troops that have chemical gear. They somewhat slow down their movements, but they're certainly not decisive in the way that nuclear weapons are, for example.

And it's been very hard to get countries that are nuclear powers to give those up, because of their value. So when we think about autonomous weapons or other forms of military AI, trying to come up with rules that can meet these criteria is difficult, in part because a lot of the definitions of AI and autonomy are themselves slippery. Does it cross the line? Is it autonomous enough? That can sometimes be really challenging, and I think it's a hurdle here to coming up with rules that might be useful in practice.

Daniel Barcay: So how do you, Paul, choose where to spend your political capital? Because on one hand you've got Pollyannaish proposals, because people aren't willing to give up control and the possibility of an edge in warfare. On the other hand, you have the idea that we get a few milquetoast restraints that sort of gesture at the problem but don't fundamentally make us arrive at a stable equilibrium point. What do you think the most effective potential international agreements are?

Paul Scharre: Well, thanks. I have given this a lot of thought over the last 15 years or so. So look, one of the things I think is challenging in the diplomatic discussion right now is that it's a very binary discussion: either we somehow have a comprehensive, preemptive, legally binding treaty that would ban autonomous weapons, or we do nothing. And there's a lot of space in between. I think we can see that there is not, at the moment, political momentum for a comprehensive ban that would be effective, because if it doesn't include the leading military powers, why do it? But there's a lot of space in the interim. So for example, you could see a narrower ban on anti-personnel autonomous weapons, that is to say, autonomous weapons that target people. I think that's more doable for a couple of reasons. One is that from a military standpoint, you're giving up something that's not quite as valuable.

So you could see the rationale: maybe you could imagine a future where you need autonomous weapons to fight against fighter jets that are also autonomous, where there's no way to keep a human in the loop in that kind of world, or where we're attacking radar systems with automated fire responses. Well, humans are not that fast. Like, outrunning a machine gun has not been an effective tactic since World War I. And so at the speed at which humans move, you could keep a human in the loop. And for militaries, for high-end warfare, a lot of that is against machines. You're targeting artillery and radar and ships and submarines and aircraft. People, the infantry, I mean, I was in the army, I was in the infantry, and we think very highly of ourselves, but we're not the centerpiece of major battles between militaries. So I think you're giving up something less valuable there.

But I think also the need is higher in terms of the risk there, right? Because let's say there's an accident and this autonomous weapon is targeting the wrong things. You can always get out of a tank and run away from the tank if it's targeting the tank. If it's targeting you, you can't stop being a person. And so the risks to combatants, to civilians, I think are much more severe. So that could be one approach. The US State Department recently led over 50 countries to come together in a political declaration, so not a legally binding statement, but nevertheless an international statement, surrounding responsible use of military AI.

And one of the things in that agreement was about test and evaluation, to make sure your systems are reliable and they work and don't malfunction. That's a place where I think we could press on and get some value, giving guidance to countries to make sure that if they're going to use AI, they do it in a way that's responsible and ethical and safe, and we don't see malfunctions. So I think there's actually a lot of space to explore here that could be really beneficial.

Tristan Harris: I think this mirrors, is it called permission access controls? What the US sort of distributed to other allied partners that do have nuclear technology, basically making sure that they're permissioned appropriately, and wanting to make sure that we democratize the safest and best permissioned control systems so that the world is safer, because we've increased the baseline of all of our partners. Am I getting that right?

Paul Scharre: Yeah. So the technical term is permissive action links, and what you said actually is more intuitive than what a permissive action link is, but yes, exactly, it's a safeguard on nuclear weapons to make sure that they are only used by an authorized person when authorized by whatever their national authority is. And the US has helped, as you said, spread that technology to other nuclear states, because it's not in our interest for their nuclear weapons to fall into the wrong hands.

Tristan Harris: So Paul, one of the things you write about in your book is how AI changes the game of war, in the same way that when you let humans play Go for thousands of years, they play it a certain way, they have certain strategies. When they play chess, the same thing. And then when you suddenly introduce AI, the AI discovers a new move that no human has ever done. In Go, it was move 37. You reference in your work the recent examples of dogfighting simulations where you have an AI F-16, I think. What happens, what new moves are discovered by AI systems that humans wouldn't make, and how does that change the game of aerial dogfighting?

Paul Scharre: Oh, absolutely right. We see the same phenomenon with military systems. So a couple of years ago, DARPA, the Defense Advanced Research Projects Agency, the Defense Department's sort of department of mad scientists that does kind of crazy experiments, trained an AI agent to compete in a dogfighting competition. So they started out with a whole set of different companies in a simulator. The winner was a small startup called Heron Systems that beat out defense giant Lockheed Martin in the finals, and then they went head-to-head against an experienced human fighter pilot, and they absolutely crushed the human. Now, some caveats are worth noting here: it was in a simulator and not in the real world, and there were a couple of things that were simplified for the simulator. But nevertheless, they are now actually flying subsequent iterations of AI agents in real-world aircraft, in F-16s, and doing simulated dogfights in the real world. So this technology has matured.

But what's exciting is not just that it was better than the human, but that it fought differently, as we see in other areas. So in particular, one of the things that the AI agent did was, as the aircraft are circling each other, there's a moment where the aircraft are nose to nose and they're racing at each other at hundreds of miles an hour, and there's a split-second opportunity to get off a gunshot to take out the enemy. Now, humans don't make this shot. It's almost impossible for humans. In fact, it's banned in training because it's dangerous to even try, because you risk a mid-air collision as you're racing head-to-head at this other aircraft at hundreds of miles an hour. Well, the AI system very much could make this shot. It could do so with superhuman accuracy, but even more interesting, it learned to do it entirely on its own.

It was not programmed to do that. It used a reinforcement learning algorithm and it got rewards, and it sort of discovered this tactic. Now, humans had heard of this before. Humans just can't do it. But it highlights how, as in other areas, the value of AI is not just being better than humans, but also fighting differently. And it opens up a new space of possibilities. And in fact, when you look at gaming environments like StarCraft and Dota 2 and chess and Go and other things, you see a lot of commonalities in the ways AI systems play differently than humans.

Some of them are really obvious, better speed, precision, but some of them are different in other ways. One of the things that comes out in a lot of gaming environments is the ability of AI systems to look holistically at the game space. And this is something that chess grandmasters have talked about with AlphaZero, for example, that it's able to balance moves across the board better than humans often can. We often see very rapid shifts in tactics and aggression by some of these AI agents. We see this in poker, for example, where they're able to finely calibrate the risks that they're taking, which would certainly have tremendous advantages in warfare. And so there's certainly a tremendous space of opportunity for AI to change warfare in very significant ways.
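[Editor's note: here is a deliberately tiny sketch of the dynamic Paul describes, an agent that is given only a reward signal, never a playbook, and ends up preferring a risky maneuver that human doctrine avoids. The environment, action names, and payoff numbers are all invented for illustration; this is not the DARPA or Heron Systems setup, just a toy bandit-style learner.]

```python
# Toy sketch: reward-driven learning "discovering" a tactic nobody programmed.
# All environment details below are invented for illustration only.
import random

ACTIONS = ["circle", "break_away", "head_on_gun_pass"]  # hypothetical maneuvers

def simulate(action: str) -> float:
    """Return a stochastic payoff for one simulated engagement (made-up numbers)."""
    if action == "circle":
        return 1.0 if random.random() < 0.30 else 0.0   # safe, modest expected payoff (~0.3)
    if action == "break_away":
        return 1.0 if random.random() < 0.10 else 0.0   # rarely pays off (~0.1)
    # Risky head-on pass: human pilots avoid it, but in expectation it wins (~1.4).
    return 5.0 if random.random() < 0.40 else -1.0

def train(episodes: int = 20_000, epsilon: float = 0.1, lr: float = 0.05) -> dict:
    """Tabular value estimates with epsilon-greedy exploration (a one-state bandit)."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        explore = random.random() < epsilon
        action = random.choice(ACTIONS) if explore else max(q, key=q.get)
        reward = simulate(action)
        q[action] += lr * (reward - q[action])           # nudge estimate toward observed reward
    return q

if __name__ == "__main__":
    values = train()
    print(values)
    print("learned preference:", max(values, key=values.get))
```

Run it a few times and the learned preference almost always lands on the risky head-on pass, simply because its expected reward is highest. That is the narrow sense in which a tactic can be "discovered" by a learner rather than programmed in.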

Daniel Barcay:

When I hear that story, part of what's terrifying for me is that if you have one side that is able to put AI in the cockpits of their fighter jets and get off that shot in a particularly inhuman way, isn't the other side basically forced to do that? Because otherwise you lose? And isn't that part of that control problem in and of itself, that both sides are de facto racing towards implementing these systems that we can't control?

Paul Scharre:

I mean, yeah, I think you've put your finger on exactly the problem, right? Which is, in the short term, sure, we integrate AI and it makes militaries more effective, but where's the endpoint here? And the endpoint is one where a lot of functions are automated and the combat is in the hands of AI, and humans are still being killed. To be clear, I don't think there's a future, that I would envision, of these bloodless wars of robots fighting robots. I mean, that would be great, but I think the unfortunate reality is that in the future we'll see humans still fighting humans, but with robots and maybe autonomous weapons, the same way that humans fight humans with missiles and aircraft today. And the reality is that there will need to be real human costs to warfare for wars to end. And so we'll still be on the receiving end of some of this technology, but if we continue to lose control over it, I think that's a very terrifying future to imagine, when we could see these potentially really destructive tools being used in ways that might be hard for us to control or to stop.

Daniel Barcay:

How do you think war changes in the next five to 10 years if we do nothing? What are we on track for war to become? And then what do you see war becoming if we are able to successfully intervene and limit these technologies?

Paul Scharre:

Yeah, maybe just to talk a little bit about timeframes: I actually think that in the next five to 10 years there will be changes. We'll see more autonomy. We'll probably see the introduction of autonomous weapons, at least in a limited fashion. But I think the changes will be modest. Militaries are moving quite slowly on integrating AI, for better or worse depending on your point of view, and they're pretty far behind the civilian sector in this space. But over the long run, over maybe the next several decades, I think the changes are likely to be quite profound, at least as significant as the changes that the industrial revolution brought to warfare. There we saw that industrial-age technology dramatically increased the physical scale of warfare, the mechanization and firepower that entire societies could bring to bear in World War II, for example, mobilizing their industry for war and destroying, even before nuclear weapons, entire cities in Europe and Asia.

And then AI is likely to do something similar to the cognitive aspects of warfare, accelerating the speed and tempo of war and of decision-making, slowly pushing humans out of the loop. And I think the risk here is that if we do nothing, we end up in a situation where we have militaries that are quite effective and then go out and fight wars that affect us and that have real human consequences, but that humans are not in control of once they begin. We could see situations where machines escalate wars in ways that humans aren't prepared for, even start wars or cause crises to spiral out of control. That makes it more challenging to limit conflicts and more challenging to end wars. It's frankly hard for humans oftentimes to end wars, because of political commitments and because leaders maybe don't recognize when they're losing.

But if you add in a layer where they have lost the ability to control their military forces effectively, that gets much, much more challenging. And we've been fortunate enough to live for almost a century now without a large-scale global war, but the consequences of such a war would be absolutely catastrophic to humanity, even if it never went nuclear. I mean, the scale of destruction that we already have in our inventories, if we really were to see great powers mobilize for war, would cause enormous human suffering. And I don't think that we should take for granted the peace that we live in, and we want to be mindful of how emerging technology is changing some of those dynamics. And if we do things right, the goal would be to find a way to skate through these dangers. I think they'll continue to hang over our heads, just the same way they do with nuclear weapons.

Like, we have a lot of nuclear weapons out there in the world, and we've been able to avoid a nuclear war. We don't know what the future holds, and we don't know that we'll be able to look 70 years from now and say that remains true. But we try to find a way to navigate through those kinds of threats and have as stable as possible a situation. And with autonomy and AI, if we can come up with a set of rules that countries can agree upon that are pragmatic, that are realistic, that take into account the realities of warfare and how militaries fight and that are achievable, then maybe we can find ways to buy down some of that risk and reduce it and avoid some of the most catastrophic harms.

Tristan Harris:

Hear, hear. That was good. Thank you, Paul. It's a bummer, there's so much more I want to talk to you about, but this has been a great conversation, Paul. We're super appreciative of your time, and I hope that the policymakers in our audience really take what you have shared to heart.

Paul Scharre:

Thank you. Thanks for having me.

Tristan Harris:

Paul Scharre is Executive Vice President of the Center for a New American Security and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence. Your Undivided Attention is produced by the Center for Humane Technology, a nonprofit working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher and producer. Kirsten McMurray is our associate producer, and our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudakin, original music by Ryan and Hays Holladay. A special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com. If you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show. And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.