Does AI *really* help the defense in military situations?

Christina, an apparent MR reader, asked me whether it is really true that AI helps military defense more than military offense, as was previously argued by Eric Schmidt.  I can think of a few parallel cases:

1. In chess, AI clearly has helped the defense.  Top computer programs never play 32-move brilliant sacrifice victories against each other, à la Mikhail Tal.  Most games are drawn, and a victory tends to be long and protracted.  (Do note it is sometimes better to get the war over with and lose right away.)

2. In the NBA, analytics have helped offense more, for instance by showing that more attempted shots should be three-pointers.  Analytics of course is not AI, but you can consider it a more primitive form of using information technology to improve decisions.

3. It is interesting to ponder the differences between chess and the NBA as potential analogies.  In chess, the attack often “plays itself,” as the player with the initiative may be following fairly standard strategies of bringing the Queen and some lesser pieces in the neighborhood of the opposing King, or maybe just capturing material.  Finding the correct defense is often a more complex matter, and the higher quality of the chess-playing programs thus boosts defense more than offense.  Besides, under perfect information chess is almost certainly a draw, and the use of AI asymptotically approaches that outcome.

In professional basketball, the offense typically has more options and permutations, and given any offensive decisions, the defense often responds in fairly typical fashion, such as lunging at the player attempting a shot, or doubling Stephen Curry as he crosses the half-court line.  In those cases where the defense has more options, however, analytics conceivably could help basketball defense more than offense.  A (hypothetical) example of this would be using game tape and AI to see which kinds of tugs on the jersey best disrupt the shot or rhythm of the team’s leading scorer.  That said, most of the action seems to be in honing the options for the offense.

4. Is warfare more like chess or more like the NBA?

I believe the USA has more options in most of its conflicts, and thus AI will help the United States, at least at first.

In the Second World War the Nazis had more options than their opponents.  In the Civil War and American Revolution, however, the available offense was more static and predictable, and AI for those fighting forces might have helped the defense more.  In the Iran-Iraq war I suspect the defense had more options too.  Terror groups have more meaningful options than the forces defending against terror, and thus AI might help terror groups more than the defense, at least provided they had equal access to the data and to the technology (which is doubtful at this point; still, as part of the exercise this is useful).

5. One important qualifier is that the chess and NBA examples already assume a game is on to be played.  A war, in contrast, is started as a matter of volition on at least one side.  If AI creates a new arms race of sorts, where one side at times opens up a decisive lead, that may provoke more decisions to engage and thus attack.  The mere fact that AI increases the variance in the power gap between the two sides may increase the number of attacks and thus wars.

So there is more to this question than meets the eye at first, and I have only begun to engage with it.

Addendum: AI is also spreading in the legal world; will this help defendants or plaintiffs more?


Do advanced economic models help more with supply or demand?

This is a weird question; it's entirely case by case. More interesting roads would be more specific instances and their implications, not hand-wavy "offense or defense".

As someone involved in the NBA analytics community: analytics does not inherently help offense more than defense; there is just a heck of a lot more data available for analyzing offense than defense. I don't think there is any reason this qualifier would apply to warfare.

Real warfare (at least until now) has involved physical destruction of things, and to say "the defense has the upper hand" (à la WWI) is really saying the offense couldn't achieve its objectives in spite of all manner of destructive acts.

Given the very sorry state of computer security, one can expect attacks to create all sorts of havoc, but not necessarily physical destruction of things.

So AI might make possible some forms of attack, but be of zero use in withstanding the counter attacks...

(Example - AI allows attacker to breach the ticket/dispatch/scheduling systems of all airlines, and local mass transit, and all Teslas, all at the same time. Hides the source for a while, until political reality or intelligence sorts it out. Attacker AI of no use when the attacked country retaliates with blockades, aerial battery, or nuclear weapons....)

Have A Nice Day

'I believe the USA has more options in most of its conflicts' ... 'Given the very sorry state of computer security, one can expect attacks to create all sorts of havoc'

'In the Civil War and American Revolution, however, the available offense was more static and predictable'

So, let's just ignore the naval side of warfare, including submarines, and in the Civil war, the development of steam powered ironclads and the use of turrets.

There was a grand total of one submarine used during the Civil War, by the Confederacy, which killed 26 people, 21 of whom were Confederates. So, yes, you can ignore it.

And a grand total of one submarine was used in the Revolutionary War - 'On this day in 1776, during the Revolutionary War, the American submersible craft Turtle attempts to attach a time bomb to the hull of British Admiral Richard Howe’s flagship Eagle in New York Harbor. It was the first use of a submarine in warfare.

Submarines were first built by Dutch inventor Cornelius van Drebel in the early 17th century, but it was not until 150 years later that they were first used in naval combat. David Bushnell, an American inventor, began building underwater mines while a student at Yale University. Deciding that a submarine would be the best means of delivering his mines in warfare, he built an eight-foot-long wooden submersible that was christened the Turtle for its shape. Large enough to accommodate one operator, the submarine was entirely hand-powered. Lead ballast kept the craft balanced.

Donated to the Patriot cause after the outbreak of war with Britain in 1775, Ezra Lee piloted the craft unnoticed out to the 64-gun HMS Eagle in New York Harbor on September 7, 1776. As Lee worked to anchor a time bomb to the hull, he could see British seamen on the deck above, but they failed to notice the strange craft below the surface. Lee had almost secured the bomb when his boring tools failed to penetrate a layer of iron sheathing. He retreated, and the bomb exploded nearby, causing no harm to either the Eagle or the Turtle.'

You also left out the fact that the Hunley sank the USS Housatonic when attempting to point out how trivial the world's first successful submarine attack was.

Machine learning will always be deployed primarily to increase offensive lethality. Always has, always will. Winning is always and only about overcoming defenses.

Considering AI's role in warfare to be purely strategic / planning oriented may be a huge mistake. Here's what I mean:

Let's assume there is a group of 5000 small, semi-independent robots. We call them drones when in the air, but such autonomous machines might also operate in a marine environment, wait on the sea floor for ships to pass, act as intelligent land mines, etc. Their coordination with each other would be called AI, and an attack by swarms of them would be fiendishly difficult to fend off. A slow, lumbering aircraft carrier is almost impossible to defend against hundreds of little torpedoes rising from the depths.

In that way AIs may very well play very effective offensive roles. But since the major countries have nuclear weapons, all-out war between them seems unlikely. Even proxy-wars are becoming less frequent. That leaves the economic and intelligence realms as the only viable methods for conflict, which are areas that probably won't be usefully accessible to AI for decades yet.

Historically when war was no option, people fought for glory in highly symbolic, even ornamental, conflicts. Think games of chess, positive propaganda, sports etc. It's hard to see how AI could be of importance in such human affairs.

Machine trained algorithms + neurotechnology + brain computer interface (BCI) principles + extensive psychological profiling + wifi/cellular/microwave bath + Fourier transforms + would do

I'd view AI as a move towards optimal play. The move from current play to optimal play may benefit one side more than the other, because one side may have more complex options, but it will also make things more predictable. Blunders will be less common and the outcome of "games" will be more predictable. I'd suggest an end result of less conflict and more negotiation, as the outcome of most outright conflict is known beforehand. Expect fewer conflicts and more deals.

Or possibly a long-term scenario like in 'A Taste of Armageddon'.

I see the opposite - AI becomes another source of asymmetric information, as different countries use different AIs with different inputs/presets.

I would also fear that the use of AI would bury even more assumptions, hiding programmers' and decision-makers' biases in the algorithm.

I gotta say that TC's examples of "parallel cases" to war are pathetic. But perhaps I'm just thinking about different aspects of it than he is. Are N. Korea & S. Korea "at war"? How about Ukraine and Russia? How about Sudan and S. Sudan? Or ISIS and the USA? Or even the USA and China (or Russia). The idea that war is "one thing" is pretty idiotic. It obviously isn't. Although it's probably best defined clearly with its various flavors kept separate for this type of analysis. My core question/comment here is: is AI something which has a meaningful definition? I doubt it. Bonus question: isn't it obvious to anyone paying attention that AI is changing so fast right now that even if we could (pose and) answer the question, that any answer we come up with will be obsolete/irrelevant in the immediate future? That is, isn't it obvious that this question currently has no merit? Seems to me a "parallel case" is the question of whether the internal combustion engine is better on mobile or stationary platforms... asked in 1860.

For present purposes, "machine trained algorithm" is quite suitable as a definition of "AI". Which means that there isn't really any "intelligence" involved per se. It's recognizing patterns and then applying a diversity of optimization processes to the algorithms themselves in order to more effectively optimize for other things.

The so-called "neural networks" are nothing like a brain, for example.

This just means that the algorithms for the optimization process have a "more open mind" about which combinations, aggregations, etc., might inform about various analogues of colinearity or perpendicularity for the purpose of grouping together variables and associations.
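To make that concrete in a hedged way: the "machine trained algorithm" framing amounts to nothing more than iterative optimization against a pattern in data. The toy below is purely illustrative (made-up data, plain gradient descent on a one-parameter line fit), not any particular military or commercial system:

```python
# Illustrative sketch: "learning" as pattern-fitting by optimization.
# The data follows y = 2x; gradient descent on squared error recovers
# the weight 2 with no "intelligence" involved.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pattern: y = 2x

w = 0.0       # initial guess for the weight
lr = 0.05     # learning rate
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

assert abs(w - 2.0) < 1e-3  # the optimizer has "recognized" the pattern
```

The point of the sketch is only that the optimization loop, not any brain-like mechanism, does all the work.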

There are areas of the military where stronger computing is likely to lead to positive outcomes - logistics for instance. Scheduling trucks and other supplies is something computers already do well. The faster and smarter the computers get, the better they will be able to do this. I would think that better logistics disproportionately benefits the offense rather than the defense.

There are a lot of areas where data is noisy. Sonar and radar for instance. AI might be better at picking out interesting noises from background. I would think that better sonar helps the offense, better radar the defense. Although with air power it is hard to separate the two.

'Addendum: AI is also spreading in the legal world, will this help defendants or plaintiffs more?'

Well that is a no-brainer. Defendants. The prosecution is largely a government monopoly. The government cannot notice hate-facts. Any AI is likely to notice. Which means they must be programmed to ignore the facts or they have to be turned off. The defense, especially criminal defendants, has no such problems.

Poker, like warfare, involves a significant level of bluffing, unlike the more transparent chess. Against a poker AI, a group of human experts (they were free to talk to one another) was reduced to paralysis, unsure of how to respond. In warfare that is fatal.

"""About halfway through the competition, which ended this week, Kim started to feel like Libratus could see his cards. “I’m not accusing it of cheating,” he said. “It was just that good.” So good, in fact, that it beat Kim and three more of the world’s top human players—a first for artificial intelligence."""

This post is laughable.

One more time: the most important elements of warfare through the ages:

1) People

2) Ideas

3) Weapons

UPSHOT: AI: the 3D Printing of Marginal Revolution, ca 2017 [masturbatory fantasy]

Right, TC?

I could cite any number of colonial battles where insanely brave Third World folk were gunned down by machine guns. But I won't.

I will just point out that the German soldiers were vastly better than Western or Soviet soldiers. Still lost.

Ah, well, individual combatant bravery doesn't smoothly map onto troop quality. Often, actually, it's a sign of a (tribal?) culture that produces warriors rather than soldiers and has serious problems with professional warfare and large-group cohesion. Hence you can have groups with good individual but terrible group performance (see Russians, Arabs, Japanese, Zulus, etc.).

But generally, variation in troop quality dominates variation in equipment quality, which is what Carlos's claim is about. You can see this in the way the best militaries spend a lot on troops relative to equipment; weaker militaries ironically tend to have "procurement-heavy" budgets: shiny weapons and incompetent users, the opposite of what you might casually expect. But then look at the disaster of the Iraqi army abandoning Mosul in the face of ISIS; they didn't lack for firepower!

Napoleon famously placed the "moral" to the "physical" as 3 to 1; and he probably wasn't far wrong for his era or later ones. If you get a choice of armies between superior equipment OR superior training/morale/organisation, then always take the better people.

I don't know what the Japanese are doing on that list. Their 1941 campaign was outstanding.

I am not sure I agree that troop quality dominates equipment quality. You are, I assume, in a Clausewitzian world where military units are similar from country to country. That is not usually true. There is no point being brave if you lack the proper equipment - and few units are. It would be hard to find a country that spent a lot on weapons but still produced poor military units. The Argentinians perhaps are the only example that springs to mind. It is true that corruption and sectarian fighting gutted the Iraqi Army in Mosul. But that is quite rare.

Most experts of WW2 rate the Japanese troop quality as pretty low. And it dropped precipitously as the war went on. Their experienced cadre was getting killed and their military academies and boot camps produced a mediocre replacement. German troop quality was generally considered the best.

I don't know who these experts were, but the Japanese were grossly outclassed by the Americans. They were fighting with spears against a massively larger and better-equipped enemy. But they did not run and they did not give up. That is impressive quality right there.

Their air force had problems replacing cadres but I doubt it was a problem for the Army. Mostly because they got to fight once.

Incompetent, weak, and well equipped?

Nearly every Arab army post 1945
Most US third world allies.

Let's not overstate the case; better equipment and more men matter. But on average they don't matter as much as the difference in quality.

FICTITIOUS EXAMPLE: The model gives (troop quality) a weight of 2 and an SD of 4 in the data, and gives (equipment and numbers) a weight of 1 and an SD of 3. Hence we say (troop quality) matters more.
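The fictitious model can be made concrete in a couple of lines. These numbers are the commenter's made-up ones, nothing more; the point is just that a factor's pull on the outcome scales with weight times spread:

```python
# Illustrative only: the commenter's fictitious weights and SDs.
# A one-SD swing in a factor moves the modeled outcome by weight * SD.
troop_weight, troop_sd = 2, 4   # troop quality
equip_weight, equip_sd = 1, 3   # equipment and numbers

troop_effect = troop_weight * troop_sd  # 2 * 4 = 8
equip_effect = equip_weight * equip_sd  # 1 * 3 = 3

print(troop_effect, equip_effect)  # prints "8 3"
```

On these made-up numbers a typical swing in troop quality moves the result by 8 versus 3 for equipment and numbers, which is all "matters more" means here.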

"generally, variation in troop quality dominates variation in equipment quality"

Absolute nonsense.

Of course if you've outfitted the troops with AKs and jeeps, it doesn't make sense to invest five years of full-time training. But no amount of training is going to get you far when you're on the receiving end of machine-gun fire from behind fortifications while armed with an old-fashioned rifle.

The causation is the other way around. You have more expensive capital equipment, and then it becomes more economical to invest more heavily in higher quality training. In medieval times, this would be the knight on the costly steed armoured with half the iron to be had in the county, trained for years on end before seeing serious battle - partnered with dozens or hundreds of barely armed peasants to throw at the defenses until there is space for the more highly capitalized knight to do what they do.

"After 1973’s crushing 80-to-1 victory by Israelis flying F-4s and Mirages against Arab pilots flying MiGs, the commander of the Israeli Air Force (IAF), Gen. Mordecai Hod, famously remarked that the outcome would have been the same if both sides had swapped planes. He was exactly correct, simply because the IAF had the most rigorous system in the world for filtering out all but the most gifted pilots. In every war, it’s the few superb pilots that win the air battle. A tiny handful of such pilots have dominated every air-to-air battleground since World War I: roughly 10 percent of all pilots (the “hawks”) score 60 percent to 80 percent of the dogfight kills; the other 90 percent of pilots (“doves”) are the fodder for the hawks of the opposing side. Technical performance differences between opposing fighter planes pale in comparison". [SNIP]

Sure, Israeli pilots often claim that.

But I notice they spend billions to get the best airplanes they can. They do not rely on their superior pilots and fly much cheaper planes.

Planes don't fly themselves, moron.

The Romans didn't conquer their Empire because they had much better swords or more men. Nor did Macedonia, the Swiss Pike, the Janissaries, the Conquistadors (matchlock and horse don't compensate for a 30 to 1 ratio), and the Grande Armee, the Khanate, the early Ming, Jackson's cavalry, or Nelson's navy, the Wehrmacht, the IJN (early war) or the IAF.

Yeah, you can pull down the better troops with enough numbers and more men, but the data says collective troop quality matters more than equipment and numbers. It is a larger component of the explanatory model.

Econ blog - people should understand what the sensitivity of a variable and its variance are.

carlospln March 27, 2017 at 9:43 pm

I am sorry but was that too subtle for you? The Israelis spend a lot of money on training pilots. If pilot quality was all that mattered, or even if it mattered a lot more than anything else, there would be no need to spend big on top quality planes. But they do.

So despite what they say, their revealed preferences are for really good planes. Suggesting they know that quality matters.

Alistair March 28, 2017 at 2:58 am

The Romans did conquer because they had better swords - the Greeks comment on the injuries inflicted. And especially because of their massive manpower pool. No other country could come close to them. So they could lose battle after battle and still win.

I am not sure that models reflect real life if they think that quality of the soldiers is as important as you claim.


I don't know if that's true or not.

But they did all that training because they were flying very expensive equipment. The expensive equipment came first in time, which then provided reason to train so much.

It would not surprise me if better filtering for the specific tasks of the Air Force occurred in Israel, and that this could explain much of what happened. I'm not very inclined to give God the credit, but clearly something went very right for them.

If you read about Mubarak (who just got released from prison), it's interesting to observe that this story basically does not come up whatsoever, despite his having been very high in the Air Force at the time. He went on to be a good friend of both Israel and America in quite a lot of ways.

Ideas are number 2 in your list. Psychological warfare proved its effectiveness during the 1991 Gulf War. Lots of Iraq's soldiers were manipulated into surrendering using leaflets or radio. Twenty-six years later, it's worth pondering whether someone is using the web in the same manner as leaflets during war. Perhaps it is worry #34,756 on the importance list, but someone has to work on that.

Leaflets and B-52 bombers. Not leaflets on their own. Hard to work out which of those two was more important. Having 500 pound bombs dropped on you from enormous height or being showered with pamphlets? We may never know.

You've lost the argument already.

Please report to the local re-education zone at the appointed time and location.

Also, roll over. With a smile, please, but only so long as you already understand that there is no choice.

If chess is a draw when played perfectly, then the fact that it gets more defensive and draw-ish when played by very strong chess-playing algorithms says little about AI or warfare, only that chess is drawish except for mistakes.

Yes, but the real world does not start with precisely matched pieces and a pre-determined first mover.

Imagine AI+chess if black always started down a pawn.

"Terror groups have more meaningful options than the forces defending against terror, and thus AI might help terror groups more than the defense"

I'll take the opposite side of that bet; the big advantage of terror groups is the defenders' finite attention, i.e. the terrorists pick the time/target and just blend in the rest of the time. The eternal patience and diligence of AI should mitigate that advantage.

Which is why historically the Attacker had a strategic advantage. The Defender had a tactical advantage (the classic 3 to 1), but the Attacker could choose where and when to attack and amass much more than the necessary 3 to 1 at any particular time and place. Assuming that the Attacker could effectively surprise the Defender.

I would argue that in basketball, the referee calling the game (the implementation of the rules) has more impact than the offense or defense. Let things go and that tends to favor the more physical, but potentially less skilled team and generally makes defense far easier to play. In contrast, calling a tighter game generally benefits the more skilled teams with better shooters and skill players. Grabbing/holding/pushing is effective at disrupting offenses. This is more obvious in football.

I guess all the sports are designed to balance offense and defense under a set of rules to create interesting competition. But changes to the rules, even small, can quickly unbalance a game. The NBA and NFL are designed to encourage showcasing skill, so rules have evolved to allow the offense to do so. Slight tweaks to the rules could quickly bring back grindy physical games where a different type of player would excel.

I fail to see what difference an AI could make in a modern great power war. My understanding is that the success of chess-bots is down to their having access to records of millions of previously played games. They can perform statistical analyses on a huge sample size to determine an optimal strategy. A "World War 3-bot," on the other hand, would have no such access to past great power wars waged at the modern technological frontier, because no such wars have occurred. How would an AI learn enough to be of use in, say, a war between China and the United States without first observing some conflicts between China and the United States?

Certainly AI and machine learning can improve select military technologies like remote sensing, but I do not see its utility in theater-level strategy or grand strategy. Military technology has changed too much since the last great power war.

That is the outdated idea about AI.

AlphaGo, the computer system Google engineers trained to master the ancient game of Go, needed only one move to make it abundantly clear that it has left humans in its dust.

The move came Thursday, in the second game of AlphaGo’s 4-1 landmark victory over South Korean Lee Sedol, one of the world’s best Go players. About an hour into the match, AlphaGo placed one of its stones in a nontraditional spot on the board that surprised those watching.

“I don’t really know if it’s a good or bad move,” said Michael Redmond, a commentator on a live English broadcast. “It’s a very strange move.” Redmond, one of the Western world’s best Go players, could only crack a smile.

“I thought it was a mistake,” his broadcast partner, Chris Garlock, said with a laugh.

Sedol, however, was more serious. He stared at the board, then got up from the table and left the room.

As Sedol returned after a few minutes and pondered his next move, it became clear that AlphaGo’s move was no mistake. It might be strange, but it definitely wasn’t bad. It was brilliant.

Sedol would take almost 16 minutes to make his next move. He would never recover, losing the match.

“Almost no human pro would’ve thought of it, I think,” Redmond said after the match.

AlphaGo did not learn that move/strategy from humans. It discovered it by itself when playing against a clone of itself. That surely will not be the only one.

I think, ceteris paribus, A.I. is more useful to the defence. Take chess. If I have access to an A.I. I could probably get a draw even against a much superior player. Without an A.I., a child can beat me. With it, I can do 'backward induction' - i.e. see deeper into the game, potentially all the way to the final outcome - and so it becomes easier for me to act optimally even though I don't know a lot of the relevant heuristics.
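The 'backward induction' point can be sketched on a toy game rather than chess itself. The example below is purely illustrative, a Nim-like game (players alternately take 1 or 2 stones; whoever takes the last stone wins), not how any real chess engine works: with full lookahead, even a weak player knows exactly which positions are winning and plays optimally without heuristics.

```python
from functools import lru_cache

# Backward induction on a toy Nim-like game: take 1 or 2 stones per
# turn; taking the last stone wins. We solve positions from the end
# of the game backwards (memoized recursion = backward induction).
@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        return False  # the previous player just took the last stone and won
    # Winning iff some legal move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

# Backward induction reveals the whole structure: multiples of 3 are
# losing for the player to move, everything else is winning.
losing = [n for n in range(1, 10) if not wins(n)]
print(losing)  # prints "[3, 6, 9]"
```

Knowing `wins` for every position is the "see all the way to the final outcome" the comment describes; the player needs no feel for the game at all.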

Tyler raises an important point re. the original attacker as having (at least initially) more options. But this means you can have a Bellman type state space explosion.
America has a lot of options because of its great economic, military and diplomatic strength. Yet, as Obama said, its playbook consisted of 'doing stupid shit' precisely because it experiences so little cost from using the wrong policy instrument for some reason of 'optics' or partisan politics.
During the Second World War, America and, to a lesser extent, Britain used Operations Research (which has the potential to be turned into Expert Systems because of its algorithmic nature) better than the Germans. Britain prevailed in the air because fighters and bombers were optimised to help each other. German fighters had to fly suboptimally to accompany the bombers. The Brits, under Beaverbrook, could quickly change their assembly lines. The Germans, though technically superior on an item-by-item basis, were less co-ordinated.
Tyler correctly states-
'I believe the USA has more options in most of its conflicts' but I think he is wrong that 'AI will help the United States, at least at first.' Why? Well, any country foolish enough to get into trouble with the US will figure out what algorithms the US will use - this might involve the high-level espionage skills required to use Google search - and game the A.I. such that we drop bombs on the wrong targets while they are laughing at us in their tunnels.
The other side of the coin is that the US can use A.I. and data mining against asymmetrical threats - e.g. suicide bombers - to destroy networks or get them to cannibalise each other before they can inflict damage. Here, again, it is the defence that gains the advantage.

Tyler writes 'In the Second World War the Nazis had more options than their opponents.' Quite true. In both wars Germany had the option to decide when and whom to attack, because French military doctrine was wholly defensive and Russia was useless till properly rearmed.
Yet it was this relative abundance of options which explains the incredible stupidity of its decisions in both wars- why go to war with the US quite gratuitously?- and this is why they lost so badly that they are no longer a military power of any consequence.

I think a good A.I., applied to offence, would find ways to achieve one's objective without warfare. This is because any algorithmic method is going to define fighting capacity as a function of economic/social/political variables, each of which may be directly targeted. This targeting need not be anything sinister. It could be cooperative, positive-sum, in nature.

Tyler's reflection reminded me of Colin Kaepernick's revelation as a read-option QB, and his fading with defensive adaptation. It seems to rhyme with the Stuka/Panzer blitzkrieg tactics, which also faded with the defensive adaptation of the Allies.

As far as AI goes, I have very little experience with this, but I have observed that machine learning and deep learning techniques work better where there is a lot of clean data, and less so with limited or mixed data.
