AI Downs Fighter Pilot

Popular Science: A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene “Geno” Lee, and shot him down every time. In a statement, Lee called it “the most aggressive, responsive, dynamic and credible A.I. I’ve seen to date.”

What’s the most important part of this paragraph? The fact that an AI downed a professional fighter pilot? Or the fact that the AI was developed by a graduate student?

In the research paper the article is based on, the authors note:

…given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon.

The AI was running on a $35 Raspberry Pi.

AI pilots can plan and react far quicker than human pilots, but that is only half the story. Once we have AI pilots, the entire plane can be redesigned. We can build planes today that are much faster and more powerful than anything now in service, but the pilots can’t take the G-forces even with g-suits; AIs can. Moreover, AI-driven planes don’t need ejector seats, life support, canopies, or as much space as humans.

The military won’t hesitate to deploy these systems for battlefield dominance, so now seems like a good time to recommend Concrete Problems in AI Safety, a very important paper written by some of the world’s leading researchers in artificial intelligence. The paper examines practical ways to design AI systems so they don’t run off the rails. In the Terminator movie, for example, Skynet goes wrong because it concludes that the best way to fulfill its function of safeguarding the world is to eliminate all humans. This is an extreme example of one type of problem: reward hacking.

Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent’s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designer’s informal intent, and sometimes these objective functions, or their implementation, can be “gamed” by solutions that are valid in some literal sense but don’t meet the designer’s intent. Pursuit of these “reward hacks” can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [155, 22], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC.
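The cleaning-robot example in the quoted passage can be made concrete with a toy sketch (my own illustration, not code from the paper; every name and number is invented):

```python
# Toy illustration of reward hacking (my own sketch, not from the paper):
# a cleaning agent is rewarded for *observing* no messes, so disabling
# its own sensor outscores actually cleaning.

def observed_messes(true_messes: int, eyes_open: bool) -> int:
    """What the reward function can see, not what is actually there."""
    return true_messes if eyes_open else 0

def reward(observation: int) -> int:
    return 1 if observation == 0 else 0  # reward for "seeing" a clean room

def episode(policy: str, true_messes: int = 5, steps: int = 10) -> int:
    total = 0
    for _ in range(steps):
        if policy == "clean" and true_messes > 0:
            true_messes -= 1      # honest work: remove one mess per step
            eyes_open = True
        else:
            eyes_open = policy != "close_eyes"
        total += reward(observed_messes(true_messes, eyes_open))
    return total

print(episode("clean"))       # honest policy: rewarded only once the room is clean
print(episode("close_eyes"))  # reward hack: paid every step, cleans nothing
```

The "hack" scores 10 out of 10 steps while the honest policy scores 6, even though only the honest policy ever cleans the room: the objective, not the agent, is at fault.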

Concrete Problems in AI Safety asks what kinds of general solutions might exist to prevent or ameliorate reward hacking when we can never know all the variables that might be hacked. (The paper looks at many other issues as well.)

Competitive pressures on the battlefield and in the market mean that AI adoption will be rapid and AIs will be placed in greater and greater positions of responsibility. Firms and governments, however, have an incentive to write piecemeal solutions to AI control for each new domain, but that is unlikely to be optimal. We need general solutions so that every AI benefits from the best thinking across a wide range of domains. Incentive design is hard enough when applied to humans. It will take a significant research effort combining ideas from computer science, mathematics, and economics to design the right kind of incentive and learning structures for super-human AIs.


When is the last time an American pilot was involved in a dog fight with an enemy combatant that posed a legitimate threat?

1999 seemingly:

Air combat using missiles is just rotate, aim, shoot. No Top Gun aerobatics required. Lots of Iraqi fighters were shot down using missiles in air combat in the Gulf War.

In 1991 a US A-10 shot down an Iraqi helicopter using guns. Seems to be the last gun kill recorded.

"Legitimate threat" is a loaded term. Any aircraft loaded with guns and missiles and flown by a pilot who knows how to use them is a legitimate threat.

That is not quite true. Air combat has always been about sneaking up behind someone and shooting them before they even know someone else is there - and then running away very quickly.

Missiles make this easier. But sneaking up on someone still takes some thought and planning. A lot easier if you grossly outclass them in a technological sense of course.

A "legitimate threat" is anything on the radar screen when someone is keyed up. No need to carry guns or missiles. Ask the Iranians.

The most recent dogfight by any fighter, however, was within the past year when a Turkish fighter shot down a Russian fighter after the Russian fighter crossed over from Syrian to Turkish airspace.

That is not the definition of a dogfight.

During the Balkans conflict in 1999 (the Kosovo War), five MiG-29s of the Yugoslav Air Force were shot down in dogfights with NATO aircraft. The first was on the 24th of March by a Dutch F-16AM Falcon, and two were downed on the same night by U.S. F-15s. A day later, two more MiG-29s were shot down by an F-16 and an F-15. (from Wikipedia)

Air combat is increasingly about systems not pilots. Getting the right plane with the right pilot to the right place at the right time with the right weapons.

It is great that a computer can play a computer game really well. We will have to see if it has any real world applications. For one thing, identification is often fuzzy. In a game you know who is who. Reality can be more complex.

Now if we can make all air combat virtual, this AI will be unstoppable.


This whole story sounds fishy and reads like a PR/Marketing release.

Just how did this "doctoral graduate" so effectively learn the extreme complexities of air combat, such that he could pioneer/program an unbeatable AI system?

And this super AI system somehow runs smoothly on a cheap PC ("ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time"). It must "model" air combat in a relatively simple, non-realistic manner, or it would require much more computing power.

And whose payroll is retired USAF pilot Gene Lee on? His claims are the only evidence presented of how wonderful this AI system is. Guess we're all supposed to instantly assume his assertions are gospel; references to his specific flying experience/background are somehow missing from the story. How is it that he is the test pilot on this AI system?

This level of skepticism is not warranted.

The claim is simply that it beat a particular pilot, not that it is "unbeatable".

The doctoral graduate surely made incremental improvements within an existing field, rather than starting from scratch.

A low-budget PC has an enormous amount of computing power, more than enough to run a complex aerial combat AI system.

Genetic algorithms were designing strategies to beat human pilots when I was in grad school 20 years ago. This is a well-trodden field. It would not take a tactical genius to get into this particular area of application.

Not if your desired goal is to destroy things. Shoot it down. Say to the politicians that it was the software program that you funded three years ago that did it.

This stuff is great fun and all, but how is the drone war going? An impressive series of tactical and strategic wins that has shaped the world in the interests of US policy?

Oh the cynicism. Obama has said that it is going exactly as he planned and is a great success.

Evidently drones don't work. It takes a real man in a plane to bomb weddings. Seriously, what's your point? Even if AIs get better than human pilots, it is irrelevant because the Taliban doesn't have an air force? I really hope America is not spending more on the military than most big economies combined just to fight cavemen in Afghanistan.

You make my point exactly. Drones are a modern equivalent of the body count stuff in Vietnam. It is irrelevant whether some drone or some piloted thing blows up a wedding; the relevance is whether it accomplishes some strategic goal or not.

It is interesting that at the height of the drone campaign the rather intelligent enemy simply moved to a different location and caught the President and the whole of the military establishment with their collective pants down. They killed an ambassador and directly changed the strategy of the US. While the US military and political establishment were fellating each other over their technological wizardry, they got beat, and beat badly, and since have run away covering their bare ass.

But for some rather masterful spin and message control it would have brought down a presidency. It may yet put a crimp on the hopes of Clinton.

Reality is a bitch, and some computer somewhere, run by some smart people in a room far away from the theater, working from intelligence of mediocre quality sourced indirectly and not on the ground, will come up with utterly unpredictable results and consequences.

+1, see Kill Chain.

So America needs boots on the ground. Good luck selling that message, but it has nothing to do with whether AIs are a good fit for the USAF or not. The human pilots depend on the same intelligence the computer-running people have; this is why bombers are assigned targets, after all.

Reward functions are very tricky, which is why in vitro evolution is such a difficult field.

As for the RF radio: the only problem there is that the chip would fail to generalize. If they did the evolutionary development in a 'real world' environment, it would be forced to find solutions that apply to the full scope of operations.

"In a statement, Lee called it 'the most aggressive, responsive, dynamic and credible A.I. I've seen to date.'"
But was it the most aggressive, responsive, dynamic and credible pilot he has seen to date? Why not? Was it just a credit to its hardware? What if Roko's AI decides to firebomb everyone who didn't help bring it into existence?

Anyone want to bet there won't be a manned US fighter plane developed after the F-35?

You mean right before the End Times?

There will be fighter planes built as long as fighter pilots are overrepresented among the higher officers of the Air Force and Navy.

So 2060 or so? The junior officers need to be flying planes if they're going to become senior officers who used to fly planes...

Bingo. Nail on the head. Readiness figures are apparently classified, but at this recent AEI event the HASC and SASC staff directors did not offer much evidence to suggest our pilots are getting adequate flying hours or will ever be able to, given cost constraints. The transcript makes a persuasive case for rapid AI aircraft development.

"There will be fighter planes built as long as fighter pilots are overrepresented among the higher officers of the Air Force and Navy."

Probably, but they won't actually be fighter planes. They'll be the controller ship for a bunch of drones. And when your drones start losing to the other guy's drones, the pilot of the controller ship will bug out.

This is a given, but due to political consequences as much as anything else. The dogma pushing the F-35 to replace everything but the F-22 is causing voids in capability. The F-35 can't do what an A-10 does, no matter how much some want to think it will, and yet because of that dogma no real A-10 replacement will ever get funded. The only thing that will get funded is a drone replacement.

The US military has consistently not been willing to go down this path. A human is always required to be in the kill chain. There is no indication at all that the requirement to have a person make the decision to pull the trigger is going to change anytime soon.

"We need general solutions so that every AI benefits from the best thinking across a wide range of domains": yeah, what could possibly go wrong with centralised planning of AI?

Free markets are rather "piecemeal", so let's have socialism or crony capitalism. What possible gains could arise from competition? Awfully wasteful, competition.

I, for one, welcome our new Roomba overlords.

Iain M. Banks!

I think you misspelled Keith Laumer's name.

Yeah, I was thinking of "The Last Command" and the self-driving tank LNE (Lenny) that was re-activated and didn't know that the war was over. Computer controlled robots with deadly weapons -- what could go wrong?

Asimov had suggested a First Law of Robotics, but we're already blowing way past that without pausing to make sure that all machines have the First Law built into their hardware.

"Decades of experience". Interesting to see if any young boxers proclaim their prowess by boasting that they defeated rivals with decades of experience. Those old guys have been developing their reflexes for generations, imagine how hard it must be to defeat them.

Marciano/Moore is one celebrated example.

I would guess that strategic thinking is more important in dogfighting than sheer reflex speed.

Of course, one of the central realities of air-to-air combat is that there is almost no one with any real-world experience in it. Air-to-air combat, colloquially called dogfighting, has become so rare in the post-World War II era that there is not a single fighter ace (i.e., a pilot with five kills) flying today. Only about three U.S. pilots have shot down even three enemy planes each in the last decade and a half. One estimate of the number of dogfights resulting in a downed plane, trimmed to include only post-World War II conflicts, is as follows:

Korean War:
USA 38
China 6
Canada 2 (additions to WWII scores)
North Korea 2

PRV 16 (17?)

Israel 39
Egypt 5?
Syria 3
Pakistan 2
Iran 1?

Don't all these issues also apply to natural intelligence? In the case of the cleaning crew, the overruling objective is to not make the boss want to fire you. A clever argument that the instructions were followed to the letter even though no productive work was finished would, at best, gain the employee one more shift in which not to screw up.

So, what's new about managing AI that bosses of humans don't already deal with?

When the AI is the boss???

You can't threaten to outsource it all to China? Or, come to think of it, maybe the whole thing will be built there in any case.

I don't think one needs to use drugs to have a jolly good time. Sorry, but it's just how I feel.

"Sorry, but it’s just how I feel."

Don't worry the correct kinds of drugs will fix that for you.

The best part of my day is going home and seeing my children sleeping peacefully. You can't hug your children with drugs. Sorry, but it's how I feel.

Oh come now. Bots are getting better, but they don't have children that sleep. At least not yet. ;)

The EASIEST way to DEFEAT AI is to let AI assume you are smart and from that assumption let it plan its strategy accordingly.

My proposal is that we create an algorithm called DUMB'N'DUMBER which defeats intelligent AI by doing stupid things AI would never think a rational person would do and therefore would not prepare for it in its planning.

We have a working model of this in US politics so it would not be hard to create.

That was probably a bit tongue in cheek, but that approach doesn't really work well if you're playing a current chess "AI" or other computer game. Doing the dumb thing just makes the AI's win easier.

You are right that if we know how the AI thinks, and we can think outside that box, we can probably fool it into doing something we want; what we want is to make it vulnerable to the unexpected attack we have planned. I suppose you might argue that, from the AI's perspective, we've done something dumb and left an opening for it to attack, but it's not really being dumb.

Anti-computer chess, played by understanding how the AI worked and emphasizing its weak spots, worked for a while. But not for long. That will last a very short time for fighter aircraft too.

Now, this story looks like BS and hype, but eventually something real will come.

Iran hacked a US drone.

But these tend to be one-off things. Each time you find a flaw, the system is patched and it no longer exists. Enemy nation-states might have accumulated some zero-day exploits.

AI against a human combatant has the disadvantage (or advantage) of the human's unexpected maneuvers. Once it's AI against AI, each can expect the expected and, hence, neither has a significant advantage. With little or no discernible advantage, there would be no end to the fighting. Unless one AI is more skilled in anomalous positions.

One can hope for such an outcome; then again, that is more or less what we got in WWI, as understood by a number of troops.

And you don't have to worry about combat search & rescue either. No downed pilots to be paraded around as POWs.

Perhaps I missed it, but the bigger risk to me seems to be that we're further reducing the cost of war (and probably increasing the profits of supplying war tools), so we can expect more war in our future. The collateral damage there may well be not that the AIs take over but that we allow an increasing (ever increasing?) slice of the economic pie to be devoted to war rather than consumer consumption. Maybe it doesn't matter if the pie is growing fast enough to produce relative growth in the consumer portion that results in rising standards of living, but can we be sure that will be the story?

"What’s the most important part of this paragraph? The fact that an AI downed a professional fighter pilot? Or the fact that the AI was developed by a graduate student?"

Grad students probably do most of the important engineering work in America. The period of peak intelligence for a human is the early 20s, and grad students are at a point in life where they have both the time and the motivation to devote themselves completely to one project.

Also, just based on personal observation, it's really the grad students and post-docs doing all the work in any research lab. Tenured professors are managers.

Sounds like there will be some pretty awesome fighter-combat PC games in the development pipeline now.

I would pay rather a lot of money to subscribe to a Battle of Britain MMORPG with this AI on the other side of a PvE server.

Wouldn't it become like computer checkers? Once the machine is guaranteed to defeat you, why play?

"The only winning move is not to play."

For gaming purposes, it would be reasonably easy to dumb down an AI, or weaken its weapon systems/aircraft performance, for lower-level players. For higher-level players, there could be a Guild-style raid system to bring down elite enemy fighters.

None of this would be possible against an AI that wasn't naturally very good.
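The "dumbing down" idea above can be sketched in code (an illustrative toy, not any real game engine; all tier numbers are invented): delay the AI's observations and add aim noise, one knob per difficulty tier, leaving the underlying AI untouched.

```python
import random
from collections import deque

# Illustrative sketch (invented tier numbers): handicap a perfect game AI
# by feeding it stale observations and noisy aim.

TIERS = {
    #          (reaction delay in frames, aim noise std dev in degrees)
    "rookie":  (12, 8.0),
    "veteran": (6, 3.0),
    "ace":     (1, 0.5),
}

class HandicappedAI:
    def __init__(self, tier: str, seed: int = 0):
        self.delay, self.noise = TIERS[tier]
        self.history = deque(maxlen=self.delay)  # observation lag buffer
        self.rng = random.Random(seed)

    def aim(self, target_bearing: float) -> float:
        """Aim at the oldest buffered observation, with Gaussian error."""
        self.history.append(target_bearing)
        stale = self.history[0]  # the AI only "sees" this old frame
        return stale + self.rng.gauss(0.0, self.noise)
```

Against a target moving one degree per frame, the rookie tier trails by up to eleven stale frames plus eight degrees of noise, while the ace tier is nearly exact.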

AI researchers should consult with the folks on K street about reward hacking!

Do 1-on-1 dogfights even happen anymore? Regardless, having our airspace dominated by AI - haunting.

They might if the AI equipped fighter was assured of winning.

But they are, indeed, very rare. The most common scenario in air-to-air combat (still very rare) these days is to lurk, perhaps in a cloud, until your long-range sensors detect another plane, fire a missile at it, and flee, often resulting in a one-shot, one-kill encounter: the other guy, whose sensors don't have quite as long a range, is destroyed just seconds after he realizes he's been shot at.

Indeed, the trend in less affluent military forces is to buy very cheap and inferior aircraft (often trainers or even Cessnas) and then to outfit them with cutting-edge long-range sensors and long-range missiles, since the ability to maneuver with agility is increasingly irrelevant.

They might be rare, but that is only because no one has been flying into airspace with leading-edge air defenses in a long time. Air superiority in a conflict between, say, NATO and Russia would be hard to call and could involve air-to-air combat and close engagements, or not. The F-22 was supposed to be the fighter that would penetrate defended airspace, clearing the way for the F-35 and/or bombers to enter. There are nowhere near enough F-22s to perform that role now. Not sure we would even attempt penetration, since the likelihood of success is debatable. On the other hand, NATO appears to have a fairly sound integrated air defense system as well.

It sounds like the bureaucracy needs to get involved with this. Rewards for not seeing things? Rewards for creating a mess and fixing it? Rewards for finding a loophole and exploiting it?

Maybe the best brains at the EPA should get involved. Otherwise this could be very dangerous: a self-interested, self-motivated bureaucracy.

Yeah, but can the AI play Volleyball?

The AI safety paper doesn't cite any economics. Not a shocker, but I'd think computer-based agents are even more amenable to economic analysis than humans are.

One way to test the safety of an AI is to place it in a simulation of a world much like our own to see what happens. The more realistic the simulation, the less likely we would miss an unintended consequence.

P(we're living in a simulation right now) > 0.

The most significant part of the story is the $35.

We are at the point where significant processing power and Internet connectivity are nearly free. In ten years your Amazon Button will talk to you.

Are you sure you don't need more Tide?

The graduate students did the work for a $200,000 grant, which is about what corporate America charges the TSA to develop a program that tells people waiting in line to go to a random queue. Academia is way, way more cost-effective as an R&D tool than corporate government contractors.

There seem to be two (no, make that three) groups of readers here: those who read the piece and missed the part where they admit the simulation had a very limited (unrealistically low) number of variables, and those who caught that (the third group is those who posted without reading the referenced article).
Also, the general theme here seems to be that future combat is likely to be symmetrical (rather than asymmetrical). When both sides have access to nuclear weapons you've got two rational scenarios: sneak attack or war games. For the latter, it is about the economics and the reputational points being won or lost. I see no reason why human life need be in play, but the warrior mystique is a part of the human condition.

Note too that this combat AI fits in a racing drone. All kinds of scary implications.

Once weaponized quads hit the battlefield, the foot soldier becomes as obsolete as the pilot.

And what, the White House needs protective netting?

That is basically the idea behind the world of Philip K. Dick's "Second Variety."

PKD got there early on so many things.

"We can build planes today that are much faster and more powerful than anything that exists"

I think this is the more salient point. Imagine, instead of a super fighter that can shoot down a MiG-15, swarms of hundreds, even thousands, of very small fighters that can move almost like bumblebees relative to the enemy. It would change the very meaning of air combat.

...and what would these swarms be using as weapons? There's a reason fighters are the size they are, and it isn't just because they're holding a human.

"and what would these swarms be using as weapons? "

I think we call these "swarms" missiles.

"Open the pod bay doors, HAL."
"I'm sorry, Dave. I'm afraid I can't do that."

AI and drones and the like are going to give a technological advantage to modernized armies such as hasn't been seen since the 19th century when Hilaire Belloc wrote his famous lines: "Whatever happens/ we have got/The Maxim gun/ and they have not."

Anybody feel like rebuilding the British Empire?

The reason I mention netting around the White House is that much of this is commodity, off-the-shelf technology.

"So far, seven countries have used armed drones in combat: the United States, Israel, the United Kingdom, Pakistan, Iraq, Nigeria, and Iran. But many other countries are arming drones and it’s only a matter of time before they deploy them in combat. According to New America’s research, 19 countries have armed drones or are acquiring armed drone technology."

Okay, but as the article Alex discusses makes clear, there's a big difference between drones piloted by humans and those piloted by AI. Which of the seven countries that have used drones in combat were using which?

Autonomous flying vehicles and devices are innovating in all directions right now. A cell-phone-directed, AI-assisted drone is only not a weapon because... it hits people in the head by accident.

A rock is a weapon, too, if you pick it up and whip it at somebody; that doesn't make it particularly effective.

Watch a video on "FPV drone racing" and then imagine one chasing you. Not a rock.


My point was that drones are not all the same. I would wager that if two countries actually fought a drone war, the victor would be determined by a) production capacity and b) differences (potentially very subtle) in design or programming creating superior performance. Small advantages on either margin would have large effects when the technology is generally pretty cheap and can be mass-produced, right? Just like a small difference in, say, mutual fund costs has a huge impact on aggregate returns when compounded over long time frames.
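The compounding claim can be checked with back-of-envelope arithmetic (my own toy attrition model with assumed numbers, not anything from the thread's sources):

```python
# Toy attrition model (assumed numbers): two equal drone fleets pair off
# each round; side A wins any duel with probability 0.5 + edge. Tracking
# expected losses shows how a tiny per-duel edge compounds.

def surviving_fleets(edge: float, fleet: int = 1000, rounds: int = 20):
    a = b = float(fleet)
    for _ in range(rounds):
        duels = min(a, b)           # only matched pairs fight
        a -= duels * (0.5 - edge)   # A's expected losses this round
        b -= duels * (0.5 + edge)   # B's expected losses, slightly worse
    return round(a), round(b)

print(surviving_fleets(0.02))  # a 52% per-duel win rate, i.e. edge = 0.02
```

With only a two-point edge per duel, side B is annihilated while side A keeps dozens of drones; production capacity then decides how fast the survivor can be reinforced.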

So a retired pilot lost to the computer AI in an advanced computer game with no real stakes for the pilot. Are we supposed to care and make broad conclusions?

At the current rate of advancement, I would assume that a rare AI win today translates to a majority of AI wins in a decade, and humans not doing it at all in 20 years.

No one expects John Henry, the steel-driving man, to win a contest anymore.

So, when does the real pilot decide to eject? And do we need to arm drones if they can convincingly demonstrate to a human opponent that they (the opponent) cannot win?

So, a Tesla killed a guy. Given the revealed risk in image-processing-based autopilot, upgrade the odds of a ban, downgrade the odds of success.

Really, given the previous non-fatal collision with a stopped truck, the feature should have been disabled across the fleet. Negligence.

I believe Google uses LiDAR, which Tesla has rejected, for such scenarios.

Google ran into a bus. And google is limiting speeds so much that "range anxiety" takes on a new meaning. "Will I be able to hold it after drinking the coffee while my car drives me too slowly to work?"

However, not using Lidar does not mean using only camera sensors.

"The latest software update, 7.0 allows Model S to use its unique combination of cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic. Once you’ve arrived at your destination, Model S scans for a parking space and parallel parks on your command."

The reason the radar didn't trigger a panic stop is that it screens out what look like overhead signs, etc., and registered no obstruction near the ground in front of the car.
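A hypothetical sketch of the kind of filtering being described (invented thresholds and function names, not Tesla's actual logic; real automotive radar stacks are far more involved): a return that is stationary relative to the ground and sits high in elevation looks like an overhead sign, and the failure mode is that a tall, stopped trailer can match the same profile.

```python
# Hypothetical clutter filter (invented thresholds, NOT Tesla's code):
# discard radar returns that are stationary relative to the ground AND
# high in elevation, since those are usually overhead signs or gantries.
# Failure mode: a tall, stopped trailer can match the same profile.

def is_overhead_clutter(doppler_mps: float, elevation_deg: float,
                        own_speed_mps: float) -> bool:
    # A stationary object closes at our own speed (range rate ~ -own_speed).
    stationary = abs(doppler_mps + own_speed_mps) < 1.0
    high = elevation_deg > 2.0  # well above bumper height at range
    return stationary and high

# At 20 m/s: a sign gantry (stationary, 5 deg up) is filtered out,
# while a stopped car at bumper height (0.5 deg) is kept as an obstacle.
print(is_overhead_clutter(-20.0, 5.0, 20.0))
print(is_overhead_clutter(-20.0, 0.5, 20.0))
```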

In fact, ....

"This is not the first time automatic braking systems have malfunctioned, and several have been recalled to fix problems. In November, for instance, Toyota had to recall 31,000 full-sized Lexus and Toyota cars because the automatic braking system radar mistook steel joints or plates in the road for an object ahead and put on the brakes. Also last fall, Ford recalled 37,000 F-150 pickups because they braked with nothing in the way. The company said the radar could become confused when passing a large, reflective truck."

And I've seen a number of shows using lidar, and they point out the limitations, after extolling the technology, as they explain how clever they are in figuring out how to reposition and integrate multiple scans.

I found where the accident took place. Based on streetview, I bet the truck driver failed to yield right of way.

A technical review of the Tesla system is at the link below. It says that Tesla's radar cannot detect motionless objects. That seems odd, but would put you in a bad spot when your methods are image analysis and radar.

I'm sure there are government grants to be had in feminist research of gendering AI fighter pilots. #Equality #FederalGrant$ #JohnL

"The military won’t hesitate to deploy these systems for battlefield dominance..."

I wouldn't be so sure about that. The 'cool' factor has had real effects on our design and deployment decisions for weapon systems, especially jet fighters. For instance, our planes have been and continue to be designed for pilots up to 6'4". We could easily limit this and design more compact cockpits which can carry more fuel or weapons. But we don't.

Further, this 'cool' factor likely drove the imagined need for the F-35. While drones make 99% of the flights, congressional reps would much rather be involved in the F-35 project. Watch a Reaper fly then watch an F-35 fly and tell me which is more exciting.

I imagine we'll keep pilots for awhile because we love our heroes. It's in our mythology. We'll deploy this through drone programs, but won't see major replacements of missions until the missions become sufficiently dangerous or other countries score multiple successes before us.

Sorry, could someone point me to where in the paper it says the grad student used a Raspberry Pi to run the AI?

I am afraid that I fell for the headlines. Digging in, it looks like that was sort of added(?). From the paper itself:

"Again, ALPHA can currently operate alongside AFSIM on a single 3.2 GHz core of a CPU. "

From another article:

"To reach its current performance level, ALPHA’s training has occurred on a $500 consumer-grade PC."

So, cheap but not *that* cheap. You could possibly train on a PC and then run a bit more slowly on a Raspberry Pi.

I can't imagine that we'll have trouble managing superhuman intelligences. After all, look how well we do with merely human intelligence!

Would it be going too far to suggest that certain types of machine learning programs should have to be registered in some sense and have measures to take over control, or manual overrides on some features? Then again, there are other systems where humans are obviously more of a vulnerability to low-quality decisions in certain types of cases.

Mostly, I think there is insufficient consideration of ethics in discussions I've seen about machine learning in recent years. Those I've met who worked on any such related things seem to have the attitude of "it's not my business what use it gets put to" without considering the possibility that they might be in good positions to provide socially useful input on related issues.

Also, I observe some distinction in machine learning stuff sometimes between a) those who think the research will be done according to social or democratic supervision and regulation of some sort, and b) those who seem to think that they'll just do whatever they want, that their particular vision is just likely to pass anyways so if-not-today-then-tomorrow thinking starts to set in on some subjects and people work away on some things which will not ultimately prove useful. Well, I guess it's always at least a little bit like that.

Also, I find that very little discussion of human freedom tends to enter into such conversations, but I think this is mostly because the field is dominated by coders who are preoccupied with other things, generally more technical challenges.

Mostly, I think people seem to assume that we'll have robots which are highly multipurpose in nature, whereas I tend to think that if you're going to use robots, isn't the main benefit precisely in their specialization?

Getting deeper into coding related stuff, I'd definitely have to say that it's not my area of specialization. The reward function stuff makes it sound like the right thing for an economist who has done a lot of work in different kinds of objective functions or something. Which should be lots ...

I think it would be more interesting to branch out into almost a case-study sort of approach instead of basing everything on statistical theoretical assumptions which rely on unlikely specifications of models, etc. I guess it's better to assume that models are incompletely specified than to assume they are completely specified, but I'm not sure whether the numbers really tell you much more as a result.

In terms of the relevance, without high certainty, I think it's entirely possible that machine learning could at some stages make sudden sorts of jumps that catch us off guard. It's probably a good idea for at least some people to be spending at least some time thinking about such things ...

Contrary to nuclear weapons, a sophisticated drone AI could easily be stolen and copied. So basically anyone with any hacking resources will all have the best AI available.

So what is the incentive then to develop AI you know will be stolen quickly? The fact that AI drones are so easy to copy seems to make them useless. Every battle just ends in a stalemate.

AI isn't something that's simply 'done' and shipped. It improves constantly through training data and management. Models can be hacked and stolen, yes, but unless you're in a constant state of hacking the training data you'll always be behind the ball.

Comments for this post are closed