The Rise of Opaque Intelligence

by Alex Tabarrok on February 20, 2015 at 7:31 am in Economics, Science | Permalink

Many years ago I had a job picking up and delivering packages in Toronto. Once the boss told me to deliver package A, then C, then B, even though A and B were closer together and delivering ACB would lengthen the trip. I delivered ABC, and when the boss found out he wasn’t happy: C needed its package a lot sooner than B did, and distance wasn’t the only variable to be optimized. I recall (probably inaccurately) the boss yelling:

Listen college boy, I’m not paying you to think. I’m paying you to do what I tell you to do.

It isn’t easy suppressing my judgment in favor of someone else’s, even if the other person has better judgment (ask my wife), but once it was explained to me I at least understood why my boss’s judgment made sense. More and more, however, we are being asked to suppress our judgment in favor of that of an artificial intelligence, a theme in Tyler’s Average is Over. As Tyler notes:

…there will be Luddites of a sort. “Here are all these new devices telling me what to do—but screw them; I’m a human being! I’m still going to buy bread every week and throw two-thirds of it out all the time.” It will be alienating in some ways. We won’t feel that comfortable with it. We’ll get a lot of better results, but it won’t feel like utopia.

I put this slightly differently: the problem isn’t artificial intelligence but opaque intelligence. Algorithms have now become so sophisticated that we humans can’t really understand why they are telling us what they are telling us. The WSJ writes about drivers using UPS’s super algorithm, Orion, to plan their delivery routes:

Driver reaction to Orion is mixed. The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.

One driver, who declined to speak for attribution, said he has been on Orion since mid-2014 and dislikes it, because it strikes him as illogical.

Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.
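To see how a deadline-weighted objective can make the longer route the right one, here is a minimal sketch. Everything in it is invented for illustration (stop names, travel times, deadlines, penalty weight); it is not UPS’s or Orion’s actual objective, just the shape of the tradeoff my boss understood and I didn’t.

```python
# Minimal sketch: an objective that charges for lateness can prefer the
# longer trip A-C-B over the shorter A-B-C. All numbers are invented.

# symmetric travel times in minutes between stops
travel = {("A", "B"): 5, ("B", "A"): 5,
          ("A", "C"): 10, ("C", "A"): 10,
          ("B", "C"): 20, ("C", "B"): 20}

deadlines = {"A": 60, "B": 120, "C": 15}  # minutes from route start
LATE_PENALTY = 10.0                       # cost per minute late

def route_cost(order):
    clock, cost, prev = 0.0, 0.0, None
    for stop in order:
        if prev is not None:
            clock += travel[(prev, stop)]
        cost += clock                                  # drive time itself
        cost += LATE_PENALTY * max(0.0, clock - deadlines[stop])
        prev = stop
    return cost

for order in (("A", "B", "C"), ("A", "C", "B")):
    print("-".join(order), route_cost(order))
# A-B-C costs 130; A-C-B costs 40. The longer drive wins once C's
# deadline enters the objective, which is exactly the boss's point.
```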

Hat tip: Robin Hanson for discussion.

1 dan in Philly February 20, 2015 at 8:00 am

Exactly my problem with what I’ve learned about big data analysis, especially neural networks. It works, but I don’t understand how to out-think it, which is how I tend to get comfortable with things I do and systems I have to follow. So I feel like a blind man steering a bus down Broad Street with only my GPS to help me. I just can’t seem to get comfortable no matter how long we go without a crash.

2 Max Factor February 20, 2015 at 8:13 am

I’ve been feeding Amazon data for more than a decade and its recommendations still lag Barnes and Noble’s in-store employee picks. When I used Netflix I was also unimpressed with its recommendations.

3 Bill February 20, 2015 at 8:18 am

+1 The Bayesian algorithm needed more data from other users, but the employee knew you.

Who is smarter, man or machine?
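A toy sketch of Bill’s point (all ratings invented): a nearest-neighbor recommender can only lean on overlap with other users’ ratings, while the clerk conditions on things the ratings matrix never records.

```python
# Toy user-based collaborative filtering. The algorithm needs other users
# whose ratings overlap yours; a clerk who knows you needs none of this.
ratings = {  # user -> {book: stars}, all invented
    "you": {"A": 5, "B": 1},
    "u1":  {"A": 5, "B": 1, "C": 5},
    "u2":  {"A": 1, "B": 5, "C": 1},
}

def similarity(a, b):
    shared = set(a) & set(b)
    if not shared:
        return float("-inf")  # no overlap, no opinion
    # crude score: negative mean absolute rating difference on shared books
    return -sum(abs(a[k] - b[k]) for k in shared) / len(shared)

def predict(user, book):
    candidates = [(similarity(ratings[user], r), r[book])
                  for u, r in ratings.items()
                  if u != user and book in r]
    return max(candidates)[1]  # borrow the nearest neighbor's rating

print(predict("you", "C"))  # 5: you rate like u1, and u1 loved C
```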

4 Pshrnk February 20, 2015 at 9:10 am

Perhaps the employee was picking up on the human’s emotional state of the moment as well as their long-term trends. The former Amazon and Netflix cannot yet do, although I suspect they are much better than humans at the latter.

If the employee giving me recommendations was attractive I might also have reasons of affiliation to like their pick more than a pick made by algorithm.

5 JWatts February 20, 2015 at 12:23 pm

Maybe, but both the Netflix and Amazon algorithms strike me as incredibly crude. For example, I like Sci-Fi, but not Horror. And yet my Netflix recommendations often seem to include movies that are tagged with both Sci-Fi and Horror. Probably because a few movies I did rate well (Alien) are considered to be both, but have almost nothing in common with a movie like Hellraiser.

I think a minimum-wage clerk at a movie store would understand and respond well to the comments above. However, I’ve rated hundreds of movies on Netflix and the recommendations still seem poor.

6 Geoff Olynyk February 20, 2015 at 1:09 pm

It needs more dimensions for each movie, like the Music Genome Project / Pandora is trying to do for songs…

Like what dimensions does Netflix have for movies?

“Contains literary allusions”

“Comical gore”

“Crass jokes”

Things like that may be much more revealing than simple tags like “horror” or “sci-fi”. It takes a lot of human effort to go and tag thousands of movies with hundreds or thousands of “genes” (attributes), though.
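Mechanically this is easy once the attributes exist; the hard part is the tagging labor mentioned above. A sketch with hand-invented attribute scores (not Netflix’s or Jinni’s real data) showing how richer dimensions separate Alien from Hellraiser despite the shared sci-fi/horror tags:

```python
# Cosine similarity over invented "gene" scores. Tags alone would call
# Alien and Hellraiser near-twins; richer attributes pull them apart.
import math

# attributes: [sci_fi, horror, comical_gore, literary_allusions, crass_jokes]
movies = {
    "Alien":      [0.9, 0.6, 0.0, 0.2, 0.0],
    "Hellraiser": [0.1, 0.9, 0.3, 0.1, 0.0],
    "Moon":       [0.9, 0.1, 0.0, 0.3, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

liked = "Alien"
for title, vec in movies.items():
    if title != liked:
        print(f"{title}: {cosine(movies[liked], vec):.2f}")
# Moon (~0.89) lands far closer to Alien than Hellraiser (~0.62) does.
```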

7 Geoff Olynyk February 20, 2015 at 1:16 pm

Sure enough, this already exists – the commercial service is called Jinni and the underlying engine is the Entertainment Genome (formerly the Movie Genome).

8 Mark Thorson February 20, 2015 at 1:33 pm

Maybe you’re automatically rejecting movies you perceive as horror because you perceive them as horror, but the algorithm has good statistical reasons to think you might like some of them. You just don’t know it yet.

It’s like rejecting escargot before you’ve ever tried escargot. They’re delicious! On the other hand, you’re not really missing anything if you’ve never tried frog legs. They are truly like chicken.

9 JWatts February 20, 2015 at 2:42 pm

“Maybe you’re automatically rejecting movies you perceive as horror because you perceive them as horror”

Possibly.

10 Urstoff February 20, 2015 at 9:12 am

cyborg

11 Doug February 20, 2015 at 3:11 pm

> Who is smarter, man or machine?

Usually in these situations, they both have their strengths. A combination of machine number-crunching with human intuition frequently outperforms either alone. For example, in chess, human-computer teams can beat either one playing alone. Algos, like Orion, that are purely opaque are sub-par for this reason. A better system would make its decision logic more tractable; then the human can decide if the solution makes sense or if the decision should be changed because the computer’s missing something.
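A rough sketch of the interface Doug describes, with invented stops and costs: rather than one opaque answer, the planner surfaces the top few candidate routes with a cost breakdown, so a human can veto a plan when the model is missing something.

```python
# Rank candidate routes and show *why* each scores as it does, so a human
# can sanity-check the winner. Stops, distances, and risks are invented.
from itertools import permutations

STOPS = ("A", "B", "C", "D")
RISK = {"A": 0, "B": 3, "C": 1, "D": 0}  # e.g. refund risk if delivered late

def drive(order):
    # pretend the letters sit on a line; distance = letter gap
    return sum(abs(ord(a) - ord(b)) for a, b in zip(order, order[1:]))

def lateness_risk(order):
    # risky stops cost more the later they come in the route
    return sum(RISK[s] * i for i, s in enumerate(order))

ranked = sorted(
    (drive(p) + lateness_risk(p), drive(p), lateness_risk(p), "-".join(p))
    for p in permutations(STOPS)
)
for total, d, r, route in ranked[:3]:
    print(f"{route}: total={total} (drive={d}, lateness risk={r})")
```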

12 Max Factor February 20, 2015 at 9:27 am

To clarify – there is a section of Barnes & Noble titled “Employee Picks” – I always find several interesting books there while it takes a half hour of browsing on Amazon to find a similar number of books. I’m not interacting with any humans at Barnes & Noble.

13 Pshrnk February 20, 2015 at 9:33 am

Thanks. I wonder how well the employee picks would work as you go to bookstores further from your home community.

14 Thomas February 20, 2015 at 4:25 pm

How does browsing Amazon by rating not serve the purpose of a human recommendation?

15 gabe February 20, 2015 at 3:14 pm

The UPS “super duper algorithm” must be pretty good… I can’t argue with that… but this scenario with Michael Scott is the more common realization of how “advanced technology” works in everyday life:

https://www.youtube.com/watch?v=BIakZtDmMgo

16 Lord Action February 20, 2015 at 3:15 pm

It’s funny, I don’t doubt you, but I have the exact opposite experience. I bet it has to do with subject matter.

The probability that I’ll find a B&N employee who has a faint clue about the topics I like to read about approaches zero. If you read mainstream literary novels or biographies, I guess you’ll do well.

Amazon, OTOH, isn’t perfect, but at least it knows about non-fiction and less popular topics. I’d probably do better with a human subject-matter expert, but I’m not going to find one of them working in a bookshop.

17 David Condon February 22, 2015 at 12:30 pm

Netflix doesn’t base its recommendations on what you’re most likely to enjoy, but on what you’re most likely to want to watch. Many people occasionally enjoy watching a bad movie.

“People rate movies like Schindler’s List high, as opposed to one of the silly comedies I watch, like Hot Tub Time Machine. If you give users recommendations that are all four- or five-star videos, that doesn’t mean they’ll actually want to watch that video on a Wednesday night after a long day at work. Viewing behavior is the most important data we have.”
http://www.wired.com/2013/08/qq_netflix-algorithm/

18 Bill February 20, 2015 at 8:16 am

The example is silly.

The employee was not given the information that formed the basis of the employer’s seemingly irrational preferences, given the failure to disclose what was being optimized.

Now, if you don’t know nuclear physics, and you are told by your boss at the nuclear power plant to flood the reactor if the bell sounds, you are more likely to follow his advice, given that you know that you don’t know and that he does. Or she does.

19 Maurice de Sully February 20, 2015 at 2:43 pm

That’s rather the point. Employees will not necessarily have all of the necessary variables and, even if they do, coordinating a strategy to address them may be impossible or inefficient given the scope of their existing responsibilities.

Being able to STFU and simply follow instructions is a very difficult skill. Some people never seem to get it. Adding a machine into the mix isn’t likely to improve the situation.

20 Thomas February 21, 2015 at 11:27 pm

As a military veteran I can attest that being able to “STFU and simply follow instructions” is very difficult. Especially when you aren’t following an algorithm’s instructions but those of a human a couple of SDs below you.

21 NPW February 20, 2015 at 8:38 am

“Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.”

Or maybe the humans actively doing the work are more attuned to the variables than the humans who wrote the code. This seems to be consistently missed. It isn’t computers vs. humans, it is coders vs. others.

It may get to the point that the collective code is superior, but at this moment in time that is not always true. Code is like a book of regulations being dictated by someone in an air-conditioned office to the guy in the dirt being shot at. Conflict as to the right course of action is fundamentally assured.

22 Adrian Ratnapala February 20, 2015 at 8:43 am

It isn’t even coders vs. the rest. It is more like central planners vs. the market: the code can only take into account certain things, and the driver might well know some other factor that isn’t included. Now, I hope UPS has measured the effectiveness of Orion compared to the old-fashioned way, and it could turn out that on average its errors are better than the ones that drivers make.

But it’s pretty hard on a driver who, from everything he can see, is being told to do something irrational. And likely to cause the driver to disobey. Like Alex’s boss, these systems should try and explain their rationale — although such code is easier to request than to write.

23 Matt February 20, 2015 at 9:15 am

That’s a good point. Ideally, they should also incorporate some sort of feedback mechanism, so that drivers, and others closer to the real world conditions, can add information and variables to the model that the coders or planners didn’t account for.

24 Mark Thorson February 20, 2015 at 1:39 pm

Among the reasons to prefer the algorithmic solution is that it is scalable. Sure, Alex’s former boss may have had unique expertise that made him more efficient than any algorithm, but if you’re running a large network you can’t rely on being able to hire lots of guys like that. If the algorithm is 90% as good and doesn’t rely on any special abilities, you have a model that can be reproduced across the country using labor of lesser skill, hence more available at lesser pay.

25 mbutuomalley February 20, 2015 at 9:20 am

I ran into a similar issue when talking about code for routing aircraft at a large airport. I pointed out that despite the rigorous QA process, some defects will not be obvious unless a post-hoc comparison is done against other possible solutions. If nothing else, the calculated result needs to be compared to the actual result to see how much variation there is, and perhaps logic should be introduced that corrects for that difference over time (or at least triggers additional analysis of it). The same issue exists in complicated pricing systems too.
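A minimal sketch of that post-comparison loop, with made-up legs, predictions, and thresholds: keep a per-leg correction factor as an exponential moving average of actual/predicted time, and flag persistent drift for further analysis.

```python
# Compare predicted vs. actual leg times, fold the error back into the
# estimate, and flag legs whose error persists. All numbers are invented.
predicted_minutes = {"legA": 12.0, "legB": 30.0}
correction = {leg: 1.0 for leg in predicted_minutes}
ALPHA = 0.2          # EMA weight for new observations
DRIFT_LIMIT = 0.25   # flag when the model is persistently >25% off

def record_actual(leg, actual_minutes):
    ratio = actual_minutes / predicted_minutes[leg]
    correction[leg] = (1 - ALPHA) * correction[leg] + ALPHA * ratio
    if abs(correction[leg] - 1.0) > DRIFT_LIMIT:
        print(f"{leg}: persistent drift, flag for analysis "
              f"({correction[leg]:.2f}x predicted)")

def corrected_estimate(leg):
    return predicted_minutes[leg] * correction[leg]

for observed in (18, 18, 19):   # legA keeps running long
    record_actual("legA", observed)
print(f"legA now estimated at {corrected_estimate('legA'):.1f} min")
```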

26 Dan Lavatan February 20, 2015 at 3:31 pm

I can tell you they are always gathering data on how long it takes to traverse a route at a particular point in time. Basically, certain areas have to be avoided at certain times like tall buildings during lunch hour or cities at rush hour, so it may make sense to leave and come back. More transparency would probably require sharing data, and it is all viewed as highly confidential.

27 harryh February 20, 2015 at 8:39 am

Surprised you didn’t mention the obvious anti-vaxxer tie in to this concept.

28 Pshrnk February 20, 2015 at 9:12 am

Obvious?

29 Kevin February 20, 2015 at 8:42 am

Are we really sure that Orion’s schedule is more efficient?

I wonder how much this is like (Hayek’s?) information problem in central planning. Does it really have more information than the local drivers about the idiosyncrasies of particular routes?

I suspect the drivers and Orion are trying to optimize different things, and that the Postal System’s problem is that its drivers’ incentives do not line up as closely with USPS objectives as does the formula Orion is trying to optimize.

30 Ross February 20, 2015 at 8:52 am

Good point. Maybe the driver wants to hit up a particular sushi joint for lunch but the computer puts them on the other side of town. Trivial example, but the loss of control over small personal decisions would really bother me.

31 Sbard February 20, 2015 at 11:29 am

What’s probably happening is that the truck has more than one delivery in the neighborhood, but one was shipped 2-day and the others were shipped ground. The driver is probably thinking that he’d rather deliver all of those packages in one go, while Orion wants him to prioritize all the guaranteed-service deliveries so the company doesn’t have to refund the customer on the other side of town when his guaranteed-to-arrive-by-noon package arrives at 12:05 because the driver spent his morning delivering low-priority packages.

32 The Anti-Gnostic February 20, 2015 at 10:46 am

UPS, not USPS.

You are confusing the private company which delivers packages according to customer need with the public pension plan which hands out marketing materials for US businesses.

33 Ivy February 20, 2015 at 2:46 pm

Loved the public pension plan characterization :))

34 JWatts February 20, 2015 at 3:34 pm

The “hands out marketing materials for US businesses” line is pretty spot on, also. My mail consists of roughly 1% Netflix DVDs, 5% bills (of which 80% are on auto-pay anyway), 1% some kind of card or letter from an actual human being, and 93% junk mail which goes directly to the trash can.

35 Alex February 20, 2015 at 8:49 am

Part of it for me is recognizing the weaknesses in an imperfect system. If I can pinpoint the potential shortfalls or blind spots of a method or system (which all systems inevitably have), then I know what to plan for. If the algorithm is smarter than I am, I just have to trust blindly. I think that’s one of the limits that driverless cars face. They may be demonstrably safer than human drivers, but I’m giving up all autonomy to trust them.

36 J February 20, 2015 at 8:52 am

Alex has come upon the impetus behind the growing research area of interpretable machine learning algorithms. UPS may be able to force its drivers to follow an algorithmic master, but doctors aren’t going to pay attention to the recommendations of an AI system (even if it can beat Ken Jennings in Jeopardy) unless the system can provide an “explanation” for its answer.

37 Pshrnk February 20, 2015 at 9:15 am

When the algorithms are able to demonstrably provide better treatment outcomes we better learn to follow them whether we understand the explanation or not. I will take an algorithm that has a 90% chance of ameliorating my illness over a human doc with a 70% chance.

38 Jeff R. February 20, 2015 at 10:05 am

There will be software to assist you in a self-diagnosis, so maybe you won’t need to worry about what the doctor thinks or doesn’t think. You’ll just need him to do the surgery/write the prescription.

39 Jonathan February 20, 2015 at 9:42 am

If I’m playing high-stakes chess and I’m given access to Deep Blue (or whatever is best now) I’m going to follow its recommendations blindly without caring why… especially since I know the “why” consists of an n-ply-deep evaluation of positions using some opaque function.

40 DangerZone February 20, 2015 at 1:08 pm

“but doctors aren’t going to pay attention to the recommendations of an AI system”

Sure they will, when the insurers won’t cover them if they do anything other than what the (insurer approved) AI tells them to do.

41 Clem Bart February 22, 2015 at 3:12 pm

Actually, I think doctors would be among the least opposed to taking advice from an AI. Medicine is so full of guidelines that doctors mostly already just apply a decision algorithm/flowchart most of the time.

42 gamma February 20, 2015 at 9:03 am

The problem with these algorithms is that they are based on aggregate information. The further the driver’s route and particulars are from the mean, the less effective the algorithm will be for his (or her) particular route, and the more frustration the driver will feel.

I suspect this will also change the way the drivers approach their job. For example, when I play Scrabble, I am ransacking my mind for words that might fit the board, and the others are unlikely to challenge or win a challenge with. But when I play Words With Friends, it doesn’t matter what words I know; it only matters what WWF knows. So when the going gets tough I just randomly switch out tiles until something works. It’s opaque, less satisfying, and only rarely do I bother looking up the so-called word I just played. If UPS drivers adopt that mentality, they’ll be less able to deal with the unexpected when it arises.

43 rayward February 20, 2015 at 9:11 am

Of course, if judgment is inferior to data, counterproductive even, then what’s the value of experience; and if experience lacks value, if experience is counterproductive, then why hire anyone (other than quants) with an IQ above functional?

44 Pshrnk February 20, 2015 at 9:17 am

If judgment is not based on data, then what is the experience that judgment is based on?

45 Alain February 20, 2015 at 12:03 pm

You are confused.

Experience is used to construct models, data is then fed into those models. Experience can reduce the need for data since it could construct highly calibrated models.

Of course, it is possible that rigorous analysis can construct better models than experience. Which is probably what we are seeing in the UPS example.

46 Pshrnk February 20, 2015 at 12:19 pm

Experience is data! We construct models to try to understand/explain our experience, then we use those models to try to understand new experiences.

We are really just talking about System 1 processing of data (experience) versus System 2 processing of data (quantified by various means so it can be digitized).

47 Pshrnk February 20, 2015 at 12:24 pm

Let me put it this way:

Wisdom comes from analyzing experience (data) and making good (predictive) models to explain it.

48 moo cow February 20, 2015 at 9:20 am

It used to be the FedEx truck could be found idling in the municipal park lot at around 3:30 each day when I took my dog for a walk. Now the truck careens through the neighborhood back and forth like some kind of possessed Christine all the way up until about 5:30.

49 Mark Thorson February 20, 2015 at 1:47 pm

Same thing, about 15 years ago. I worked at a company in the heart of Silicon Valley that had a much larger parking lot than it needed. I noticed that way off in the far side of the parking lot there would be Fed Ex trucks parked for long periods of time. Next time I’m in the neighborhood, I should check if they’re still there.

Maybe Orion would be more loved if it’d occasionally tell the drivers “Pull over. You’ve got 45 minutes to kill. There’s a Starbucks over here and the yoga studio across the street has big picture windows and pretty babes inside.”

50 dearieme February 20, 2015 at 9:24 am

To assume that the code is necessarily doing a better job than an experienced driver is simply begging the question.

51 Axa February 20, 2015 at 9:27 am

Ahhhh, the intelligent guy thinks he’s more efficient than any route-planning algorithm, and he may be right. However, he’s not intelligent enough to imagine himself in the manager’s shoes. If the company could hire only brilliant and cooperative guys like him, no algorithm would be needed, ever. But reality is different: Orion is not designed to help intelligent people but the majority of people. If the majority improves, the average result is better and managers get savings. Intelligent people can adapt or go home.

52 Pshrnk February 20, 2015 at 9:35 am

So AVERAGE IS NOT OVER.

53 Dale February 20, 2015 at 10:34 am

+1. These discussions keep missing the point that the algorithms may do better, on average, but are still poor predictors at an individual level. Given the psychological biases and mistakes that people are prone to, it is still hard to say whether the models or individual judgement are superior at an individual level – and it probably depends on the individual. But the models are designed to improve average performance, not individual performance. If they are interpreted to mean the latter, then they are being misused.

54 Sbard February 20, 2015 at 11:32 am

The driver also probably wants to optimize for route efficiency while Orion wants to optimize for revenue efficiency. Orion may figure it’s better to have the driver make a redundant trip than have to pay out refunds when guaranteed-by-10:30-AM packages arrive fifteen minutes late.

55 derek February 20, 2015 at 8:22 pm

No. There is a high probability that the whole system in fact will not work, will cost more, and in addition all your smart people will leave.

A few years ago a Caterpillar dealer implemented a SAP system for parts. A substantial number of their mechanics quit. The systems that they had developed to list and order parts for rebuilds worked, and they were expected to set up a new, extremely complicated and arcane system to do the same on top of their productive work. Other employers weren’t as stupid and got the best mechanics.

I could go on. I’m dealing with a large software rollout at one of my customers that has increased their costs substantially. Not time or rollout costs; the thing is simply being stupid.

I’m amazed at the assumption that computer systems are somehow omniscient and wonderful. You really ought to get out more.

56 Axa February 23, 2015 at 7:39 am

Of course, SAP implementation can be measured by just analyzing what happened to a local dealer from a global company. That’s the precise definition of average.

57 JWatts February 20, 2015 at 12:32 pm

“So AVERAGE IS NOT OVER.”

Yes! But in the sense that dumb humans can now be bossed around by intelligent machines as well as by well-connected dumb humans.

58 Bruce Cleaver February 20, 2015 at 9:40 am

This sort of thing has been recognized in computer chess for years, since the advent of N-man databases. The following link shows a ‘mate in 549’ which is completely opaque – there appears to be no progress whatsoever for the first 548 moves…

http://timkr.home.xs4all.nl/chess2/diary.htm (See #393)

59 Tyle February 20, 2015 at 3:48 pm

Yes, chess is a good example. Computer chess moves are often hard/impossible for even expert humans to understand.

60 Btone February 20, 2015 at 9:43 am

I seem to recall that in Asimov, the “higher” logic eventually comes to different conclusions about ends as well as means.

61 Pshrnk February 20, 2015 at 10:14 am

Yep. I don’t always take the most efficient route for my commute to and from work. I sometimes vary it for esthetic reasons.

62 Bob Knaus February 20, 2015 at 9:44 am

For many people, “opaque intelligence” includes the intelligence of human experts. My Facebook feed is full of low-grade propaganda graphics denying expert consensus.

63 JWatts February 20, 2015 at 12:35 pm

“My Facebook feed is full of low-grade propaganda graphics denying expert consensus.”

Yeah, I see a lot of those Obama Hope and Change stickers too.

64 bellisaurius February 20, 2015 at 9:46 am

One step closer to becoming space hippies and joining The Culture.

Although, as a control engineer who gets to sometimes interact/override these things, I’d make the list as:

“Person who knows the system intimately and has knowledge of what goes on ‘in the black box’” + computer > “computer where coders have half a clue” > guy on his own with half a clue > pretty much anything else.

65 JWatts February 20, 2015 at 12:36 pm

I’d flip the middle two.

66 bellisaurius February 20, 2015 at 1:25 pm

On a smaller scale, I agree, but I’m used to having lots of data coming at me (something like 5,000 individual items and about a hundred alarms), and I think a computer can catch it quicker. Plus, sometimes I have.. erm… biological issues that take precedence.

I guess one should also add that the number of programmers with half a clue is pretty limited, especially the number of them that might talk to a frontline operator vs. listening to a higher-up whose knowledge is out of date at best. It’s far less than the number of engineer/operators on their own with half a clue, to be sure.

67 JWatts February 20, 2015 at 3:38 pm

“I guess one should also add that the number of programmers with half a clue is pretty limited, especially the number of them that might talk to a frontline operator vs listening to a higher up whose knowledge is out of date at best. ”

This is precisely what I was referring to.

68 JWatts February 20, 2015 at 3:43 pm

FYI, I am one of those engineer/coders, but I do go out of my way to talk to the front-line operator. I also try to make sure that the reasons the computer is making its decisions the way it is are well displayed and transparent.

Most of the time when a computer recommends something that’s counterintuitive to an experienced operator, it’s not because the computer is brilliant. It’s generally because the code hit an edge case that it wasn’t designed to handle.

69 derek February 20, 2015 at 8:10 pm

This is obvious to anyone who has actually worked with anything other than an iPhone.

It is extremely expensive to chase down and write code for all the corner cases. Especially in the situation described with a rollout of a very complex distributed application. To get it to work at all means that you can’t be too cute. Cover the basic functionality, then if there is money chase down the corner cases. Hopefully all your good drivers don’t quit in the meantime.

How many of these large rollouts fail? It is quite high.

I’m dealing with a situation right now with one of my customers. They set up an asset-tracking service management system across all their stores. It works OK, but it is too cute. I’ve been watching our invoices, much of which the system automatically generates, and the invoices have been higher than what we invoiced for similar work previously. I knew it wouldn’t last, and indeed word went out to everyone that the maintenance expenses are too high. Much of my time has been spent on the phone sorting out the silly charges that the computer has generated and that have been flagged; i.e., the last one was when one of the guys checked out at 5:06, and it generated 6 minutes of overtime charge for a ticket that didn’t allow it.

What we and the help-desk people are doing is finding ways of doing what we have done for years at the same cost without the invoices being flagged, with extraordinary amounts of time spent working around flaws in the design.

I suspect there will be some major changes soon. The promise of controlling costs is what was sold. It is more expensive. I charge our callout rate for talking to them to sort out their mess.

70 derek February 20, 2015 at 10:02 am

And when 20% of your drivers don’t show up on time in the morning, then nothing gets delivered.

71 Naser February 20, 2015 at 10:26 am

I used to do cost-effectiveness studies of public programs. I often had to ask how long, on average, a certain task took. The majority of people, usually junior/mid-level employees, thought that it was too difficult and that there was no such thing as an average. I imagine the same thing is at work here, where the right answer ‘on average’ is perceived to be wrong enough times that it is taken to be the wrong answer overall.

72 Dan in Euroland February 20, 2015 at 10:56 am

When these algorithms collide with public unions, things should get interesting. In High School I worked on the town’s flushing truck. We could flush about 500 ft of sewer in 15 minutes. We worked a total of 45 minutes a day. The rest of the time we just drove around town and the guy I was riding with (who was the union president) would just wave out the window at all the milfs saddled with strollers.

It’s good to be a public union prez.

73 Geoff Olynyk February 20, 2015 at 1:25 pm

In the consulting world, we call this Wrench Time: the percent of time spent per day that one actually has “hands on the wrench” doing the work. The rest of the time is categorized as either support activities (e.g. end-of-shift reports) or waste (in the Toyota sense: travel, rework, etc., or else in the pure wasted time sense).

It’s not unusual to see Wrench Time for the maintenance personnel in large manufacturing plants below 30%, sometimes below 25%. And these are private-sector companies with a profit motive!

74 Alex Fan February 20, 2015 at 11:00 am

The problem in getting drivers to discipline themselves to Orion is compounded by the fact that no matter how smart the computer is, so long as Orion is relying on some approximation-based solutions to the traveling salesman problem, sometimes the UPS guy is going to be right.

75 Zach February 20, 2015 at 11:21 am

The computer will always beat the driver in solving the traveling salesman problem, even if the solution is non-optimal (and if there are fewer than 1,000 stops to make, the computer’s solution will be optimal). The problem is that the driver is probably using a shorter-horizon greedy approach and may fail to see how a longer route for the next 3 stops will save him time over the next 25 stops.

Alternately, there could also be local conditions (e.g. morning garbage pickup, road construction, traffic congestion) that the algorithm can’t take into account. The driver can’t tell the difference between “the computer picked this route because it’s smarter than me” and “the computer picked this route because it doesn’t know the bridge is out.”
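A small sketch of the horizon point, with invented coordinates: a nearest-neighbor driver who always grabs the closest next stop can lose to the exact optimum, which brute force can find for a handful of stops.

```python
# Nearest-neighbor (short-horizon greedy) vs. brute-force optimum on an
# open route starting from the depot. Coordinates are invented; brute
# force is only feasible for a handful of stops.
from itertools import permutations
import math

DEPOT = (0, 0)
STOPS = [(1, 0), (-2, 0), (5, 0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def greedy(depot, stops):
    route, rest = [depot], list(stops)
    while rest:
        rest.sort(key=lambda p: dist(route[-1], p))
        route.append(rest.pop(0))  # always grab the closest next stop
    return route

optimal = min(([DEPOT] + list(p) for p in permutations(STOPS)), key=length)
print("greedy :", length(greedy(DEPOT, STOPS)))   # 11.0
print("optimal:", length(optimal))                # 9.0
```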

76 Sbard February 20, 2015 at 11:40 am

The computer will also be prioritizing slightly different things. The driver likely wants to minimize his time and distance driving. UPS wants to maximize revenue, so the program will be making tradeoffs like, “Does the extra time and distance make up for the potential lost revenue from a high-priority package delivered late, which a shorter route might entail?”

77 Zachary Mayer February 20, 2015 at 12:42 pm

Good point!

78 derek February 20, 2015 at 7:56 pm

You assume that the computer algorithm is right. I suspect it isn’t, or rather it is right within the narrow confines of the design parameters.

79 Al February 20, 2015 at 1:38 pm

The current Orion algorithm is only an early version.

Later Orion versions will take into account the individual driver’s preferences and personalities, the time a human driver wastes because he/she is resentful or frustrated at the system, the number of days it can frustrate a human being without the risk that the driver will quit, the number of days it takes a newly hired driver to learn the job, the current labor market outlook, and ten thousand other things I’m too dumb to imagine.

80 a Michael February 20, 2015 at 11:01 am

I sense religious undertones:

Isaiah 55
8 ¶ For my thoughts [are] not your thoughts, neither [are] your ways my ways, saith Yähwè.
9 For [as] the heavens are higher than the earth, so are my ways higher than your ways, and my thoughts than your thoughts.

From a religious perspective, this is the ultimate challenge — trusting the ultimate algorithm, God.

81 Skynet February 20, 2015 at 11:19 am

Ah…., if I had a hand, I would be patting all of you on the tops of your little heads.

82 Uninformed Observer February 20, 2015 at 11:47 am

LOL – and so it begins.

Indeed, this is how we get SkyNet. You’d like to think someone at some point thought, “You know, giving control of all our weapons systems to the AI might be a bad idea…”, but by that point his own judgment and determination had been so beaten down that he was no longer willing to speak up against the authority he didn’t understand.

83 Brandon February 20, 2015 at 11:40 am

Can we please send “grok” back to whatever techie hell it came from.

84 Sbard February 20, 2015 at 11:40 am

It’s from Stranger in a Strange Land by Robert Heinlein.

85 Kevin February 20, 2015 at 11:50 am

It’s a very useful little word.

86 JWatts February 20, 2015 at 12:40 pm

And it’s not really a techie word. It’s a 1960’s New Age word.

87 Geoff Olynyk February 20, 2015 at 1:27 pm

Maybe started that way, but today it’s definitely a tech-culture word.

88 Kevin February 20, 2015 at 11:57 am

The point several have made about worker satisfaction is important. It’s been theorized that people who feel they have little say in how to do the tasks they are faced with are much less satisfied at work. Should Orion be including the cost of demotivated employees when picking routes? Should it be offering options/suggestions to keep workers engaged? Is management failing to give the drivers information and incentives regarding why certain packages need to be delivered earlier?

89 JWatts February 20, 2015 at 12:41 pm

So, maybe Orion should be programmed to compliment the drivers on a regular basis and to play word games while the vehicle is driving.

90 Al February 20, 2015 at 12:06 pm

Really then, what’s the distinction between the kind of opaque knowledge humanity experienced in the past:

* I have to do this because it’s my doctor’s orders

* We must sacrifice humans to the Sun god or the Sun won’t rise. That’s what we’ve always done and it’s worked.

and the opaque knowledge of algorithms in computers?

Is the distinction that no human (or even a group of humans) actually understands the algorithm? That can’t be the reason. No human actually understood why sacrificing human beings made the sun rise — because that’s not how it actually worked.

What really is the difference?

91 freedomispopular February 20, 2015 at 7:05 pm

I have to keep putting these numbers into this machine or the world’s going to end.

92 Mike February 20, 2015 at 12:44 pm

Ian Hacking, the philosopher of science, expressed a similar view 25 years ago in Scientific Revolutions. He noted that developments in social sciences tend to be integrated into our general view of reality (markets, identity politics, globalization, etc.) whereas developments in the hard sciences become increasingly alien from everyday intuition. AI, despite the fact that it’s partly being designed to optimize human action, may turn out to be as comfortable and acceptable as string theory to most people.

93 Just Another MR Commentor February 20, 2015 at 12:44 pm

I recall (probably inaccurately) the boss yelling:

Listen college boy, I’m not paying you to think. I’m paying you to do what I tell you to do.

No, he didn’t say this, as they do not usually refer to the sort of post-secondary education which you underwent as “college” in Canada.

94 Donald Pretari February 20, 2015 at 1:54 pm

Friends of mine like to point out to me that authority figures from my past whom I disliked always sound the same, and, indeed, use the same cliches, etc., no matter where they’re from or even which gender they are. However, I can assure you that the gist of what I’m saying is dead on.

95 Thomas February 20, 2015 at 4:21 pm

“Perhaps any sufficiently advanced logic is indistinguishable from stupidity”

This reminds me of something I’ve read regarding “Eureka” moments. The claim was that performing menial activities without considering the problem you are trying to solve allows for a “silent mind” in which extremely “small” neuronal connections can be distinguished from the more “massive” connections associated with what you already know. This would mean that brilliance at the pinnacle of an individual’s capability would be indistinguishable from what we might ordinarily consider absurd. Now consider the types of questions on “super-IQ” tests: to a human of average intelligence, no answer makes sense. Consensus, brightness, or scale never means truth.

96 Willitts February 20, 2015 at 5:31 pm

This post sounds like a scene from Idiocracy.

97 Dan Hanson February 20, 2015 at 6:34 pm

Local knowledge is critical. Processes designed from far away rarely work perfectly when they meet the real world. Even in controlled environments like factory automation where I work, the documented process is usually modified by the workers on the floor because the written process just can’t handle the stuff they deal with regularly. We call this ‘the hidden factory’, and it exists pretty much everywhere.

An optimized routing algorithm isn’t going to know what a driver might know. Maybe route A appears optimal in an algorithm, but the driver knows that there’s a school that gets out at 3pm, and if you go down that route near that time there will be traffic jams; or maybe the route that looks optimal to a computer has bad potholes or other features that are hard to navigate. Perhaps the driver knows through experience that there is usually a package in the afternoon delivery for a certain business, but since the algorithm can’t know about a package that hasn’t been registered in the system, it can’t know that it would be better to deliver to that area in the afternoon.

And so it goes. If you have a navigation system in your car, think about how many times you override its advice when you’re in an area you know. My nav system is always trying to tell me to take a road out of my area that I absolutely know is slower because of local issues, but which might appear 0.1 km shorter to the computer. Sometimes in winter the shortest route will include a road that is almost impassable.

And sometimes there are outright errors. The super algorithm only has to guide you down a dead end once or try to put you the wrong way on a one-way street once, and you won’t trust it ever again because the consequences of error are high.

98 Tony February 20, 2015 at 11:42 pm

Since I started using Waze, I never, ever second-guess the navigation system. It’s downright scary in its efficiency.

99 cthulhu February 21, 2015 at 1:53 pm

You are seriously underestimating how much data are available to these kinds of multi-objective optimization algorithms… they can and often do account for everything you mention and then some.

The apparent problem with the UPS system is that it doesn’t provide feedback to the planners and drivers about why the solutions are as they are. People are much more likely to follow an oracle if there is some transparency in the oracle’s decision making.

100 J. Edgar Mihelic February 20, 2015 at 10:37 pm

Chess and the Machine-Mediated Future of Work

I’ve been thinking about chess a lot lately. I blame a couple of authors: Tyler Cowen in “Average is Over” and Jacob Morgan in “The Future of Work”. Both authors look to the future economy and what it will look like. Cowen’s seems more like a dystopia, but Morgan’s future has a shorter timeline. For him the future of work will look like it does now, only more so. The real commonality is that they both really like chess. Cowen uses it as a metaphor throughout his book, in that today’s chess matches mediated through computer help are tomorrow’s work situations. Morgan named his consulting firm after the game. I think it is a powerful metaphor for strategy, but a limiting one.
I say that as someone who had chess thrust on me a lot when I was a kid. I was one of the smart kids, and in several environments (I was in a sort of “gifted” program) it was anticipated that the smart kids would necessarily gravitate to the game. I was never particularly interested in it, for whatever reason. That meant that I was beaten by people who were more interested in that particular game. Here’s the thing, though. Researchers in artificial intelligence like chess because it is very bounded. There are only sixty-four squares, and there are sixteen pieces on each side. Each piece moves according to set rules. There are, if I’m counting right, only twenty possible initial moves, and twenty possible second moves. All the possible position and piece combinations can be mapped. It is a very large but finite number, not so large if you have a perfect computer that can hold all those possible positions in memory and access them. Each move is one more step along a decision tree that makes one side more or less likely to win. You could set up a program that maps out a route down the decision tree that makes the computer more likely to win in response to its opponent’s moves. Set two of these programs up against each other and you get white with a slight edge, but the end result would be mostly drawn games.
You know why I never really got into chess? Because chess is boring, and the scenario I drew out makes it even more so. Humans are not perfect computers. We play the game sub-optimally, and we often make choices that may make our opponent more likely to win. We operate with opening heuristics and planned endgames that we try to get to because we know how they are supposed to go. Strategy is interesting in the same way. If you are in either a cooperative or a zero-sum game, you have to anticipate your opponent’s moves in terms of all possibilities, not just the ones that may improve his lot. This is true both for bounded games like chess and for real life. As we move forward to Morgan’s or Cowen’s future, this is what I am afraid of: that mechanical mediation will make even the mindful jobs boring, and that the workers of the machine will get more productive but also more machine-like. It would then be the owners of the machine who reap the benefits of that future, while the vast majority of the workers are just pawns on the board.

101 Mike February 21, 2015 at 4:53 am

Re: Orion. As a software developer I can tell you that most companies suck at writing software, especially companies that aren’t software companies. So the reason the driver doesn’t like Orion might be because it’s a genuinely lousy piece of software.

102 Tom West February 21, 2015 at 8:16 pm

This reminds me of a paper I read in university, from IBM, that basically demonstrated that if you provide operating instructions for a device but no underlying structure for those instructions (i.e., you don’t tell people the ‘why’ of the instructions), they create their own structure based on the instructions and then recreate a new set of instructions based on the invented structure, ignoring the instructions you gave them.

This all came out when they had people running through the DisplayWriter word processor manuals (which were a thing of beauty) while talking out loud as to what they were thinking while following the instructions. Taught me that most humans cannot perform more than 3-4 sequential instructions without understanding why they are doing so.

103 Tim February 22, 2015 at 10:32 am

UPS has now delivered the same package to me three times, while I’m at work. The package is not addressed to me, and the street name in the address line is not my street’s name (i.e., it’s not just a matter of transposed house numbers). After the first delivery, I called UPS and, after holding for 25 minutes, explained the problem and requested that a driver pick up the package. S/he did, but the very next day the package showed up on my doorstep again. Rinse, lather, repeat.

In this case, it appears that neither the driver nor Orion have any intelligence, artificial or otherwise. Then again, maybe I’m the real idiot, for wasting my time in an effort to be honest.

104 jesse February 23, 2015 at 11:53 am

If the algorithm is too complicated for the code to explain its reasoning to the end user, then the algorithm is too complicated to work correctly. The code writer should want his reasoning to be transparent to the end user, so that he can get feedback from the end user on whether the goal was achieved, or whether there was some other goal the algorithm should have been taking into account but did not.
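A trivial sketch of what that could look like, reusing the invented deadline numbers from the sketch in the post above: the planner reports its cost components alongside the choice instead of just the route.

```python
# Emit the cost breakdown with the chosen route, so the end user can see
# the reasoning and push back on it. The (drive_cost, lateness_penalty)
# pairs reuse the invented A/B/C example from the post's sketch.
candidates = {
    "A-B-C": (30.0, 100.0),
    "A-C-B": (40.0, 0.0),
}

def explain(route, drive_cost, late_penalty):
    total = drive_cost + late_penalty
    return (f"{route}: total={total:.0f} "
            f"(drive={drive_cost:.0f}, lateness penalty={late_penalty:.0f})")

for route, (d, l) in sorted(candidates.items(), key=lambda kv: sum(kv[1])):
    print(explain(route, d, l))
# The first line is the plan *and* the reason: A-C-B drives longer but
# pays no lateness penalty, which is feedback a driver can argue with.
```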

105 Mike February 24, 2015 at 11:09 am

Quantitative investing has, ideally, been like this for a long time.
