In tech, we fear what we can’t control

That is the topic of my new Bloomberg column; here is one bit:

Like drones, driverless cars possess some features of an especially potent scare story. They are a new and exciting technology, and so stories about them get a lot of clicks. We don’t actually know how safe they are, and that uncertainty will spook people above and beyond whatever the particular level of risk turns out to be. Most of all, driverless cars by definition involve humans not feeling in direct control. It is similar to how a lot of people feel they are in greater danger when flying than when driving a car, even though flying is usually safer. Driverless cars raise a lot of questions about driver control: Should you be allowed to sleep in the backseat? Or must you stay by the wheel? That focuses our minds and feelings on the issue of control all the more.

And:

The recent brouhaha over Facebook and Cambridge Analytica (read here and here) reflects some similar issues. Could most Americans clearly and correctly articulate exactly what went wrong in this episode? Probably not, but people do know that when it comes to social networks, their personal data, and algorithms, they don’t exactly feel in control. The murkiness of the events and legal obligations is in fact part of the problem.

When I see a new story or criticism about the tech world, I no longer ask whether the tech companies poll as being popular (they do). I instead wonder whether voters feel in control in a world with North Korean nuclear weapons, an erratic American president and algorithms everywhere. They don’t. Haven’t you wondered why articles about robots putting us all out of work are so popular during a time of full employment?

We are about to enter a new meta-narrative for American society, which I call “re-establishing the feeling of control.” Unfortunately, when you pursue the feeling rather than the actual control, you often end up with neither.

Do read the whole thing.
