The First Law — robot sentences to ponder

The robot administers a small pin prick at random to certain people of its choosing.

The tiny injury pierces the flesh and draws blood.

Mr Reben has nicknamed it ‘The First Law’ after a set of rules devised by sci-fi author Isaac Asimov.

He created it to generate discussion around our fear of man-made machines. He says his latest device shows we need to prepare for the worst.

‘Obviously, a needle is a minimum amount of injury, however – now that this class of robot exists, it will have to be confronted,’ Mr Reben said on his website.

Here is more, with pictures of (slightly) injured humans, via the excellent Mark Thorson.


A guy can write, "Obviously, a needle is a minimum amount of injury ...." But that claim is not obvious or even true.

I was surprised that neither the artist's website, nor the BBC coverage, nor the Fast Company coverage mention whether the pin is reused for multiple pricks, or the risk of cross-infection that re-use could create.

Here is Asimov's First Law of Robotics: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Here is the Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And here is the Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Did Mr. Reben read the Three Laws of Robotics, or is he being ironic?

He's being ironic; it's an art project.

Though my first thought was that the robot uses some other mechanism to identify people likely to have diabetes and administers a blood sugar test.

In an earlier blog post by Cowen I mentioned that many Americans misquote Jesus; here, I am suggesting that Asimov is being misquoted. Here is what Asimov said about clarity in writing: "I made up my mind long ago to follow one cardinal rule in all my writing — to be clear. I have given up all thought of writing poetically or symbolically or experimentally, or in any of the other modes that might (if I were good enough) get me a Pulitzer prize. I would write merely clearly and in this way establish a warm relationship between myself and my readers, and the professional critics — Well, they can do whatever they wish." And here's what Asimov said about ignorance: "There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"

It's called that because it violates the first law, not because it conforms to it.


Sorry, should have provided some context: "Why Asimov's Laws of Robotics Don't Work"

If they had worked, his stories would have been much less interesting. Most people who talk about them in the artificial intelligence and robotics space are dilettantes.

Uhm, this 'class' of robot has existed for a long time, pretty much for as long as robots have existed and inattentive humans have been able to get in their way.

The 1st Law is 'may not harm, or allow to be harmed, a human'. All current robots fail at the latter, and a lot will violate the former if someone gets within reach.

He's created a robot that cannot recognize a human and randomly stabs a pin down if something triggers the sensor. It's functionally equivalent to disabling the interlocks and running out onto a manufacturing assembly line.

"It's functionally equivalent to disabling the interlocks and running out onto a manufacturing assembly line."

Or just walking into the safety cage and closing the door behind you. The safety features assume you aren't actively trying to defeat them and that you have a minimal level of intelligence.

I also struggle with the whole point the guy is attempting to make by creating his special robot. As you note, few existing robots would satisfy any of the three laws Asimov suggested, so it's not clear to me that the robot he created has materially changed the debate.

Personally, in the sense of war machines, rather than attempting to put ethics into the machines, we need to drop the entire idea of autonomous fighting machines and rely on people doing the fighting if one wants to control the impetus of war. Take people out of danger in waging the war and you'll get more war, and then have the types of problems this guy was getting at, I think (a lot of "collateral damage", but largely in the undeveloped world and not in the top-tier countries, I suspect). But at the end of the day, yes, if we make robots to destroy people, they will likely not include code related to the first or second law; again, not really a problem with the robots but with humans.

There's also a bit of irony, if I'm recalling the original story correctly. I think the original bit was that the robots were supposed to be fixing some problem that was too dangerous for humans to fix, but it also endangered the robots, so they never did the work they were sent to do. The solution the technician sent to fix the robots ultimately had to implement was to put himself in danger so the robots would complete their task to save his life.

You gotta love it... "Hey, robots... Leave the killing and causing pain to us." One thing I know for sure... Humans are going to use robots to kill and cause pain... And why shouldn't they join the party? They'll just be following the rule that, if you are superior to it, you can kill it. Maybe robots will help us see ourselves as we really are.

I do hope it disposes of that needle after its RNG lands on "prick".

The fourth law of robotics: A robot must not run away.

"""Russian 'runaway robot' causes traffic jam"""

Better to call them the Three Suggestions of Robotics. Easily ignored, easily circumvented, and the essential nature of machines is complete indifference to what material they do work upon.
