Wednesday, January 28, 2009

Personal Responsibility

In his robot books the late, great Isaac Asimov created the Three Laws of Robotics. For those not familiar with them, they are quoted as being:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those seem fair and reasonable; however, a spin-off series by Roger MacBride Allen, starting with "Caliban", examined them in a greater social context. His novels point out the main problem with the First Law - the "inaction" clause. It means a robot must protect a human being against potential harm, but how does a robot determine that? Simply put, it doesn't. Want to go skiing in a robot society? Sure you can, but you'll need one robot on either side to ensure you don't fall and break a leg, one in front to stop you from hitting anything, and another behind to make sure you don't fall badly. Want to do some cooking? Hmm, you could burn yourself on the hob, and those are very sharp knives. Better in both cases just not to bother, or to let the robots take care of it.

These were written in 1993, but they could be seen as a prediction of the line the present government is taking. Orphi summed it up in a comment when he said "there are rules to keep people from putting other people in unnecessary danger", and that's the point of the rules - to stop you from doing something to other people that might cause them harm or distress. Except how do you judge harm or distress? We're all individuals, so what might be considered distressing to one person might be fine to another; heck, even the same person might find the same thing distressing or not at different times. So how do you judge?

Well, like the robots, the government's answer seems to be not to bother at all and simply judge everything as potentially harmful, and, like the robots, the inaction clause is kicking in to prevent us coming to harm. So we're raising a generation that is expected to look to the government for answers to everything it does - don't smoke; don't do drugs; don't drink; don't loiter on street corners; heck, don't even go out at night; don't bother working. We'll tell you what to do and how to do it - leave it all to us.

Is it any wonder that some simply accept this rule while others react violently to it? Is it any wonder that we seem to be losing our own sense of personal responsibility while at the same time expecting everything to be handed to us on a plate?

Time to stop messing around with under-age drinking laws, smoking bans, and food police, and get back to what the laws, government and police are supposed to be all about - protecting person A from the actions of person B.

2 comments:

Anonymous said...

I think you're being a little unfair on Asimov to suggest that MacBride was the first to consider the implications of the Laws of Robotics. The 'Asimov programme' formed from his robot works was essentially one long demonstration of why the Three Laws, and by extension all simplistic rules-based moral systems, can't work. Almost every one of his stories pivoted on a way in which an unintended consequence of the Laws caused some mysterious problem, and his longer works (like the Caves of Steel trilogy) dedicate whole chapters to issues like how you can make Three-Laws robots discipline children, sacrifice a few that many might benefit, and so forth.

It's unfortunate, IMO, that so many later authors have chosen to pay homage to Asimov by basing their robots on the Three Laws, which is exactly the opposite of the message he was sending. But that kind of unexpected outcome is exactly the sort of thing he liked to play with in his books, so there is a kind of wry poesy to it all.

FlipC said...

Dan, it wasn't meant in any way as a criticism of Asimov. You're right that he understood the problems associated with the simplistic rules and dealt with the fallout in the Caves of Steel series, but to an extent that was exaggerated to provide both a strong contrast and a 'moral' meaning. It was MacBride who, to my mind, dealt with the nitty-gritty of the effect such robots would have on a world's social order, postulated a solution, and then dealt with the fallout from that too.

Oh, and yes, it is amusing that so many take the Three Laws as read - another reason MacBride should be applauded.