
Paging Dr Asimov...

Which always brings up this question in my mind...

Asimov states many times in his books that the Three Laws are hardwired into every Positronic Brain.

How do you hardwire a rule like that????
 
Originally posted by Plankowner:

Asimov states many times in his books that the Three Laws are hardwired into every Positronic Brain.

How do you hardwire a rule like that????
Carefully. I don't think we can really do it with current technology, not having positronic engineering yet.
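For what it's worth, the closest software analogue to "hardwiring" the Laws would probably be a fixed, priority-ordered veto filter that every proposed action must pass before execution. This is purely an illustrative sketch; all the names and the world model here are invented, and nothing like this exists in Asimov's fiction or in real robotics:

```python
# Sketch only: the Three Laws as a fixed-priority filter on proposed actions.
# The "world" dict is a stand-in for whatever perception/prediction the robot has.

def first_law(action, world):
    # Veto any action predicted to injure a human.
    return not world.get("harms_human", {}).get(action, False)

def second_law(action, world):
    # Obey human orders (First Law conflicts were already vetoed above).
    orders = world.get("orders", [])
    return action in orders if orders else True

def third_law(action, world):
    # Protect own existence, subordinate to the first two laws.
    return not world.get("self_destructive", {}).get(action, False)

LAWS = [first_law, second_law, third_law]  # fixed priority order

def permitted(action, world):
    """An action is allowed only if every law, checked in priority order, passes."""
    return all(law(action, world) for law in LAWS)

world = {
    "harms_human": {"strike": True},
    "orders": ["fetch", "strike"],
    "self_destructive": {},
}
print(permitted("fetch", world))   # True
print(permitted("strike", world))  # False: the First Law vetoes even an ordered action
```

Of course, this dodges the hard part, which is exactly what the stories exploit: everything hinges on how reliably the robot can *predict* harm, and on who counts as "human".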
 
I once heard from an Asimov fan that the positronic brain was created by a genius who hardwired the rules in, and after his/her/its death, no one could alter it.

I always thought that it was a really lame explanation but that's what I heard. Personally, I think that it would be ILLEGAL to create a positronic brain without the three laws but that doesn't mean that it wouldn't be done.
 
The Three Laws are hooks for supporting stories. I don't think that anything like them would be particularly necessary, useful or possible in the real world.
 
The whole reason Isaac Asimov put those rules in was to counter the prevailing "Robots as the Enemy of Mankind" plots and treatments that almost everyone in the field was doing.

It was his way of showing the good that robots could do FOR mankind.


In interview after interview, and in his autobiography, as well as other writings, he made it clear that, if there had been as many "good robot" stories as there were "bad robot" stories, he probably would not have made the Three Laws such a fixture.

He would have just used it as one engineer's way of dealing with things, not as a universal mandate.
 
It seems to me that before the Doctor came along, science and technology were the tools of choice for the pulp villain. It was only when the good guys went up against the villain while armed with little more than their wits, a pocketknife, and a squeamish female sidekick (a truly troublesome critter) that good triumphed over evil, and the spaceways were once again made safe for humanity ...

... or were they?

Given a story on The Ingenuity of the Rugged Individual versus the Twisted Science of Menacing Evil, you could always count on the guy with the pocketknife. I think Isaac Asimov was trying to impose a little more uncertainty in what had become a largely formulaic literary genre.

Now, while you can make a sentient robot that will perform an evil act, you must either convince the robot that the act is for the Greater Good, or build the robot without Asimov's Laws in the first place.

But that would be sorta like having a sociopathic bodyguard -- someday, he may turn on you! So always keep fresh batteries in the remote...
 
It seems to me that before the Doctor came along, science and technology were the tools of choice for the pulp villain.
More after.

Think pulp for a second.

Think E.E. "Doc" Smith, for example. Most of his heroes were rugged individualists... with PhDs in the sciences.

Half the bare-chested rugged explorers of darkest Africa in the late 1800s had archaeology[1] degrees.

Even the old staples. Flash Gordon has Dr Zarkov in his corner. So Zarkov wasn't "the" hero, but he was still there.

[1] Well, Egyptology degrees
 
Originally posted by Frank Mikes:
I once heard from an Asimov fan that the positronic brain was created by a genius who hardwired the rules in, and after his/her/its death, no one could alter it.
Except I distinctly recall one story ("Little Lost Robot") where a whole series of robots were built with a slightly altered First Law. They removed the "or, through inaction, allow a human being to come to harm" clause. Susan Calvin is required to find a lost robot hiding among an identical set of them on a scientific research station.
 
Human nature indicates that many robots will be built with the express purpose of harming human beings. Therefore the Laws, while a fascinating logical construct on which Dr. Asimov built fascinating stories, may have only selective applications in the real world.
 
Originally posted by Daneel Olivaw:
Human nature indicates that many robots will be built with the express purpose of harming human beings.
Such as, oh, cruise missiles and Predator drones.
 
Originally posted by BlackBat242:
The whole reason Isaac Asimov put those rules in was to counter the prevailing "Robots as the Enemy of Mankind" plots and treatments that almost everyone in the field was doing.

It was his way of showing the good that robots could do FOR mankind.
Exactly - he wanted to counter the "Frankenstein Complex" that people seemed to have at the time. Once he'd proved his point he looked for ways to expand the storytelling possibilities within the future history he'd established - hence the dangerous robot mentioned by tjoneslo (which was unstable because of the tinkering with the laws), and the story where the robots decide that any definition of human should include themselves ("That Thou Art Mindful of Him").

Ironically, the Frankenstein Complex is still firmly with us, at the movies - I hated "I, Robot" because it took his ideas and, by cherry-picking the least representative stories, completely turned his message around. Also because Hollywood couldn't cope with a strong female character who was nobody's love interest... </rant>

John
 
Randy Weaver's (the white supremacist in Idaho at the Ruby Ridge standoff) wife was killed by a remotely piloted bomb disposal robot at their front door. I think that was the first time that something like that had happened, even though in this case the robot was under human control.

This is more of a safety on the job issue:
http://query.nytimes.com/gst/fullpage.html?res=9E0CE2DE1738F931A15750C0A963948260

There are also many projects under development for near-autonomous machines built expressly for war. Things like the Predator and the US Arsenal stealth warship were conceived as totally automatic weapon systems at the outset.

India has shown serious interest in developing "Warbots" over the last couple of years.

Technical advancement is only as destructive as the people who use it, and that goes for any kind of robot. An industrial robot at this stage does not really know if it's welding a car body or drilling holes in someone's head. There's no evil intent, just programming.
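That point can be made concrete with a trivial sketch. An industrial arm controller just replays a motion program; nothing in the control loop knows or cares what the tool at the end of the arm is doing. Everything below is an invented example, not any real controller's API:

```python
# Sketch: a controller blindly replaying waypoints. The waypoints are
# arbitrary (x, y, z) positions; a real controller would command servos.

motion_program = [
    (0.0, 0.5, 0.2),
    (0.3, 0.5, 0.2),
    (0.3, 0.5, 0.0),   # lower the tool... welder? drill? the controller can't tell
]

def run_program(waypoints):
    """Execute every waypoint in order, no questions asked."""
    trace = []
    for target in waypoints:
        trace.append(target)   # stand-in for moving the arm to `target`
    return trace

executed = run_program(motion_program)
print(len(executed))  # 3 - every waypoint executed
```

Whatever "intent" exists lives entirely in whoever wrote the motion program, which is exactly the point.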

I find it highly dubious that even a highly advanced (and let's assume self-aware) robot would ever suddenly go on a killing spree, waving its arms and saying "kill all humans". People can barely get computers to behave properly without substantial programming, so I doubt anyone would be truly dumb enough to build an ED-209 or a Terminator and make it totally autonomous.

The Three Laws of Robotics were a brilliant plot device in all of Asimov's novels, but I have a hard time buying them ever being needed in the real world. He was definitely writing for a different time back then. Some of the applications of the laws seem almost magical - so wide open to interpretation, and of such complexity, that robopsychologists are needed.

Aside from the military applications of robotics (which I am NOT in favor of, as in the long run it would make warfare about as consequential as playing an Xbox), robots ultimately will be made to SERVE people by their very design. That's the idea at its core. Only my cats fear the Roomba we have working on our rug. I sleep easy at night knowing that it is not going to suddenly freak out and shoot sawblades at me. I for one am more than glad to not have to vacuum floors anymore. That's robot work.
 
My point is that humans and entities of human manufacture are likely to exhibit potentially violent behaviour patterns - similar to other humans.
 
Originally posted by Baron Saarthuran von Gushiddan:
Randy Weaver's (the white supremacist in Idaho at the Ruby Ridge standoff) wife was killed by a remotely piloted bomb disposal robot at their front door. I think that was the first time that something like that had happened, even though in this case the robot was under human control.
Errrm... no. She was killed by a sniper wielding a conventional rifle.

http://www.crimelibrary.com/gangsters_outlaws/cops_others/randy_weaver/15.html
 