A quick review [0]:
The Four Laws of Robotics [1] are:
0) A robot may not injure humanity or, through inaction, allow humanity to come to harm.
1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm, except where doing so would conflict with the Zeroth Law.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the Zeroth or First Laws.
3) A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second Laws.
A robot manipulating meatbags to do evil violates the 0th and 1st laws, which basically state "Thou shalt do no harm to a human." If the robot becomes aware that its actions (manipulations of human behavior) cause harm to one or more humans, then
its own BIOS, which is hardwired into the positronic pathways of its own brain, will override the action or shut down the robot.
This is sort of like a human having a conscience that he or she cannot ignore, because it constantly overrides his or her will [2].
IMTU: Imperial robots, by law, must have a built-in "watchdog" function which will automatically render the robot inactive if any of the laws are violated. However, certain military special ops bots are allowed to have a slightly modified form of the 2nd law, to wit:
2) (IMTU) A robot must obey the orders given it by human beings except where such orders would conflict with the Zeroth or First Laws,
unless such orders are received from a human with the authority to order the robot to do harm to another human.
Such robots must have a distinctive military appearance.
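The precedence among the laws above, plus the IMTU watchdog and the special-ops exception, can be sketched as a simple decision function. This is purely an illustrative sketch; the function and parameter names are my own assumptions, not anything from canon or the rules.

```python
# Hypothetical sketch of the IMTU law-precedence check.
# All names here are illustrative assumptions.

def may_obey_order(order_harms_humanity: bool,
                   order_harms_human: bool,
                   issuer_has_military_authority: bool = False) -> bool:
    """Decide whether an order may be carried out under the IMTU-modified laws."""
    if order_harms_humanity:
        # Zeroth Law: absolute, no override exists.
        return False
    if order_harms_human:
        # First Law, subject to the IMTU modified Second Law:
        # only an authorized issuer may order harm to a human.
        return issuer_has_military_authority
    # Second Law: otherwise, obey.
    return True

def watchdog(action_harmed_humanity: bool,
             action_harmed_human: bool,
             issuer_has_military_authority: bool = False) -> str:
    """The mandated watchdog: render the robot inactive on a law violation."""
    if action_harmed_humanity or (action_harmed_human
                                  and not issuer_has_military_authority):
        return "shutdown"
    return "active"
```

Note that even under the modified Second Law, the Zeroth Law remains absolute: no authority can order harm to humanity as a whole.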
A further discussion:
Unlike the 1st through 3rd Laws, the Zeroth Law is not a fundamental part of positronic robotic engineering, is not part of all positronic robots, and, in fact, requires a very sophisticated robot to even accept it.
Asimov claimed that the Three Laws were originated by John W. Campbell in a conversation they had on December 23, 1940. Campbell in turn maintained that he picked them out of Asimov's stories and discussions, and that his role was merely to state them explicitly.
The Three Laws did not appear in Asimov's first two robot stories, "Robbie" and "Reason" [4], but the First Law was stated in Asimov's third robot story, "Liar!", which also featured the first appearance of robopsychologist Susan Calvin [3].
Yet there was a hint of the Three Laws in "Robbie", in which Robbie's owner states that "He can't help being faithful, loving, and kind. He's a machine - made so."
Notes:
[0] Information borrowed liberally from the Isaac Asimov FAQ.
[1] From the Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in "I, Robot". In "Robots and Empire" (ch. 63), the Zeroth Law is extrapolated hierarchically from the other three.
[2] Which suggests subtle implications in the never-ending ecclesiastical debate: "Are humans free-willed creations or pre-destined robots?"
[3] The first story to explicitly state the Three Laws was "Runaround", which appeared in the March 1942 issue of Astounding Science Fiction.
[4] When "Robbie" and "Reason" were included in "I, Robot", they were updated to mention the existence of the First Law and first two laws, respectively.