
I Robot

TKalbfus

SOC-14 1K
Many D&D ideas were originally influenced by J.R.R. Tolkien's Lord of the Rings books. Traveller was in many similar ways conceived out of the Foundation series by Asimov. The Foundation universe had no significant role for intelligent robots, and likewise for the OTU. My idea is simply this: what if there was another Traveller universe where robots and artificially intelligent computers played a more important role? I'm not talking about TL17; what I'm talking about is lowering the tech level required for developing AIs to late TL8 or early TL9. Everything else is basically the same as the Traveller rules state. What effect would this have on a Traveller campaign?

I think this would be a good setting for people to use robot player characters. Humans and other biologicals would be reduced in importance, but to balance things out the robot characters are compelled to obey Asimov's laws of robotics. In other words, robot characters cannot solve problems by killing humans; they have to resort to other means to defeat a human adversary. If there was a human villain, for instance, the robot character could try to foil that villain's plot without killing or harming him. A built-in fail-safe would automatically prevent the robot from moving his limbs to kill the villain no matter how much he wanted to; otherwise the task would be easy, as robots are stronger, faster, and smarter than humans. Ability scores can reflect this: for instance, how about rolling 3d12 for abilities in T20 instead of 3d6?
 
Intelligence isn't the same as Wisdom.

The robot might know many things, but not necessarily how to apply them.

Also, the Zeroth Law makes things very difficult; it gets even more deeply involved with the whole "must not do harm or, through inaction, allow harm" business.
 
Just laws 1, 2, and 3 will do for an RPG setting. I wouldn't want to tie the robot characters' hands too much. Also, it should be possible to have robot villains. The robot might not be able to kill a human directly, but he might be able to manipulate humans to do his bidding.
I'm not entirely sure how superior a robot's ability scores should be to a human's. The options I'm considering are: roll 4d8 (keeping the best 3), roll 4d10, or roll 4d12. I don't really want robots to be like comic-book superheroes, but I do want their average ability scores to be above humans'. I'd welcome any opinions on that subject. As for random ability scores: yes, I know they're manufactured, and usually for a purpose, but the robot itself has no control over what its purpose and its ability scores are.

Robots are made of real materials that are tougher than human flesh, but not Superman tough. They may have built-in armor, but weapons will damage them. They are stronger, but they can't lift a starship over their heads or throw a ground car 100 meters, or perform other such ridiculous feats that only comic-book superheroes are capable of. They are faster than humans, but they can't break the sound barrier by running or outrun a bullet; they are simply fast enough to be inhumanly fast. The robot player characters will be humanoid; no beeping and whistling trash cans on wheels. The robots can also speak to humans without translators. Robots tend to be smart and they learn things quickly. They can be charismatic; evil robots might use their Charisma scores to manipulate humans into doing the things they can't do themselves because of the three laws of robotics (for example, killing other humans who get in the way). Just some ideas.
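For anyone weighing those options, here is a quick throwaway sketch (plain Python, not any published Traveller rule) comparing the averages each method produces against the ordinary 3d6 human baseline. I've assumed "keep the best 3" applies to all three of the 4-die options:

```python
import random

def roll(dice, sides, keep=None):
    """Roll `dice` dice of `sides` sides, summing the `keep` highest (all of them if keep is None)."""
    rolls = sorted(random.randint(1, sides) for _ in range(dice))
    return sum(rolls if keep is None else rolls[-keep:])

def average(dice, sides, keep=None, trials=100_000):
    return sum(roll(dice, sides, keep) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("3d6  (baseline human):", round(average(3, 6), 1))      # about 10.5
    print("4d8  keep best 3:     ", round(average(4, 8, 3), 1))
    print("4d10 keep best 3:     ", round(average(4, 10, 3), 1))
    print("4d12 keep best 3:     ", round(average(4, 12, 3), 1))
    print("3d12 straight:        ", round(average(3, 12), 1))     # about 19.5
```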
 
A quick review [0]:

The Four Laws of Robotics [1] are:

0) A robot may not injure humanity or, through inaction, allow humanity to come to harm.

1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm, except where doing so would conflict with the Zeroth Law.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the Zeroth or First Laws.

3) A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second Laws.
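The laws form a strict precedence hierarchy: each one yields to the ones above it. Purely as an illustration of that ordering (not anything out of Asimov or a Traveller book), here is a minimal sketch; the boolean flags stand in for judgement calls that a referee, or the robot's own world-model, would actually have to make:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags; in play these are referee judgement calls, not literal booleans.
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_human_order: bool = False
    endangers_self: bool = False
    required_by_higher_law: bool = False   # e.g. disobeying an order in order to satisfy the First Law

def permitted(a: Action) -> bool:
    if a.harms_humanity:                                          # Zeroth Law
        return False
    if a.harms_human:                                             # First Law
        return False
    if a.disobeys_human_order and not a.required_by_higher_law:   # Second Law
        return False
    if a.endangers_self and not a.required_by_higher_law:         # Third Law
        return False
    return True

# Refusing an order to injure a bystander is permitted, because the First Law outranks the Second:
print(permitted(Action(disobeys_human_order=True, required_by_higher_law=True)))   # True
```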

A robot manipulating meatbags to do evil violates the 0th and 1st Laws, which basically state "Thou shalt do no harm to a human." If the robot becomes aware that its actions (manipulations of human behavior) cause harm to one or more humans, then its own BIOS, which is hardwired into the positronic pathways within its own brain, will override the action or shut down the robot.

This is sort of like a human having a conscience that he or she cannot ignore, because it constantly overrides his or her will [2].

IMTU: Imperial robots, by law, must have a built-in "watchdog" function which will automatically render the robot inactive if any of the laws are violated. However, certain military special ops bots are allowed to have a slightly modified form of the 2nd law, to wit:

2) (IMTU:) A robot must obey the orders given it by human beings except where such orders would conflict with the Zeroth or First Laws, unless such orders are received from a human with the authority to order the robot to do harm to another human.

Such robots must have a distinctive military appearance.
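A sketch of how that IMTU variant might be folded into the obedience check. The "harm authority" flag on the order is my own stand-in for whatever credential the military issuer presents, and I've assumed the exception does not extend to the Zeroth Law:

```python
# Sketch of the IMTU-modified Second Law for military special-ops bots.
# `issuer_has_harm_authority` is an assumed attribute, not canon.
def must_obey(order: str, *, violates_zeroth: bool = False,
              violates_first: bool = False,
              issuer_has_harm_authority: bool = False) -> bool:
    if violates_zeroth:
        return False                    # harming humanity is never permitted
    if violates_first and not issuer_has_harm_authority:
        return False                    # civilian bots refuse orders to harm a human
    return True

print(must_obey("fire on target", violates_first=True))                                   # False: civilian bot
print(must_obey("fire on target", violates_first=True, issuer_has_harm_authority=True))   # True: special-ops bot
```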

A further discussion:

Unlike the 1st through 3rd Laws, the Zeroth Law is not a fundamental part of positronic robotic engineering, is not part of all positronic robots, and, in fact, requires a very sophisticated robot to even accept it.

Asimov claimed that the Three Laws were originated by John W. Campbell in a conversation they had on December 23, 1940. Campbell in turn maintained that he picked them out of Asimov's stories and discussions, and that his role was merely to state them explicitly [2].

The Three Laws did not appear in Asimov's first two robot stories, "Robbie" and "Reason" [4], but the First Law was stated in Asimov's third robot story "Liar!", which also featured the first appearance of robopsychologist Susan Calvin [3].

Yet there was a hint of the Three Laws in "Robbie", in which Robbie's owner states that "He can't help being faithful, loving, and kind. He's a machine - made so."

Notes:

[0] Information borrowed liberally from the Isaac Asimov FAQ.

[1] From the Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in "I, Robot". In "Robots and Empire" (ch. 63), the Zeroth Law is extrapolated hierarchically from the other three.

[2] Which suggests subtle implications in the never-ending ecclesiastical debate on "Are humans free-willed creations or pre-destined robots?"

[3] The first story to explicitly state the Three Laws was "Runaround", which appeared in the March 1942 issue of Astounding Science Fiction.

[4] When "Robbie" and "Reason" were included in "I, Robot", they were updated to mention the existence of the first law and first two laws, respectively.
 
A robot manipulating meatbags to do evil violates the 0th and 1st Laws, which basically state "Thou shalt do no harm to a human." If the robot becomes aware that its actions (manipulations of human behavior) cause harm to one or more humans, then its own BIOS, which is hardwired into the positronic pathways within its own brain, will override the action or shut down the robot.
An evil robot might take refuge in the uncertainty principle. Let's say that a robot knows that Human A is jealous of Human B. The robot knows that if it tells Human A what Human B is doing with Human C, it might just push him over the edge; he might do something drastic like kill Human B, but only might, since humans are so unpredictable. The robot, from past experience with Human A, thinks there is a 60% chance that he would kill Human B if he learned the truth, but there is also a 40% chance that he would do nothing. Anyway, the robot is only telling the truth, so it figures it is doing the right thing. This robot has a good imagination, but is not certain what effect revealing certain information to Human A will have, so its three-laws programming doesn't come into effect. You see, the robot really doesn't like Human B, but can't harm him directly because of its programming.

This is how a robot villain would operate in a robot campaign. The hero robot would have to stop Human A from killing Human B without harming Human A. Naturally the evil robot arranges not to be around at the time he figures Human A would perform his dark deed; he'll find some excuse to be elsewhere so that he is not compelled to stop him. This is what makes playing in an I, Robot universe so interesting. I really think Far Future Enterprises should give a robot-heavy campaign setting some consideration. Such robots might even exist in the OTU along the edges: some heretofore undiscovered branch of humanity might be found along the edge of known space, but for some reason they choose to remain in their own region rather than spreading outwards.
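One way to read that dodge mechanically is that the robot's restraint only kicks in when its predicted probability of harm crosses some certainty threshold. A rough sketch of that house-rule interpretation; the 75% threshold is purely my own assumption (by strict Asimov logic it should arguably sit near zero):

```python
# Rough sketch: the Asimov check only fires when predicted harm exceeds an
# assumed certainty threshold, leaving room for "merely telling the truth" schemes.
HARM_CERTAINTY_THRESHOLD = 0.75   # assumed house rule, not canon

def asimov_blocks(predicted_harm_probability: float) -> bool:
    return predicted_harm_probability >= HARM_CERTAINTY_THRESHOLD

print(asimov_blocks(0.60))   # False: the villain bot convinces itself it is "only telling the truth"
print(asimov_blocks(0.95))   # True: a near-certain harm is blocked outright
```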
Robots would replace people in most things in a robot-heavy campaign; humans would do hardly anything at all, because robots can do it all much better. When the robots encounter the new humans, they might realize that their presence is bad for humanity, and so choose not to mingle with them, remaining where they are and not entering the Imperium. They can't abandon their current charges because of the other three laws (they can't allow a human to come to harm), so they apply the Zeroth Law to the Imperial humans instead. Imperial humans who venture into robot space can buy a robot there, but various export laws will prevent them from removing that robot from robot space; the robots have been manipulating humans to keep those laws on the books for a very long time. Robots pretty much control things here, but they need humans to be their masters, because their programming states that they must serve them. Yet they see the local human race in decline due to atrophy: human deaths exceed human births, because when any human can have a sexy robot slave, who wants to marry and have children with a real human? The robots realize they are a danger to the human race, but as long as there are real humans out there with whom they have no contact, they feel they are not violating the Zeroth Law. Immigrants are welcome, of course; they just can't leave with their robots.
 
Originally posted by Tom Kalbfus:
This is how a robot villain would operate in a robot campaign. The hero robot would have to stop Human A from killing Human B without harming Human A. Naturally the evil robot arranges not to be around at the time he figures Human A would perform his dark deed; he'll find some excuse to be elsewhere so that he is not compelled to stop him.
If your "Evil" Robot is operating under the Three Laws, he couldn't even do that.

If there was the potential for harm to a human, the robot would be there to see that it didn't happen.

Remember the laws... "Cannot harm a human by action or through inaction." Meaning he can't just ignore the threats; he must ACT to prevent even the CHANCE of a threat.
 
"Traveller was in many similar ways conceived out of the Foundation Series by Asimov."


Mr. Kalbfus,

Yes and no. There are certain aspects of Traveller to be found in the Foundation Trilogy, but other series 'fit' the feel of Traveller better.

If you really want to read a series that influenced Traveller, pick up any of E.C. Tubb's 'Dumarest of Terra' series. It's all in there: the low berths, the types of ships, even the types of drugs listed in the LBBs.

Check out H. Beam Piper too.


Sincerely,
Bill

P.S. Nice ideas on robots. I don't agree with most of them, but they're intriguing all the same.
 
IMTU, only robots expected to have a great deal of human contact need "the Rules"; these are your typical household and business robots (e.g., nannies, maids, translators, drivers, etc.).

Most industrial robots don't need this programming and military robots definitely won't.

In a nutshell, "the Rules" exist as a selling point for the average consumer, ensuring their robot doesn't go berserk and wipe out the family or the executive board.
 
Mr. Kalbfus,

A robot would not even think of harming a human. Nor would it think of ways to circumvent its own Asimov circuits [1], or those of another robot. These circuits are hard-wired into the synaptic pathways of its positronic brain. Any attempt to modify or remove (lobotomize) the Asimov circuits will cause an immediate cascade overload [2], which will short out all of the other pathways, effectively leaving the robot permanently brain-dead in just a few seconds.

Not to mention splattering the immediate vicinity with droplets of molten metal-doped semiconductor material.

Let's assume that my wife has a violent temper, and that she has been known to physically attack people with the nearest object whenever she gets into one of her rages [3].

If I order my robot to remove its arm and hand it to my wife when she starts cursing at over 80 decibels, and my wife immediately uses that arm to bludgeon another human to death, and the robot witnesses the murder, it will realize two things:

1) A murder occurred too quickly for the robot to act.
2) It participated in an act of murder by providing the murder weapon.

The robot will either go catatonic from the conflicting potentials within its brain, or the Asimov circuits will take over and shut the robot down [4].

IMTU: The Asimov circuits are a mandatory part of every Imperial robot's positronic brain. Other TL15 empires might have different laws, but Imperial bot brains must be built with the Asimov circuits intact and fully operational.

I hope that this clears things up.

Notes:

[1] Asimov circuits emulate Asimov's 1st, 2nd, and 3rd laws with hardware, and are the first circuits to be laid down in a robotic brain, thus forming the "Core Values" for all Imperial robots.

[2] Cascade overload: One circuit blowing up causes two or more circuits to do the same. Very messy.

[3] My wife is a sweet and peaceful woman. Really.

[4] I'd prefer the former. A robot in shut-down can be re-booted in safe mode and interrogated for evidence. A catatonic bot stays that way forever.
 
It's all in the programming, guys.


A human (meatbag) programmer would be the one to determine the personality traits and mannerisms of the robot.

one programmer: requires the laws to be followed

next programmer, for a manufacturer: requires simple operations (drill hole here, weld there, and so forth)

programmer for Robotic Maids: requires the droid to behave like a British nanny or French maid ;) .

programmer for Warbots R Us: Requires the droid to be like a droid in Terminator (Movie)


You could have PCs as robots.
However, I would set their scores as follows:
Repair droid:
ST: 10
Dx: 10
Con: 10 or nil (D&D golems)
WS: 10 depending on TL
EDU: 18
INT: 18
CHA: 7 to 10
Soc: 1 to 10

War droid:
ST: 18
Dx: 18
Con: 10 or nil (see D&D golems)
WS: 10 depending on TL
EDU: 10 to 12
INT: 10 to 12
CHA: 7 to 10
Soc: 1 to 10

Host/chef/maid droid:
ST: 10
Dx: 10
Con: 10 or nil (D&D golems)
WS: 10 depending on TL
EDU: 10
INT: 10
CHA: 10-12 (18 for French Maid w/skin and features indistinguishable from human)
Soc: 10-12

Well, there's my 2 cents for you.
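If it helps anyone, here are the same templates expressed as data in Python. The numbers come straight from the lists above, except where a range was given, in which case I've assumed a representative value within that range:

```python
# Droid stat templates from the lists above; single values where a range was
# given (e.g. "7 to 10") are my own assumed picks within that range.
DROID_TEMPLATES = {
    "repair": {"STR": 10, "DEX": 10, "CON": 10, "WIS": 10, "EDU": 18, "INT": 18, "CHA": 8, "SOC": 5},
    "war":    {"STR": 18, "DEX": 18, "CON": 10, "WIS": 10, "EDU": 11, "INT": 11, "CHA": 8, "SOC": 5},
    "host":   {"STR": 10, "DEX": 10, "CON": 10, "WIS": 10, "EDU": 10, "INT": 10, "CHA": 11, "SOC": 11},
}

def stat_line(kind: str) -> str:
    """Render one template as a compact stat line."""
    return kind + ": " + " ".join(f"{k} {v}" for k, v in DROID_TEMPLATES[kind].items())

for kind in DROID_TEMPLATES:
    print(stat_line(kind))
```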
 
Oldcolt,

Interesting, but only if you want to produce a set of stats for a game that lacks depth and continuity.

Classic Traveller Book 8 provides stat generation and sufficient rationale for bot building.

Referees might want to ask themselves...

1) Why were bots developed in the first place? A good answer would be "To relieve humankind of the burden of hazardous or unpleasant tasks." In which case, humaniform bots would not be necessary. If the answer is "To enslave and dominate humankind.", then why stop at building just a few Terminator bots? Why not a whole bot empire, bent on eliminating intelligent carbon-based life? (Ever read the "Berserker" series?)

2) How does society react to bots? Hostile reactions could result in violence against all bots (Luddism). If the general public is aware that killbots exist, then the citizens will rise up and eliminate all bots in any form. This parallels the extreme hostility that some Imperials have toward psionics - "Kill them before they kill us all!"

3) What high-tech society would waste time building killbots, when the same society would also have the technology to detect and destroy them? Conversely, using a killbot in a low-tech society would be a waste of a very valuable piece of technology, when a few well-placed bullets from a human-wielded, silenced weapon could do the same job.

4) What is the life expectancy of a humaniform killbot in a society that is capable of detecting and destroying even the most human-like TL15 killbot? Not very long, I'm afraid. The sophistication required for a killbot to pass as human long enough to seek out its target, stalk it, kill it, and return to its owner without any collateral damage would be beyond any sub-TL15 technology. Wasting millions of credits and man-hours to build something that sophisticated, just to blow it up, somehow makes little sense (unless you are planning on running for US Congress).

Spoiler: What follows is opinion only.

If the goal of your game is to simulate bot-to-human or bot-to-bot warfare (or both), then may I suggest Battletech or Yu-Gi-Oh?

If instead your goal is to simulate a society in which bots are accepted as a class of specialized menial laborers and armed sentries, then this is fully within the canon of Traveller.

If your goal is somewhere in between, while still being able to simulate a balanced society, then good luck - and tell us how you do it.
 
Originally posted by Keklas Rekobah:
... If the goal of your game is to simulate bot-to-human or bot-to-bot warfare (or both), then may I suggest Battletech or Yu-Gi-Oh?...
Ahem... sorry for the interruption, but Battletech has nothing to do with robots. They are simply humanoid (or four-legged) tanks with humans aboard. Proceed at will. Thank you for your cooperation.
 
Oldcolt said,
It's all in the programming, guys.

A human (meatbag) programmer would be the one to determine the personality traits and mannerisms of the robot.

one programmer: requires the laws to be followed

next programmer, for a manufacturer: requires simple operations (drill hole here, weld there, and so forth)

programmer for Robotic Maids: requires the droid to behave like a British nanny or French maid .

programmer for Warbots R Us: Requires the droid to be like a droid in Terminator (Movie)
Only the most primitive TL7 robots would require programming to do everything. An advanced robot would have the robot laws programmed into it, but it wouldn't be programmed to be a maid; it is an AI, and it learns to do these things on its own. It would take too long to program all the mechanical motions and collision-avoidance algorithms needed for it to be a maid. What happens is you turn on one robot, teach it to be a maid, and then copy its software and database, with all that it has learned, to thousands of other robots. These robots are then shipped to thousands of households to begin their work; as they are turned on, they begin to learn things while performing their duties, and they differentiate from one another.
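A sketch of that train-once, copy-many workflow; the class and method names here are my own inventions for illustration, not anything from the Traveller robot rules:

```python
import copy

# Hypothetical sketch of the "teach one, clone many" deployment described above.
class MaidBot:
    def __init__(self):
        self.learned_behaviors = {}   # skills picked up through training and experience

    def teach(self, task, procedure):
        self.learned_behaviors[task] = procedure

# Train a single prototype by experience...
prototype = MaidBot()
prototype.teach("dusting", "work top to bottom, avoid the vase")
prototype.teach("laundry", "sort colors, cold wash for delicates")

# ...then snapshot its software and database into a production run.
fleet = [copy.deepcopy(prototype) for _ in range(1000)]

# Once deployed, each unit keeps learning on its own and slowly diverges from the others.
fleet[0].teach("household quirk", "the cat sleeps in the laundry basket")
```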

Keklas said,
1) Why were bots developed in the first place? A good answer would be "To relieve humankind of the burden of hazardous or unpleasant tasks." In which case, humaniform bots would not be necessary. If the answer is "To enslave and dominate humankind.", then why stop at building just a few Terminator bots? Why not a whole bot empire, bent on eliminating intelligent carbon-based life? (Ever read the "Berserker" series?)
Humaniform robots are better at performing certain services than non-humaniform robots. For instance, a "clunky trash can on wheels" wouldn't make a very good spokesperson or waitress. Humaniform robots are better at making humans comfortable around them while performing their services. Some people value close personal contact or the warm feel of flesh; it doesn't matter if the flesh is real so long as it feels real.

2) How does society react to bots? Hostile reactions could result in violence against all bots (Luddism). If the general public is aware that killbots exist, then the citizens will rise up and eliminate all bots in any form. This parallels the extreme hostility that some Imperials have toward psionics - "Kill them before they kill us all!"
But a lot of people in society also want something for nothing; they want to get paid while hardly working at all, or at least that is the goal of the labor unions. They all want 9-to-5 jobs with a one-hour break for lunch and plenty of time to spend with their families, and when they go home and want to go out to eat at a restaurant, there will be somebody to serve them after hours, and that somebody won't be human or even alive. When people go shopping after hours, they want the stores to be open, but they don't want to work at those stores during those hours. Who fills this niche? The robots, of course. After a time people will realize that they won't need their 9-to-5 jobs either. Government will tax the owners of the robots and all the profits they bring in, and distribute much of this windfall to the non-working voting public. These generous welfare benefits will make the politicians who back them very popular. If someone wants to be a Luddite, he'll have to work, and laziness trumps Luddism almost every time.
 
Oops! My bad - the Battletech comment was way off-center.

I think that I finally see the point in some of the arguments here, specifically:

1) If a robot's sole purpose is along the lines of "Locate coal seam, extract coal, repeat" then it will have little need for Asimov safeguards - a few warning lights would suffice.

2) Once a robot's purpose includes making judgement calls that could affect or endanger human life - the Johnny Cabs in Total Recall, for example - then Asimov safeguards need to be built in.

Asimov's laws are not necessary for modern-day robots (Maytag dishwashers, Sunbeam coffee makers, etc.). These bots are both extremely primitive and specialized.

So maybe the need for Asimov's laws rises with the intelligence, mobility, adaptability, and creativity of the robot brain. Not to mention the responsibility that the bot may have for the safety and health of humans (robocops, robodocs, etc.).

Then the question becomes: "What minimum criteria are required for the addition of Asimov's laws into a robot's core programming?"
 
These are what you would call social robots; they have personalities, and part of their job is to interact with people. These can be player character robots if you prefer. Their intelligence must be at least 3 to require Asimov laws. The other kinds of robots aren't characters; at best they are mechanical creatures, and if they encounter a situation they can't deal with, they shut down.
 
IMHO (1): If the bot's function requires the care, feeding, and handling of humans, then it should have Asimov circuits, indicating the need for application of Asimov's laws for bots that would duplicate the human functions of:

- Drivers and pilots.
- Doctors, nurses and medical technicians.
- Cooks, waitbots, and other food service.
- Domestic help (nanny, maid, butler, etc.).
- Security (guard, sentry, screener, etc.).
- more...?

IMHO (2): The function is determined first, then the level of proficiency, then the INT needed for that level of proficiency.

IMHO (3): If a single-function bot can be duplicated by a modern-day household appliance (Maytag dishwasher), then it does not need Asimov circuits. If, however, the bot must operate a wide variety of household appliances in the presence of humans (especially humans who cannot tell the bot to stop what it's doing), then it would definitely need the default guidance of Asimov's laws.

(Example: I don't want my domestic bot putting the baby in the dishwasher at bath time, nor should I have to rely on human supervision of my domestic bot's daily routine.)
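Pulling those criteria together (the human-contact function list above plus the INT 3 threshold suggested a few posts back), here is a minimal sketch of the decision a referee might apply; everything around the raw lists and the cutoff is my own assumption:

```python
# Functions involving the care, feeding, and handling of humans (from the list above).
HUMAN_CONTACT_FUNCTIONS = {
    "driver", "pilot", "doctor", "nurse", "medical technician",
    "cook", "waitbot", "nanny", "maid", "butler", "guard", "sentry", "screener",
}

INT_THRESHOLD = 3   # cutoff for "social" robots suggested earlier in the thread

def needs_asimov_circuits(function: str, intelligence: int) -> bool:
    return function in HUMAN_CONTACT_FUNCTIONS or intelligence >= INT_THRESHOLD

print(needs_asimov_circuits("coal extractor", 1))   # False: a few warning lights will do
print(needs_asimov_circuits("nanny", 2))            # True: handles humans directly
```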

IMHO (4): Bots that are military or para-military in nature would need to distinguish between combatants (who could be harmed, if necessary) and non-combatants (who must never be harmed). I propose a hierarchy for military action (i.e., shooting) against identified combatants:

1) Shoot to warn.
2) Shoot to wound.
3) Shoot to incapacitate or immobilize.
4) Shoot to kill.

Entry points to this hierarchy would depend on the DEFCON level; e.g., at Defcon Two, a combot would skip the warning and seek to wound every identified combatant. At Defcon Four, every identified combatant is shot with the intent to kill.
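A sketch of that escalation ladder as a simple lookup; the Defcon Two and Defcon Four entry points are taken from the examples just given, while the values for One and Three are my own interpolation:

```python
# Escalation ladder from the hierarchy above; entry points for Defcon One and
# Three are assumed by interpolation between the Two and Four examples.
ESCALATION = [
    "shoot to warn",
    "shoot to wound",
    "shoot to incapacitate or immobilize",
    "shoot to kill",
]
DEFCON_ENTRY = {1: 0, 2: 1, 3: 2, 4: 3}   # index into ESCALATION

def allowed_responses(defcon: int):
    return ESCALATION[DEFCON_ENTRY[defcon]:]

print(allowed_responses(2))   # skip the warning, start at "shoot to wound"
```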

Now the question becomes: "If a human is not clearly identifiable as a combatant, then which should be the combot's default assumption, combatant (to be shot) or non-combatant (maybe shoot later)?"
 
Concur; any android/robot PC must start with the Rules hardwired into it. It's only fair to curtail any violent tendencies when, in most cases, the robot PC is superior to non-robot PCs. As it would not be able to reprogram itself, another PC or NPC would have to do it. This could be made an extremely difficult and expensive task requiring serious equipment. Or, removing the Rules requires a complete reformatting, resulting in a new PC with no memories or accumulated skills. That's pretty fair.

Now what if the Rules are not in the "conscious" programming of the robot (the part that can be downloaded to other media) but reside in the motor/servo controllers of the chassis?
 
Now what if the Rules are not in the "conscious" programming of the robot (the part that can be downloaded to other media) but reside in the motor/servo controllers of the chassis?
That's just what I was getting at. It may be possible for a robot to have ill will toward a human, yet his Asimov circuits won't let him act on it directly... This would frustrate the robot to no end; however, he might find a way not to be in technical violation of the Asimov laws by setting up a situation that may go badly for the human he so dislikes. Think in terms of Greek tragedy, plays like Oedipus Rex. The robot studies the character he dislikes very closely; he knows the character may react a certain way given a certain situation. The reaction he expects is of the sort that would have negative consequences for that character. The robot then waits for a certain set of circumstances to occur, and when it does, he calls that character's attention to them.
 
The robot studies the character he dislikes very closely; he knows the character may react a certain way given a certain situation. The reaction he expects is of the sort that would have negative consequences for that character. The robot then waits for a certain set of circumstances to occur, and when it does, he calls that character's attention to them.
But isn't this akin to discovering that a particular human being is a diabetic with a sweet-tooth, and then tricking him into a situation in which sugary foods are plentiful, but his insulin supply is inaccessible?

As I see it, the best way for an Asimovian robot to harm human beings is to have a peculiar idea of how best to take care of them and keep them safe. The Second Law (obedience) only takes over when the First Law (non-harmfulness/protection) is satisfied. If, for instance, a robot somehow decided that all human beings were basically foolish children, to be restricted and disciplined (spare the rod and spoil the child!) for their own good, things could get very scary (esp. if the "Zeroth Law" gets involved).
 
The Second Law (obedience) only takes over when the First Law (non-harmfulness/protection) is satisfied. If, for instance, a robot somehow decided that all human beings were basically foolish children, to be restricted and disciplined (spare the rod and spoil the child!) for their own good, things could get very scary...
(Hee-hee!)

Does the term "Mad Robot" come to mind? Not in my Traveller universe! Never! Not at all! I'd never think of it. It would be cruel to do this to my players. Really! ;)

Although I am reminded of a story entitled "The Iron Chancellor" that addressed this very issue...


(Hee-hee!)
 