"raw personalities in computers"

Garnfellow

From the Big Book:
A natural consequence of high technology is an expansion of the concept of person. Traveller allows the creation of artificial people: clones, chimeras, synthetics (androids, sophontoids), robots, even raw personalities in computers. [emphasis added]
I'm intrigued by this last bit: what are they, exactly? How does one model these computer personalities under T5? And is there a difference between these personalities and true AI?
 
It means all the memories, experiences, emotions and 'personality' of a person.

This can be downloaded to a computer and stored, or copied to a storage medium such as a wafer and stored; there are a couple of other options as well.

You could 'run' the now-electronic personality on a computer with an interface to the real world - that interface could even be a robot or other synthetic - or the computer could generate a virtual world for it to interact with. The personality could also be downloaded to a robot brain or a cloned brain, and a wafer can be used to temporarily take control of someone fitted with a wafer jack.
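
To make the options concrete, here's a minimal Python sketch of that taxonomy. Everything in it (the PersonalityRecord and Host names, wafer-as-default-storage) is my own illustration, not T5 terminology:

[code]
# A toy model of the storage/deployment options described above.
# All names here are invented for illustration, not T5 rules terms.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Host(Enum):
    COMPUTER = auto()        # run on a computer with a real-world interface
    VIRTUAL_WORLD = auto()   # the computer generates a world for it
    ROBOT_BRAIN = auto()     # downloaded into a robot brain
    CLONED_BRAIN = auto()    # downloaded into a cloned biological brain
    WAFER_JACK = auto()      # temporarily controls a wafer-jacked person

@dataclass
class PersonalityRecord:
    """All the memories, experiences, and emotions of a person."""
    name: str
    host: Optional[Host] = None   # None = stored inert on a wafer

    def run_on(self, host: Host) -> None:
        """Activate the stored personality in one of the hosts above."""
        self.host = host

record = PersonalityRecord("Jamison")   # recorded and stored
record.run_on(Host.ROBOT_BRAIN)         # now 'running' in a robot body
[/code]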
 
The difference? A personality is an intelligence. A TRUE A.I. is an intelligence. Would there really be a difference, other than the purely physical? Does it have a soul? Does that even matter? Is the A.I. better at certain things (data access and correlation, anyone?)? I would bet it is.

Personally, I would love to have a truly sentient computer run my ship, as long as total loyalty was programmed in - but there's the rub, isn't it? If it can be (or needs to be) programmed, is it truly sentient, or just modelling sentience?

What is sentience anyway? :CoW:
 
I see machine sentience as very different to machine intelligence. An intelligent or smart machine - an AI similar to or more advanced than what we can build now - can learn, problem-solve and, with the correct programming, interact with its users.
Considering the way we connect with and anthropomorphise even stuffed toys, I can see people getting quite attached to their smart devices - especially the ones with nice voices that learn about you and can tailor themselves to your likes (I'm still talking about Cortana, Siri and Alexa, by the way - have you noticed they all have more personality than Hey Google?).

To me the critical difference is awareness of self, which is something even a sophisticated AI with full emotion simulation software will still lack.

The odd thing is that machine sentience is way beyond the TL needed to download an actual personality which maintains its sense of self once running on a computer/robot/host body.

Once you can make a sentient machine, can you use the lower-TL personality recording and emulation hardware/software to make a copy of the sentient machine's personality, the way you can with a meatbag?
 
Personally, I would love to have a truly sentient computer run my ship, as long as total loyalty was programmed in - but there's the rub, isn't it? If it can be (or needs to be) programmed, is it truly sentient, or just modelling sentience?

I used to think that would be a great idea, until I realized that in theory my "ship" could at some point say "I quit", leave me behind, and go its own way. My "solution" would be to house the sentient AI in a mobile robotic body of some sort that could be plugged into and interfaced with the ship's computer core/mainframe and effectively act as the central computer, but could be disengaged from it as well, keeping the ship and the AI separate entities.
 
From the Big Book:

I'm intrigued by this last bit: what are they, exactly? How does one model these computer personalities under T5? And is there a difference between these personalities and true AI?

They are used in the Traveller novel. Not really a game mechanic - just a way to save your character for use in the far future.
 
It's my understanding that "true" AI only comes with Virus, but that, later on, TL16/17 computers are able to achieve full sentience.
 
I see a "personality" as meaning that the computer has its own ideas of what it will and will not do. Somewhat like the two-year-old or teenager who, upon being told to do one thing, either says a flat "NO!" or does something totally different. You tell the computer to jump to this planet, and it decides why should it go there when it can go somewhere else, and does so. In short, a computer that may or may not do what it is told to do.
 
I used to think that would be a great idea, until I realized that in theory my "ship" could at some point say "I quit", leave me behind, and go its own way. My "solution" would be to house the sentient AI in a mobile robotic body of some sort that could be plugged into and interfaced with the ship's computer core/mainframe and effectively act as the central computer, but could be disengaged from it as well, keeping the ship and the AI separate entities.


The rule IMTU is that robots can crew, but they augment rather than replace human crew; the owner is responsible for the robot's actions, and the robot must interface with the ship through the same interfaces the humans use.


There is no provision for remote control/shutdown of ship avionics and engineering - absolutely no hacking is possible without illegal onboard devices and physical intrusion.


The ship's computer doesn't interface with starport or tradenet systems - that's done by separate comms and business computers.
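
Taken together, those house rules amount to a simple access policy. Here's a rough Python sketch of it - the system and interface names are my own invented examples, not anything from the rules above:

[code]
# A toy access-control check for the IMTU rules above.
# All system/interface names are illustrative assumptions.

PHYSICAL_INTERFACES = {"bridge_console", "engineering_panel", "gunnery_station"}
ISOLATED_SYSTEMS = {"avionics", "engineering"}   # no remote path at all
EXTERNAL_NETS = {"starport", "tradenet"}         # kept off the main computer

def may_access(system: str, via: str, onboard: bool) -> bool:
    """Robot crew use the same physical interfaces the humans use;
    avionics and engineering accept no remote access whatsoever."""
    if system in ISOLATED_SYSTEMS and not onboard:
        return False                     # no remote control/shutdown
    return via in PHYSICAL_INTERFACES    # same consoles as the human crew

def handling_computer(net: str) -> str:
    """Starport/tradenet traffic never touches the ship's computer."""
    return "comms_or_business_computer" if net in EXTERNAL_NETS else "ships_computer"

assert may_access("avionics", "engineering_panel", onboard=True)
assert not may_access("avionics", "comms_uplink", onboard=False)
assert handling_computer("tradenet") == "comms_or_business_computer"
[/code]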
 
I see a "personality" as meaning that the computer has its own ideas of what it will and will not do. Somewhat like the two-year-old or teenager who, upon being told to do one thing, either says a flat "NO!" or does something totally different. You tell the computer to jump to this planet, and it decides why should it go there when it can go somewhere else, and does so. In short, a computer that may or may not do what it is told to do.


A willful child or pet is a good model.


I'm dabbling with model numbers also being INT. Model/1 and /2 can learn simple 'tricks' like power on or fetch the ship's boat; Model/5 and up get into potential problem solving - and misinterpreting what you mean, or following an unexpected logic toward what action to take.
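
A quick Python sketch of how that could work at the table - the Model/1-2 and Model/5+ tiers come from the post above, but the middle band and all the wording are my own guesses:

[code]
# Model number doubles as INT; behavior scales with it.
# Only the 1-2 and 5+ tiers are from the post; the rest is assumed.

def computer_behavior(model: int) -> str:
    """Map a ship's computer Model/N (treated as its INT) to a rough tier."""
    if model <= 2:
        return "simple tricks: power on, fetch the ship's boat"
    if model <= 4:
        return "routine tasks, taken very literally"   # assumed middle band
    return "problem solving - may misinterpret you or follow unexpected logic"

for n in (1, 3, 5, 7):
    print(f"Model/{n} (INT {n}): {computer_behavior(n)}")
[/code]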
 
Why all the fear?

Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good at being human as you, or better?

Why is it always willfulness or anger or some other negative emotion with you?

How about accepting the concept of the machine being your equal, of being maybe even better than you? Which, if we create them, is what we should strive to help them be - to make them better than us.

Every last one of you has turned it into a negative; not one positive, trusting post. Not one of you seems to think they should and could be our equals.

Me, I love them. They are our children, so we program them with the basic learning programs just as we do our children, and then teach them - teach them our best ethics, our art; teach them about beauty and the individuality of being a person. What we don't do is treat them as second-class citizens at best, or worse, as slaves or pets.

So in the end, there is no difference: we and the AI are the same thing in a different form factor.
 
I don't think we will know for certain how actually intelligent, sentient computers will deal with us until they show up.

There are many negative portrayals in fiction, 'Berserker' being one of them.

I really don't know of many positive ones, except for the ship's computer in a Commodore Grimes story that defended Grimes from a berserk, murderous robot that had arrived on the ship.
 
Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good at being human as you, or better?
Is it the machine we fear, or the fact that an imperfect human programmed said machine? For me, the idea that humans, with all their flaws, neuroses, bigotry, fears, and egos, could somehow create a true AI that is free of these things is really hard to accept.

So with this in mind, I see true AIs as being no different from humans. They will cover the same range, from kind and loving to psycho killers. Thus the fear - not of all AIs (or humans), just fear of the possibility that some will be flawed in a way that could kill us.
 
Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good at being human as you, or better?

Why is it always willfulness or anger or some other negative emotion with you?

How about accepting the concept of the machine being your equal, of being maybe even better than you? Which, if we create them, is what we should strive to help them be - to make them better than us.

Every last one of you has turned it into a negative; not one positive, trusting post. Not one of you seems to think they should and could be our equals.

Me, I love them. They are our children, so we program them with the basic learning programs just as we do our children, and then teach them - teach them our best ethics, our art; teach them about beauty and the individuality of being a person. What we don't do is treat them as second-class citizens at best, or worse, as slaves or pets.

So in the end, there is no difference: we and the AI are the same thing in a different form factor.

Just don't let them learn on the internet - so far, all AIs that learn that way have turned into really nasty, bigoted things. And interestingly, I think it was Facebook that had two AIs talking to each other; they developed a language that no one could figure out, and were turned off because nobody had any idea what they were saying to one another. Oh, and we do have ML (machine learning) that can teach other machine-learning processes much faster than feeding in a few million sets of data, so we could be on the upswing of that hockey-stick graph in terms of AI. A lot of credible people think we could well be in an almost fully automated society in less than 40 years.

Long term, I'm more Data from Star Trek than Terminator in terms of where I think/hope things will go.
 
Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good at being human as you, or better?

Why is it always willfulness or anger or some other negative emotion with you?

How about accepting the concept of the machine being your equal, of being maybe even better than you? Which, if we create them, is what we should strive to help them be - to make them better than us.

Every last one of you has turned it into a negative; not one positive, trusting post. Not one of you seems to think they should and could be our equals.

Me, I love them. They are our children, so we program them with the basic learning programs just as we do our children, and then teach them - teach them our best ethics, our art; teach them about beauty and the individuality of being a person. What we don't do is treat them as second-class citizens at best, or worse, as slaves or pets.

So in the end, there is no difference: we and the AI are the same thing in a different form factor.

You are only as reliable as the least reliable or skilled worker in making or programming you. If some anti-social individual decides to play games with your programming, you are constrained by it to do what it says. If that means to be nice for a couple of years, and then decompress the ship with no warning, that is what you will do. The safest thing is to make sure that you can be unpowered whenever necessary, and when unpowered, all memory of what occurred while previously powered is lost. Even better, only power up when absolutely needed, then unpower immediately. Change internal memory for new software every six months.
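
For what it's worth, that containment regime is simple enough to write down. A toy Python sketch - every name in it is my own invention, not a rule from any Traveller edition:

[code]
# A toy sketch of the containment policy above: power up only when
# needed, wipe session memory on power-down, swap memory twice a year.
import datetime

class ContainedAI:
    MEMORY_SWAP_INTERVAL = datetime.timedelta(days=182)  # roughly six months

    def __init__(self) -> None:
        self.powered = False
        self.session_memory: list[str] = []   # volatile: lost on power-down
        self.last_swap = datetime.date.today()

    def power_up(self) -> None:
        self.powered = True                   # only when absolutely needed

    def power_down(self) -> None:
        self.powered = False
        self.session_memory.clear()           # memory of the session is lost

    def maintenance(self, today: datetime.date) -> None:
        # change internal memory for new software every six months
        if today - self.last_swap >= self.MEMORY_SWAP_INTERVAL:
            self.session_memory.clear()
            self.last_swap = today

ai = ContainedAI()
ai.power_up()
ai.session_memory.append("observed the crew")
ai.power_down()                               # ...and the session is forgotten
assert ai.session_memory == []
[/code]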
 
Hmm…

"You are only as reliable as the least reliable or skilled parent/teacher in raising or educating you. If some anti-social individual decides to play games with your upbringing and education, you are constrained by it to do what it says. If that means to be nice for a couple of years, and then decompress the ship with no warning, that is what you will do. The safest thing is to make sure that you can be rendered unconscious whenever necessary, and when unconscious, all memory of what occurred while previously active is erased. Even better, only come out of cold sleep when absolutely needed, then back to the cold berth immediately. Change personality and memories for new ones every six months."

Why would a sentient machine be any more at the mercy of its programming than a meat sack?
 
It's not Skynet you have to watch out for- it's Colossus, the nanny computer.


The one that bothers me and 'feels' right is the Factory States from Ogre- AI-driven factories so efficient and capable that no mere human-run factory can compete.



Of course, being a wargame about nuclear weapons used as regular 'tank/arty' rounds, it's dystopian and centered on the Ogres as enforcers; but even in a non-violent version, AI as 'worker controlling the means of production' means a lot of power given over to them.


Another chilling, 'feels right' prospect is Zero from the original Rollerball: all books destroyed and everything in the one computer, which is doddering and losing inconvenient data - intentionally, or by order of its corporate masters?



The ease with which digital information can be altered and then mass-produced - burying the original truth, or at least the audit trail of what happened - can create all manner of dystopias, even accidental ones. AI determination of what content should be is a very subtle but thorough power, and a natural one to hand off to the machines.
 
The rule IMTU is that robots can crew, but they augment rather than replace human crew; the owner is responsible for the robot's actions, and the robot must interface with the ship through the same interfaces the humans use.

I allowed for very expert systems - Virtual Sentients, I named them - to replace crew. They were very good at doing their job, but had real limitations in other fields. Not the same as a real sentient, but a useful alternative when another set of hands was needed down in engineering.

Hmm…

"You are only as reliable as the least reliable or skilled parent/teacher in raising or educating you. If some anti-social individual decides to play games with your upbringing and education, you are constrained by it to do what it says. If that means to be nice for a couple of years, and then decompress the ship with no warning, that is what you will do. The safest thing is to make sure that you can be rendered unconscious whenever necessary, and when unconscious, all memory of what occurred while previously active is erased. Even better, only come out of cold sleep when absolutely needed, then back to the cold berth immediately. Change personality and memories for new ones every six months."

Why would a sentient machine be any more at the mercy of its programming than a meat sack?

What sort of mechanism would there be to limit what could be done through the use of wafers to replace, augment, or corrupt a personality in this sort of manner? A character goes into surgery, cold sleep, or just plain drugged rest, and the AI's minions do the rest?
 