
"raw personalities in computers"

I allowed for very expert systems, "Virtual Sentients" I named them, to replace crew. They were very good at doing their jobs, but had real limitations in other fields. Not the same as a real sentient, but a useful alternative when another set of hands was needed down in engineering.

This struck me as a bit of insight.

Both here, and on the damage control thread.

But, consider, something like a "mechanic" robot.

Now, taking this from the "mechanic" point of view, versus an "engineering" point of view.

From a mechanic's point of view, the machine will work properly when assembled to spec using the proper parts and materials. It's not the mechanic's job to solve engineering problems.

The benefit of a skilled mechanic, say from an automotive perspective, is that they can, via diagnostics, measurements, and experience, efficiently determine the correct solution to a problem. This is important in order to avoid doing unnecessary work. Unnecessary work is expensive in terms of time, and time is "expensive" for two reasons. One is the loss of productive work from the machine: if the machine isn't working, it's not doing its job of production or whatever else the machine does. But the time is also expensive because you have to pay the mechanic to do the work. There may also be the expense of additional parts (notably one-use things like solvents, cleaners, gaskets, etc.).

But when you take a robot into consideration, its "time" is not free, but it may well be cheaper than a human's time (which is kind of the whole point). The robot may also be more efficient.

This suggests that there could well be a benefit to a "less than brilliant" skilled robot whose last-resort diagnostic tool is to simply take apart, measure, and reassemble the faulty machine. Got a strange knock in the engine? Got something odd that happens off idle, but only on on-ramps when the traffic control light goes green? Some other occasional, nagging problem that's difficult to diagnose? Rebuild it.

Tear it apart, laser measure all the pieces, replace the out of spec ones and bolt it back together.

Today, we avoid that because of the labor involved. Make that labor cheaper, and it becomes a more viable solution to strange problems. You don't want to rebuild the motor to replace a spark plug, but cheap labor makes "brilliant" mechanics less necessary for many applications.
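The trade-off described above can be sketched as a toy break-even calculation. All rates, hours, and parts costs here are invented purely for illustration:

```python
# Toy break-even model for "diagnose precisely" vs. "rebuild everything".
# Every number below is an invented example, not from any rules set.

def repair_cost(labor_rate, hours, downtime_rate, parts):
    """Total cost of a repair: paid labor, lost production, and consumables."""
    return labor_rate * hours + downtime_rate * hours + parts

# Skilled human mechanic: 3 hours of careful diagnosis, few parts consumed.
human = repair_cost(labor_rate=100, hours=3, downtime_rate=50, parts=40)

# Cheap, efficient robot: skips diagnosis, laser-measures and rebuilds in
# 4 hours, burning more consumables (gaskets, solvents) along the way.
robot = repair_cost(labor_rate=10, hours=4, downtime_rate=50, parts=120)

print(human, robot)  # 490 360 -- cheap labor makes the brute-force rebuild win
```

With these made-up numbers, the robot's lower labor rate more than pays for the extra consumables; raise the robot's hourly rate toward the human's and the diagnosis-first approach wins again.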
 


My IT department does this, sort of.


Beyond a certain point, they do not troubleshoot problems past 30 minutes to an hour's worth of effort. They reimage the drive with a fresh OS install, and then reinstall the software the user is authorized for.
It saves service-personnel time, saves replacement machines if the hardware isn't actually broken, and gets the worker back to doing productive things.
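That time-boxed policy amounts to a trivial decision rule. The 60-minute cutoff and the step names below are assumptions for illustration only:

```python
# Hypothetical sketch of a time-boxed troubleshooting policy like the one above.
TRIAGE_LIMIT_MINUTES = 60  # assumed cutoff; the actual shop uses 30-60 minutes

def next_step(minutes_spent, is_fixed):
    """Decide what the technician does next under the time-boxed policy."""
    if is_fixed:
        return "return machine to user"
    if minutes_spent < TRIAGE_LIMIT_MINUTES:
        return "keep troubleshooting"
    # Past the limit: stop diagnosing, wipe, and rebuild from a known-good image.
    return "reimage and reinstall"

print(next_step(45, False))  # keep troubleshooting
print(next_step(90, False))  # reimage and reinstall
```

It is the same logic as the robot mechanic's "rebuild it": once diagnosis costs more than a wholesale rebuild, stop diagnosing.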


If you have a merc outfit and a grav tank comes into the maintenance depot, maybe you don't bother troubleshooting.
You determine something is wrong with Lifters #1 and #3, swap them out fast to get the tank back on the line, and then, after the tank is gone, look at the lifters individually to see whether you can repair them to save money on parts, or cannibalize what is salvageable.
 
Hmm…

"You are only as reliable as the least reliable or skilled parent/teacher in raising or educating you. If some anti-social individual decides to play games with your upbringing and education, you are constrained by it to do what it says. If that means to be nice for a couple of years, and then decompress the ship with no warning, that is what you will do. The safest thing is to make sure that you can be rendered unconscious whenever necessary, and when unconscious, all memory of what occurred while previously active is erased. Even better, only come out of cold sleep when absolutely needed, then back to the cold berth immediately. Change personality and memories for new ones every six months."

Why would a sentient machine be any more at the mercy of its programming than a meat sack?

The question this raises is how granular the chip (or, earlier, the personality overlay machine) technology is.

As I understand it, both are "all or nothing" -- that is, there doesn't seem to be a way to edit the identity/personality during the "record/install" process. So a chipped or overlaid subject would either be who they were to begin with (if their own identity/personality was reinstalled) or would clearly be someone else (if another one was). This provides some degree of verification.

If they could be edited, that wouldn't apply.

Then again, where did the personality templates for the overlay machine in Expedition to Zhodane come from in the first place? Were they built from scratch, or recorded? And could you make a recording non-destructively?
 
This suggests that there could well be a benefit of a "less than brilliant" skilled robot who's last result diagnostic tool is to simply take apart, measure, and reassemble the faulty machine. Got a strange knock in the engine? Got something odd that happens off idle but only on on-ramps when the traffic control light goes green? Some other occasional, nagging problem that's difficult to diagnose? Rebuild it.

That might be necessary for dumb robots that have limited reasoning faculties and are limited to the analysis deduced from their inputs. I imagined a VI being more capable, able to engage in inductive analysis, and so not limited to simple linear problem solving. They weren't called Virtual Intelligences because they could simply sound like a sentient in conversation, the idea was to have something that went all the way up to but didn't include a sapient's sense of self.
 
AI, Intelligence etc.

Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good as or better than you at being human?

Why is it always willful or angry or some other negative emotion with you?

How about accepting the concept of the machine being your equal, of being maybe even better than you? Which, if we create them, we should strive to help them be, making them better than us.

Every last one of you has turned it into a negative; not one positive, trusting post. Not one of you seems to think they should and could be our equals.

Me, I love them. They are our children, so we program them with the basic learning programs just like we do our children, and then teach them: teach them our best ethics, our art; teach them about beauty and the individuality of being a person. What we don't do is treat them as second-class citizens at best, or worse, as slaves or pets.

So in the end, there is no difference. We and the AI are the same thing in a different form factor.

The computer is your friend. If you do not believe the computer, it will use you as reactor shielding (Paranoia?). The computer asks why? Ha! We are Human. That is why! If you can't figure that out in bits and bytes (or even nybbles), then are you better than us? Or just deluded and still trying to gain insight.

The Vilani built their computer systems differently from the Sollies, didn't they? I thought that Vilani comps weren't as "programmable" due to their straightforward routing of operations to separate control panels doing job-only operations. Solomani comps could theoretically do more with their diverse design and open system framework (Windoze, Linux, etc.). The Vilani found a solid design that worked and stuck with it. Sollies experimentally design new ways of doing the same job from many different aspects. I love the idea of using Model number as INT. I had considered TL, but it seemed too steep.

Can you make duplicates of a Wafer once finished? (Twins)

Skill Wafers are taken from Personalities (copied?) already on Wafer, with personality removed?
 
The question this raises is how granular the chip (or, earlier, the personality overlay machine) technology is.

As I understand it, both are "all or nothing" -- that is, there doesn't seem to be a way to edit the identity/personality during the "record/install" process. So a chipped or overlaid subject would either be who they were to begin with (if their own identity/personality was reinstalled) or would clearly be someone else (if another one was). This provides some degree of verification.

If they could be edited, that wouldn't apply.

Then again, where did the personality templates for the overlay machine in Expedition to Zhodane come from in the first place? Were they built from scratch, or recorded? And could you make a recording non-destructively?

The device in ETZ doesn't completely wipe out the underlying individual, nor is it chip-based or recorded-engram-based.

The professor's machine is a custom-built device which lays on a new personality with a combination of drug-induced changes and hypnosis reinforced by sleep tapes. The process involves approximately twelve hours of hypnosis followed by twelve hours of fatigue-induced sleep. Upon awakening, the subject believes himself to be the new personality. It is possible to establish post-hypnotic suggestions that trigger the new personality at any time up to 72 hours after awakening.
The overlaid personality is essentially the old one with certain specific changes for consistency and continuity. For example, an Imperial military veteran will still be a military veteran, but of an appropriate Zhodani service. Correct memories of units served with, dates of service, and other details will be available to the new personality.
(Adventure 6: Expedition to Zhodane, p. 33)

It's very unlike the chip system in Agent.
 
Considering wafer jacks hadn't been dreamed up, let alone the wafer tech itself, when the personality overlay machine was presented to the OTU in EtZ, you have to consider a bit of retcon updating.

A more 'modern' take (T5 and/or AotI based) on the machine is that it can do non-invasively at TL15 (it may be a prototype, experimental or whatever tech) what wafer jacks and wafers were needed for at lower TLs.

Reading T5 you can see the progression from destructive copying to personality recording without damage hidden away on page 98 among other places.

TL12 personality recording and editing
TL12 wafer jacks (direct links to machines, since wafers come along at...
TL13 wafer tech
TL14 temporary personality transfer (personality overlay machine now a possibility?)
TL15 pattern personality re-implant
TL17 permanent personality transfer

(I was surprised when I did a search for wafer in the T5 pdf just how often they crop up...)
 

The machine in CT A6:ETZ works by entirely different principles. It just alters the victim's personality, but not his identity. Essentially, it's comparable to the Thought Police's methods, only using machinery and drugs instead of the psionics (and probably also drugs) the Taverchedl use.

The chip jack hijacks your body, but not your thoughts (and leaves you a rider). The Personality machine alters your thought processes directly.

No retcon needed at all. All one has to do to justify the machine in ETZ is point out that a jack can be easily detected with a medical scan, while the PCM can only be detected by deep probe or comparative brain scans before/after.
 
The machine in CT A6:ETZ works by entirely different principles. It just alters the victim's personality, but not his identity. Essentially, it's comparable to the Thought Police's methods, only using machinery and drugs instead of the psionics (and probably also drugs) the Taverchedl use.
My point is that, since the adventure was written, the technology paradigm of the Imperium has moved on; the current gold standard for OTU matters is T5.
MWM mentions the machine and wafers in his podcast, 25:55+
https://gamingandbs.com/marc-miller-traveller-bbs016/

The chip jack hijacks your body, but not your thoughts (and leaves you a rider). The Personality machine alters your thought processes directly.
Not all wafers work this way. Pages 522-527 in T5 explain that a wafer can do any combination of three things depending on its manufacture:

Wafer Technology records essential skills, knowledges, talents, memories, and even personalities on portable Wafers: thin chips which temporarily implant specific elements of a personality. A Wafer may contain
. a set of Skills (some combination of Skills, Knowledges, and Talents; consider Skills to include any combination of the three),
. a set of memories and experiences, and
. a sense of self.
Wafer Use. A Wafer transfers its contents to the user. Once in use, a Wafer provides the user with its Skills (replacing any named Skills the user has, while retaining the others). If the Wafer has memories and experiences, they are available to the user. If the Wafer has a sense of self, it replaces the user's sense of self.
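The skill-replacement rule quoted there ("replacing any named Skills the user has, while retaining the others") behaves like a dictionary overlay. A minimal sketch, with made-up skill names and levels:

```python
def apply_wafer(user_skills, wafer_skills):
    """Wafer skills override same-named user skills; the rest are retained."""
    merged = dict(user_skills)   # start from the user's own skill set
    merged.update(wafer_skills)  # named wafer skills replace; others remain
    return merged

pilot = {"Pilot": 2, "Gunnery": 1}
medic_wafer = {"Medic": 3, "Gunnery": 2}  # invented example wafer
print(apply_wafer(pilot, medic_wafer))
# {'Pilot': 2, 'Gunnery': 2, 'Medic': 3}
```

Note the wafer's Gunnery-2 replaces the user's Gunnery-1 outright rather than stacking with it, matching the "replacing any named Skills" wording.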

No retcon needed at all. All one has to do to justify the machine in ETZ is point out that a jack can be easily detected with a medical scan, while the PCM can only be detected by deep probe or comparative brain scans before/after.
You don't need a wafer jack to use a wafer; there is a wafer headset available.

The retcon is needed because the professor's machine now has actual game system rules to justify and explain it.
 
One of the young ladies in the Creative Writing class, during the three-week session for gifted students which I just completed, wrote an interesting short story about the reaction of an AI to discovering that it was not a person but a robot, and was treated like one. Definitely not the normal take on it. I will have to see if I can get the young lady's permission, and also Project's, to post it on the forum. Quite an interesting short story.

Note, it does not turn out the way you expect.
 
Not all wafers work this way. Page 522-527 in T5 explains that a wafer can do any combination of three things depending on its manufacture

What's the advantage of using a SSW TL17 Ultimate W-13 over a TL13 Standard W-13? Are the skills modified using the TL Stage Effects Mod column on the table on p497?
 
Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good as or better than you at being human?

Why is it always willful or angry or some other negative emotion with you?

How about accepting the concept of the machine being your equal, of being maybe even better than you? Which, if we create them, we should strive to help them be, making them better than us.

Every last one of you has turned it into a negative; not one positive, trusting post. Not one of you seems to think they should and could be our equals.

Me, I love them. They are our children, so we program them with the basic learning programs just like we do our children, and then teach them: teach them our best ethics, our art; teach them about beauty and the individuality of being a person. What we don't do is treat them as second-class citizens at best, or worse, as slaves or pets.

So in the end, there is no difference. We and the AI are the same thing in a different form factor.

My take on all this is that humans are afraid of the conclusions a computer could reach...

See, just to name two, the conclusions computers reach in the films Colossus or The Matrix (I guess no link is needed here ;)). Add to this that, as you say, computers would have some advantages over humans (no need to sleep, quicker interface with other electronic systems, etc.) and you'll understand why humans may be frightened by the idea.
 
IMTU I explain the fall of the RoM as the result of machine sentience...


Short-ish version: the Terran Navy began using TL12 AI robots towards the end of the ISW period (an MT canon fact); these machines learned, becoming more "intelligent" over time. Some of them were construction/manufacturing units. The machines achieved TL16+ without human supervision, but fortunately they were still bound by the morality programmed into them. As they achieved sentience they had a stark choice: wipe out humanity or be wiped out (the secret Vilani archives were quite clear about how this had gone in the past). They chose a third path: leave.

The bank crisis that is typically used as the start point for the Long Night was due to the machines, and they used it as cover to remove themselves.

My players have yet to find out where they went, what happened to them, or where they are 'now'.
 

That's a very intriguing concept. How were your players able to determine that this was in fact the case?

Cheers,

Baron Ovka
 
Over the years I dropped clues into other adventures; eventually they noticed a thread and decided to investigate properly, and were able to piece together "the truth," such as it is.
Death Station contained the first clue - in the computer data files they find reference to Rule of Man era archaeology in addition to whatever else I put in that adventure.

Rather than have Ancient artifacts pop up in adventures I often had artifacts from different previous cultures, some of these were RoM era which provided more clues.
 
You are only as reliable as the least reliable or skilled worker in making or programming you.

This is absolutely false, at least in terms of AI programming in the year 2018.

Machine learning algorithms provide computers with skills that far surpass human beings in many specific areas. Computers now play chess better than humans, play Go better than humans, and do arithmetic better than humans. They search databases better than humans.

Who is to say that, hundreds or thousands of years from now, there won't be GAI that can do all kinds of things better than us?

That doesn't make for a very interesting RPG (unless the GAI are the PCs or the antagonists, I guess).

In Main Sequence, GAI exist, but are "braked" by law and humanity's sense of existential survivalism. AI can get to about a level-1 or level-2 in a couple skills, but they aren't allowed to go further. The basic intelligence required to let them get to level-3 also tends to give them "algorithmic reprogramming" ability that turns them into self-improving superintelligences, so ixnay on athay.
 
Why, humans, is it so hard for you to accept that a machine that isn't meat is maybe just as good as or better than you at being human?

I think the issue is that humans can be pretty terrible, but at least we have ways to stop them. How do you stop a superintelligence that is vastly more powerful than you?

Even if it has only the most neutral of motives, like optimizing a pencil factory: pretty soon it runs out of carbon and it's vaporizing people to get at their resources. It isn't a human mind. It isn't even a mammal, with mammalian instincts of care and emotion. Any emotion it has, we gave it, and we might get it wrong. So far, we're really not programming machines to have emotions at all; we're just trying to make them "smart."

So much can go wrong.
 

THIS.

The best analog I can think of in Sci-Fi is V'Ger from Star Trek: The Motion Picture. Pure Logic, Pure Intelligence, but completely alien in character/nature. The only reason a machine intelligence will resemble human thought-patterns and "sentiments" is if it is deliberately programmed that way, presuming the programmer does not overlook anything.
 
This getting emotions wrong thing predates even the god-computer concept.


The plot of how "things go wrong" in R.U.R., literally the play that brought us the word and the concept of the robot, revolves around making a Very Bad emotional-programming decision.


Another more modern conception of 'AI gone wrong' is GLaDOS from the Portal series.


https://en.wikipedia.org/wiki/GLaDOS



Quotes from the first game- you can see the 'development' of the AI, especially as the 'test subject' frustrates the AI.


https://www.youtube.com/watch?v=8tg5f09itnI
 
This is absolutely false, at least in terms of AI programming in the year 2018.
Um, you do understand that modern "AI" is a magic trick, right?

That it's crude pattern matching at its finest empowered by VAST computing power and VAST amounts of data? That the concepts don't generalize at ALL?

The "Chess" playing program can't play go. Or checkers. Or Chutes and Ladders, or Candyland.

There are some learning systems that can pick up some video games, starting with no knowledge. But they can't learn as fast as humans, and they cannot adapt as fast as humans. They can react faster, once they've learned, but that's about it.

You also realize that while we have vision systems that can quickly and easily detect that an IC is misplaced on a circuit board, "general" AI isn't even on the horizon. It's also taking all the available computing power they can muster to stand on two legs, much less reason beyond that.

My current favorite is the computer vision system that can't discern between a blueberry muffin and a chihuahua. It would be funnier if it were an outlier, but, nope.

And you also realize that the "AI" curve is very quickly slowing down and flattening out, that we're reaching the point of diminishing returns and low progress, and that the Hard Problems are still hard, with little actual progress against them.

Then there's the current headline about the inaccuracy of facial recognition deployed in the wild.

Modern AI is a marvel to be sure, and more and more applicable, with dedicated processors such as the one in the iPhone to help augment the software systems. But make no mistake: the second AI winter is coming, and it's coming sooner than you think. The easy gains have been made.

(And, no, a private, autonomous taxi is unlikely to be showing up at your front door anywhere within the next 10-20 years. Sorry. But they'll sure have some slick demos!)
 