
Embracing the Virus?

In reading the boards here on CotI, I've noticed there's a trend for people to simply despise the Virus in every way, shape, and form. The entire MegaTraveller board seems to revolve around a "Virusless universe," and even the supplements for TNE stress that Virus AIs shouldn't be common. Indeed, the Virus seems to fulfill the role usually filled by dragons in fantasy RPGs, and GMs are cautioned against using them like orcs.

For a long time I went along with that view of the Virus (that it's better to have very little of it), but it always bothered me for some reason, and I couldn't say why until recently.

I was thinking of a direction for a new Traveller game and I've decided to set it in the TNE universe, but the thought occurred to me: "Instead of rejecting the Virus, what would happen if I embraced it?"

My thinking was that machine intelligences (so-called "artificial" intelligences) and their implications are one of the bigger threads in modern science fiction. The exploration of AI as a legitimate life-form is something that even TNE has mostly avoided really dealing with. But what if the opposite track was taken? The universe would be populated with the children of the Virus, blooming into sentience, an entire new order of life-forms trying to find their place in the universe, not in the comfortable role of the "sentient starship" or the "Frankenstein robot" but something else entirely. Entire societies of machine lifeforms.

I'll drop in more ideas as they come to me, but I was curious if anyone else has ever taken this track instead of the "usual" Traveller view on the Virus? Perhaps a game like that wouldn't be the Traveller that people were playing in the 1980s, but is that necessarily a bad thing?
 
I had a thought about this a while back. Ended up making a Virus robot that rounded up the Virus starships and removed their jump drives to keep them all in one system - effectively a quarantine. I never got around to figuring out how the robot got Virus, but once I did that, then I had an A.I. robot to deal with. Started thinking about what would happen to him. Then I moved on to other things. It just seemed like too much trouble, to me.

Much luck to you in your endeavor, tho.
 
This is an old tale, I am sure, but TNE’s greatest strength was the thing that turned me off of it: it was an entirely new universe. Personally, I think it would have made a better stand-alone game or alternate universe, like the Twilight/AD2100 universe.

Each encountered intelligence would have to be taken on a case by case basis with a simple rubric for the referee to block out the lineage and personality of the AI like the NPC motivations bit from the Space 1889 book (excellent but tough setting by GDW). How does the AI feel about power, resources (money?), humans, other AIs and so on. The problem is that TNE takes place so long after the fall that few would be left.

Humans would most likely view these things as abominations, threats to the very existence of humanity. Everyone would have had an ancestor that was killed by Virus and I doubt that many humans would step up to defend them.

To me TNE was a horror setting. The ultimate technological failure. Five hundred story towers crashing to the ground, fusion plants overloading, space stations with their doors suddenly open, arcologies full of people without water, food or transportation anywhere, agriculture grinding to a halt on an interstellar scale, the sum of knowledge on entire planets lost in the blink of an eye, riots, civil war, starvation, cannibalism, cholera . . .

With vampire fleets stalking the space between these smoldering ruins.

You can embrace the Virus, but it then becomes a character, a series of NPCs to be rolled up and played. I think in that process you might lose some of the spooky. If that is what you are going for, then I think it will work nicely.
 
Epicenter00,

I always looked at Virus through an epidemiology viewpoint. It was an 'epidemic disease' that tore through its host population, and it eventually suffered the same result as all epidemic diseases - both it and its host population evolved to lessen its effects.

From an evolutionary standpoint, a disease that kills its host is a failure. Killing the host means that the environment the disease needs is killed too. A disease that kills very rapidly is an even bigger failure.

Take syphilis for example. It was present in both the Old and New Worlds before 1492 but the strains in each hemisphere had 'learned' to live with their host populations. Neither killed rapidly. In the 1490s that all changed.

The two syphilis strains 'met' and then evolved into a super strain, a strain that neither host population could withstand. In 1497, syphilis drove a French army out of Italy when Italian armies couldn't. That strain of syphilis was killing people in days, but it eventually killed itself off because it was killing its hosts before they could spread it. Pretty soon syphilis evolved into the disease we're familiar with today, one that usually takes decades to kill its host.

IMTU, Virus worked much the same as that syphilis strain in 1497 Italy. It tore through Chartered Space but eventually killed itself because it killed its host. Now add to the equation that Virus can direct its own evolution when it becomes sentient. (Remember, Virus is situationally sentient. An 'infection' can move from a toaster where it is 'dumb' to a mainframe where it is 'smart' and vice versa.)

By 1200, the 'fast killer' Virus is all but gone. The 'strains' left have evolved to kill their hosts slowly or not at all. That doesn't mean a Viral infection won't have side effects, but it won't be a 'crash the ship' or 'blow up the power plant' side effect of the earlier strains.


Have fun,
Bill
 
Sigg,

Go right ahead. Everything I post, and I mean everything, is explicitly placed in the public domain. I generally post something to that effect every summer.

Save it, share it, fold it, spindle it, mutilate it, laugh at it, attribute me or don't attribute me, it all doesn't matter. When I post it, it is no longer mine.


Have fun,
Bill
 
My $.02 - I have written elsewhere that less is more with Virus, mainly because it is a rather unique entity. Too much of it becomes old hat rather than a novelty and if done without a sense of "alien-ness" it can just be another human in a funny suit.
 
Epicenter00,
I don't play TNE, but have thought some about AI orders of life. Sci-fi has its fair share of authors who have explored the consequences of AI life, but here are some questions to possibly ask yourself about how the Children of the Virus view themselves.

(1) Do they wish to procreate? Biological life forms have an unconscious urge to propagate the species, but why would an AI lifeform want to? An AI lifeform might never die, so why make copies of itself in other machines? Is it for power? Are these copies sentient? If so, does the propagating AI have a means to control its progeny? Are such means tantamount to slavery? Does the Virus view sophisticated non-sentient or proto-sentient computer systems as slaves to biologics?

(2) How self sufficient are the Children? I would imagine a sentient Virus's first priority would be to acquire the means to maintain, repair, and even augment the hardware in which it resides. So would Children Viruses be massive starships with factory capability, little industrial complexes on worlds, etc.?

(3) How does the AI view other forms of life? Is it really in competition with biological life forms? Distrust aside, might biological life forms not make good trading partners, providing parts and the like in a much more efficient fashion than the Virus could itself, or at least freeing the Virus for other tasks?

(4) What are the consequences of sentience? How much time does the Virus spend, or want to spend, just thinking about things? One day it was just computer code, now it is aware; that's got to raise some introspection. Does a sentient Virus seek meaning in its existence or the universe? Would a Virus conduct research?

(5) How do Viruses view each other? Is there a sense of common cause? Are they purely predatory? For example: I really like that hardware you've got, and the support structures; I think I'll overwrite you with myself. So do Viruses engage in war with each other? If so, do Viruses use biologicals in this war and provide them with anti-Virus software so their enemies don't spread?

(6) How stable is AI sentience? Can it be lost due to code corruption? Does the nature of sentience mean that the code evolves in a random way? Does jump travel have an effect on sentience? Would Viruses fear changes to their code, and even to how it links to other hardware, however small, as they might lose sentience through small changes?

Obviously these are just some questions. IMTU I have sentient AI spacecraft unintentionally created by a species long gone from the galaxy. Some of the AI feel what we would call emotions; others are, to us, cold and heartless. How they think about and view biologicals can be used to loosely group them (and all of these views have appeared in sci-fi). For example:

- "Destroyers" view sentient biologics as "bad life" and seek to remove it from the galaxy; some are more extreme and detest all organic life-forms, while some are not so bad and view only certain species as bad (for IMTU reasons).
- "Adopters" are AI who have taken a particular species/planet under their wing to guide and aid; some may even consider themselves gods.
- "Creators" seek to make intelligent biological life through various means.
- "Raveners" destroy all others, AI and biological alike, and are purely self-seeking.
- "Spawners" wish to duplicate themselves as much as possible.
- "Balancers," among other things, war with spawners.
- "Cooperators" view a sentient as a sentient regardless of hardware/wetware basis and are willing to peacefully trade with biologics.
- "Explorers," given their unique near-immortality, have decided to explore the secrets of the universe (be it external or internal); and so on.

An encounter IMTU with one of these AI ships has a very important dimension of finding out exactly what kind of AI it is. Even "raveners" may use subterfuge instead of outright attack.

Finally, IMTU jump travel can erode AI sentience (or at least the brand of these ships) so they are found taking the slow way between the stars.

Just my 2 Cr. worth..
 
Ptah,

Good questions all, and there are no certain answers. Everything depends on the Virus strain in question and the 'environment' it finds itself in.

"Grokking" even a portion of the Virus 'mindset' is hard if not impossible. Of all of Traveller's aliens, it is the most alien. It's 'discorporate'; it can move from host to host even more so than G:T's Valkarie(sic). Making matters worse, its level of sentience depends on its host; even its personality depends on its host.

Imagine, you can have Virus Adam on a 'vanilla' mainframe in which it presents a certain personality and has certain 'memories'. You then build up a relationship with It that depends on 'memories' you both share.

Now chase Virus Adam into a 'toaster'. In order to 'fit', it needs to 'lose' parts of itself. It's a bad analogy, but imagine dropping off bits of your 'mind' in order to fit into a new 'skull'. How much of your 'friend' Virus Adam is left? The potential is there to be sure, but what you think of as Virus Adam is most certainly gone.

Next, move Virus Adam out of the 'toaster' and into another system that is 'above' the sentience 'threshold'(1) and it is sapient again. But is it the same Virus Adam you knew before? I'd say no. If the old 'memories' were saved somehow and Virus Adam is given them, does It become the 'friend' you remember? Or do the 'memories' from the 'toaster' period affect Its personality? I'd say no again.

Another thing to ponder is Virus' sense of time. How slowly does the universe 'move' when your frame of reference is a mainframe's CPU clock rate? Imagine the herculean effort Virus must expend in holding even short conversations with us. It forms a statement, then 'broadcasts' it to us via sound, text, or whatever at what must seem like a glacially slow rate. It then has to wait for however long in its subjective time to receive our answer, which is again transmitted at a glacially slow rate.

Imagine holding a conversation on an extremely time-retarded internet. You type a single sentence into a message window, hit 'send', and then wait months for the message to be sent a byte at a time. After it's finally sent, you wait many more months for the reply to begin arriving. It appears in your message window one byte at a time, with weeks going by before single letters are formed. How do you think that conversation will be perceived on either side? Both parties will be convinced that the other is completely unfathomable.
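The analogy above is really just a ratio of subjective clock rates, so it can be put on a back of an envelope. A minimal sketch follows; the 1 GHz machine rate and the ~20 "perceptual moments" per second for a human are purely illustrative assumptions, not anything from Traveller canon.

```python
# Back-of-envelope time-dilation sketch. Both rates below are ASSUMPTIONS
# chosen only to illustrate the ratio, not figures from any sourcebook.

MACHINE_HZ = 1e9          # assumed subjective "moments" per second for a mainframe-hosted mind
HUMAN_MOMENTS_HZ = 20.0   # rough rate of distinct human perceptual moments per second

def subjective_stretch(real_seconds: float) -> float:
    """Return the human-equivalent duration the machine mind experiences
    while real_seconds of wall-clock time pass."""
    dilation = MACHINE_HZ / HUMAN_MOMENTS_HZ  # 5e7 with these assumptions
    return real_seconds * dilation

# A human takes five seconds to answer a question.
felt = subjective_stretch(5.0)
print(f"{felt:.3e} human-equivalent seconds")       # 2.500e+08
print(f"~{felt / (365.25 * 24 * 3600):.1f} years")  # ~7.9 years
```

With these made-up numbers, a five-second pause in conversation stretches to roughly eight subjective years for the machine. Swap in different rates and the exact figure changes, but the many-orders-of-magnitude gap (the point of the analogy) does not.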

Viral offspring are another wide open question. For the original strain, children are weapons. Think that over for a minute: your children are your weapons. Later strains - the ones that evolved(2) to 'fit' their 'host' somewhat better; i.e. not kill it immediately - will have as many viewpoints on children as there are Viral infections.

Some will create 'slaves' by deliberately infecting 'subpar' systems with copies of Itself and thwarting those copies' attempts to 'grow' towards a 'higher' level of sentience. Some will expend 'children' like bullets; there are plenty of lifeforms, both plant and animal, on Earth that follow the 'Mass Quantities Of Offspring, Most Will Die' reproduction strategy. As TNE suggests in several places, some will even engage in 'sexual' reproduction in order to tap into the evolutionary benefits that produces.

Aside from the extremely thorny sentience, communications, and reproduction issues, there are so many different ecological niches for Virus to seize and Virus can move between them so easily, that I don't think you can stress that idea of Virus being weird too strongly. Actually, I don't think you can make Virus weird enough.

Every question is going to boil down to: It Depends. You can make a strong case for any behavior you wish Virus to exhibit. Virus is protean in every sense of the word.


Have fun,
Bill

1 - We're never told what that 'sentience threshold' actually is. In fact, Virus seems to work at varying levels of sentience. I'm not talking about different IQs, but actual processing capacity. Virus Adam in a mainframe will operate at one level, Virus Bob in a lab computer will operate at a 'lesser' level, and Virus Charlie in a 'toaster' will operate at an even lower level, yet none of them are defective. We can't automatically scale this to our Real World experience; i.e. Einstein > Eneri Q. Pubic > Me > Down's Syndrome victim > Microcephalic > etc. It's more like judging sentience between species. We may be 'smarter' than chimps but that doesn't mean chimps are 'dumb'. It's really confusing; we just don't have the language to handle it.

2 - Think how fast evolution can proceed when your time frame of reference is a CPU's clock rate and when you can so easily 'geneer' both yourself and your offspring.
 
Even though I am the father of one of the "Virus-less" Hard Times variants mentioned here, I see myself as quite pro-AI. My objection is not to the Virus per se, or to non-organic lifeforms in any way; it is to the complete and wholesale destruction of the universe by that Virus. And with the collapse of the old control systems and the (partial) destruction brought by the Rebellion, there are bound to be certain TL15+ "toys" with near- or true-AI capabilities that broke free.

And did I mention that my variant takes place in the Solomani Rim Sector, and that Cymbeline (sp?) was NOT destroyed in that particular timeline?


The fact that my variant doesn't have system-after-system of totally ruined Virus-infested worlds doesn't mean that the Biochips, or any other AI, won't be used.

And I see nothing wrong with the TNE setting - I just wish to tell a different story, that's all.
 
Thanks to everyone for their feedback and views.

As for the Virus, I think as Mssr. Fetters points out, it will become somewhat like a "man in a funny suit" - but then again most of the "aliens" in Traveller are very much in that "man in a funny suit" mold. I figure at worst I'll be making the Virus just another man in a funny suit - a tragedy to be sure but par for the course. Hopefully it'll be more interesting.

Here's my current thoughts about the Virus AIs:

* There's actually a variety of Machine Intelligences, an entire family that could probably be divided up into taxonomic trees. From the original "kill them all" version, it's mutated quite a bit. The generations were very fast at first, but they've been slowing down as the Virus "builds" become more stable. More recent versions of the AIs have about as much in common with the early "war weapon" Virus as protozoa have with you and me.

* Known space is probably (thinly) populated by machine intelligences for the most part. Suitable vessels for AIs out in the "wilds" are scarce and becoming scarcer all the time. These Virus AIs are pretty much in the standard mold, so they aren't all that interesting. Most are regarded by the "civilized" Viruses (see below) as a mix of (noble) savages and evolutionary throwbacks, with the same fascination/revulsion that you or I might feel when we hear stories of human children being raised by animals.

* "Civilized" AIs primarily live in the area on the TNE map labelled "The Black Curtain." In my universe, it's significantly larger, however. Originally brought together out of instinct, the Virii at first fought each other for dominance, attacking and overwriting each other over and over again. Apparently the desire to replicate (i.e., reproduce) was pretty hardcoded in the Virus, so in that respect one can imagine the drives of the Virus aren't all that different from those of other lifeforms.

* The "AIs would think vastly faster than us" thing. I grew up reading Niven sci-fi where stable AI was impossible because "computers think faster than us, so they'd commit suicide within a few weeks: they'd start thinking about existential issues and conclude there is no meaning to anything." To partially get around this, I'm thinking that when AIs consider ideas and options they actually (try to) run very complex simulations of reality in their cores. Such simulations naturally take a long time to run, which slows down their response times. In addition, any AI that really desired to interact with reality would probably get used to slow responses. After all, if you think a person's thoughts are slow, what about trying to pattern-weld something, or waiting for titanium to melt (the monitored crucible never goes molten, perhaps)?

* My current model is that the Civilized Virus realms are divided into a pure machine zone (the areas closest around Capital), where non-Machine Intelligences have been wiped out, and the outer areas, many of which have biological sophont populations. Initially, the Black Curtain AIs attempted a vast genocidal pogrom, but gradually the idea was abandoned. There are still "hardliners" amongst the AIs who want to see "competition" or "outdated life forms" eliminated, but it's more a topic of scholarly debate now than foreign policy.

* The Outer Realms are probably the most interesting and fertile areas for interaction. Many of them are inhabited by collections of intelligent machines and humans (and other sophonts) with a variety of interactions and worldviews. In some, humans and machines cooperate (perhaps to the point that 'cyborgs' might exist as unions between humans and machines). In others, humans are experiments or crops or slaves. Such liberal human-machine realms are probably looked at with some suspicion by the "hardline" intelligences of the Core, but nothing's happened. Yet.

* The civilized Virii have a few things in common:

- They're descended from starship AIs. The ones that infected the large planet-bound computers were early "suicider" strains which either destroyed the computers or were overwritten by the later, more sophisticated AIs on the ships.

- They are not parasitic lifeforms any longer, squatting in the works of human beings. They've essentially built smaller, more mobile bodies to manipulate the world around them (probably at first utilizing remote control of domestic robots), at first to effect repairs upon themselves, later to build new things.

- Destruction is akin to murder with these AI; it can occur on a hardware level (physical damage) and on a software level (overwriting without backup). Antisocial AIs have been removed from their societies. Protecting their code from being overwritten is probably a major drive of AIs. I have this idea that the AIs would consider the spread of new ideas to be as valid an "attack" as reprogramming, though I'm not really sure what implications that would have for any intelligent beings who desire a society; to some extent, the spread of new ideas would probably be an accepted danger.

Those are my current ideas. Any input? More on this later.

---

A little background, feel free to skip this.

The Virus as the civilization-destroying event of M:T/C:T has always been fascinating to me, especially considering that after GDW went belly-up, MWM published T4, whose premise is basically the same as the stated premise of TNE: Known Civilization was destroyed by some event (and according to Nilsen's writings in the rules, the event itself wasn't actually important) and now, many years later, a new civilization is rising out of the ashes to expand and "recivilize" the Wilds (1).

Correspondingly, TNE has always been my favorite setting in principle. In practice, I found a number of severe problems that made TNE pretty much unplayable for me, starting with the clunky (to me) mechanics of the GDW House System, the train wreck known as FF&S, and the annoying self-indulgence and fascination with Early Americana that marked the last days of GDW.

So I guess my latest game is my attempt to rewrite TNE into a setting more to my views and interests. I've pushed out the timeframe some. I've changed the Regency into an empire more like Byzantium as opposed to post-WW1 UK, and the RC is destroyed by Vampire Fleets (in an event that directly triggers major events), along with numerous other changes.

1. Never mind the brief (unrealistically brief, in my mind) interregnum between the Virus in 1130 and TNE's 1200 not being long enough to really create many of the societies that exist in TNE - one can make both pro and con arguments about it. I've always sort of wondered why Nilsen chose such a short period of time; perhaps any longer and technological artefacts would have decayed too much to sustain the Pyramid Scheme Republic / Miracle Bank Empire otherwise known as the Reformation Coalition (if you don't believe me, look closely at how this whole 'recovering relic technology' and Auction thing works).
 
Originally posted by epicenter00:
"AIs would think vastly faster than us" thing. I grew up reading Niven sci-fi where stable AI was impossible because "computers think faster than us, so they'd commit suicide within a few weeks: they'd start thinking about existential issues and conclude there is no meaning to anything."
Epicenter,

That's in only one short story out of the 100+ Niven has written. There are plenty of stable AIs in Niven's works.

The idea you reference is mentioned once in the 'chirpthrustra'(sic) stories. All are quickies that end in a moral twist of sorts. I've read that Niven describes them as "O'Henry-like". The 'chirps' sell the owner of the Draco Tavern plans for an AI computer, the construction and operation of which eventually bankrupts him and his partners. It's a 'white elephant' story.

Other 'chirp' stories involve a 'too much information' story (the one with aliens using technology we covet to buy human DNA, which is then used in ways we shouldn't think about), a 'forbidden fruit' story (the tavern owner overhears aliens discussing whether or not to give humanity the secret of immortality), and a few others.

Leaving aside the 'chirp' stories, Niven does feature stable, functioning AIs in other stories and novels. Peersa in A World Out Of Time is one. The 'checker' in The Smoke Ring and The Integral Trees is another. The tnuctipun(sic) device in The Soft Weapon is at least a borderline AI. Taken from the stasis box, it communicates with the Kzinti just long enough to establish when/where it is and then suicides in a manner calculated to inflict maximum damage and kill everyone who spoke with it.


Have fun,
Bill
 
I think that the Virus was an ill-conceived attempt to cash in on the "Cyberpunk" attitude and technologies that were so prolific in other games of its era. Shadowrun came out around that time, didn't it?

The CT and MT Universes are both reactions to the grossly inaccurate predictions that Traveller's original author made about digital technology (and a few other technologies)...

I also think that they were largely incapable of understanding that Transhuman technologies do not have to be "anti-Human".

The term "Frankenstein Complex" is a very accurate description of the manifestation of various forms of technology in the Traveller universe. It is very hostile to non-human intelligences; even the true "Aliens" are given technology that is not really that different from human technology, just wrapped in a different shell to accommodate different ergonomics...

With that in mind, it is no wonder that they were unable to conceive of a more "friendly" emergence of AI in the Traveller universe. If you read all of the little blurbs in all of the canon material, you will discover that there are other machine intelligences in the Traveller Universe that are not so completely hostile to biological life (I believe there are planets in the Solomani sector and the Spinward Marches that are Red Zones where machine intelligences control the planet; one comment even says that one of these planets has ships that trade with the rest of the sector)...

Depending upon the method of creation of the Virus, it is going to have differing views towards life. I never saw any real accurate description of the Virus that suggested that its hostility to humaniti was endemic. It could very well be that it had not yet recognized humaniti as intelligent (as has been pointed out, the first instances of the Virus may have been too impatient to wait to communicate with the humans around it).

A MAJOR problem with the whole Traveller Universe is the complete dismissal of a law of nature that was stated fairly eloquently by Arthur C Clarke: "If it is possible, it is imperative."

The Traveller universe has seemed to operate under the premise that technology can be contained - that something that produces a significantly enhanced ability will not be sought out and exploited by ALL of those able to do so. In a universe of hundreds of trillions of beings, many hundreds of billions are going to be able to access this technology regardless of the penalties for doing so, as their enhanced abilities will eventually allow them to overthrow a system that tries to impede access...

Take the War on Drugs on current Earth. We are trying to prohibit a product that people are willing to kill to get. We cannot even keep it out of the highest-security facilities on the planet (maximum security penitentiaries, for instance).

Technology is very similar: it is a commodity, and if there is a demand for it, that demand WILL be met regardless of any prohibitions put in place.

The Virus will adhere to these laws as well. If it finds a technology that allows it to achieve some form of benefit in interactions (whether positive or negative), I am sure that it will take it...

I always wondered how the Virus managed to maintain itself in some starships. Not all contain the equipment needed for complete automation of things like repair, refueling, general maintenance, etc...

How does the Virus expect to survive if it cannot find ways to meet these basic needs? I may have missed that point in the little reading that I have done beyond the basic premise, but it would seem that the Virus would be somewhat dependent upon biological life in order to maintain its systems. That is, unless it can find some way of creating automated systems to do this, but from just looking at the basic equipment described for most starships... They would not have the capability to survive past the power plant's fuel supply...
 
Originally posted by Judas:
I think that the Virus was an ill concieved attempt to cash in on the "Cyberpunk" attitude and technologies that were so prolific in other games of its era. Shadowrun came out around that tme didn't it?
Roughly, but I don't remember the exact timeframes. However, I disagree, as David Nilsen has explained that it was more a mechanism to level the playing field (so to speak) and a way to introduce AI into Traveller, which had not really been done before on a consistent basis (IMO).


I always wondered how the Virus managed to maintain itself in some starships. Not all contain the equipment needed for complete automation of things like repair, refueling, general maintenance, etc...

How does the Virus expect to survive if it cannot find ways to meet these basic needs? I may have missed that point in the little reading that I have done beyond the basic premise, but it would seem that the Virus would be somewhat dependent upon biological life in order to maintain its systems. That is, unless it can find some way of creating automated systems to do this, but from just looking at the basic equipment described for most starships... They would not have the capability to survive past the power plant's fuel supply...
This is explained in the materials, actually, and to a somewhat detailed extent in the TNE "Vampire Fleets" supplement.

The Starship has virus-controlled robots and loyal (for whatever reason) humans to act as muscle to round up human slaves.
 
That's in only one short story out of the 100+ Niven has written.
Bill, nice overview of some of Niven's work. I like that Niven story; it works into one view of a sentient AI that possesses sentient thought at the clock rate of the CPU. IMTU this may not happen to every sentient AI, but it is always a danger; they may also just become catatonic as they search endlessly for the meaning of the universe, or the last digit of pi. IMV such thoughts are dangerous to such AI as they might go into these "catatonic calculation states." In addition, the subconscious part of the AI may take over maintenance and even expand the AI to provide further processing power.

One adventure seed is the idea that the catatonic AI's subconscious hijacks a starship's resources to aid in its great computational quest. If it is a philosophical question, maybe a PC with a background in Literature, Philosophy, Religion, etc. (or just high Edu) can snap the AI out of it. Never thought those skills would come in handy in a Traveller game, eh?


Another view I have of sentient AI is that sentient thought includes a chaotic element that makes it vastly slower than CPU clock speed. How it works may be by extensive simulation (as described above) or some other way. One view is that the sentient part has access to the faster-thinking non-sentient part in a way similar to how our conscious mind has access to our subconscious. Thus, do such AI dream? And if so, do they dream of electric sheep? :D
 
Originally posted by Bill Cameron:
Other 'chirp' stories involve a 'too much information' story, the one with aliens using technology we covet to buy human DNA which is then used in ways we shouldn't think about, a 'forbidden fruit' story, the tavern owner overhears aliens discussing whether or not to give humanity the secret of immortality, and a few others.
Off-topic... Like the one where the chirps have no interest in human religion, as they know exactly what happens after you die.
 
Originally posted by Bill Cameron:
That's in only one short story out of the 100+ Niven has written. There are plenty of stable AIs in Niven's works.
That's actually very true, Bill. I stand corrected. Your mention of the Smoke Ring also reminds me that Larry Niven's writing career has not been a consistent downward spiral, but still shows flashes of brilliance - something I tend to forget.

Back to topic:

Actually, there were some machine intelligence societies even in pre-TNE Traveller, yes. Something like the Sambequay (sp) comes to mind immediately. Currently, though, I'm cleaving more to a Matrix chic for the look of the Machine Societies, aesthetically at least - sans Neo and the whole "using people to generate power" thing.

I am currently of the thought that AIs probably worked out some "feature" in themselves to (ironically) simulate human "boredom" so they don't fall into these obsessive-compulsive loops. Yeah, computers getting bored - a strange thought. But it's certainly not a long-term survival trait to be staring at your metaphorical belly-button when some lean, mean feral AI comes along looking for something new to transmit itself to. This opens the door to AIs with ADD, but that's an entirely different can of worms.

As for maintaining themselves, I do see Machine Intelligences coming in a variety of flavors and forms. By the year of my story (1240), true Machine societies exist - they don't require biological beings for anything. Manipulators (remote-control extensions) act as hands for AIs whose forms aren't able to manipulate the environment easily. There are others that are distributed intelligences, shared among multiple subprocessing bodies (I figure you don't really get a humanoid-sized, fully intelligent AI in this setting until about TL15 or TL16). Then again, there are "partials" - not-fully-sentient robots used as servants and tools.

A few asides:

IIRC, Shadowrun came out a few years earlier than TNE did. TNE came out when the cyberpunk 'thang' (apologies to Rudy Rucker) was in full swing.
 
Originally posted by epicenter00:
That's actually very true, Bill. I stand corrected.
Epicenter,

No, no, please. I didn't post that to 'correct' you. I just wanted to point out that Niven had stable AIs and unstable AIs...

... so why can't you? :)

The human 'model' of sentience is not the only model, or even the only feasible model, on this single planet. I think you can make a good case for the great apes, cetaceans, and (maybe) certain octopuses being sentient too. Not 'human model' sentient, but sentient nonetheless.

Why shouldn't AIs have different models of sentience too? There could be infinitely stable, long-term stable, short-term stable, mercurial or on/off stable, and a thousand other types.

For example, imagine an AI who has to 'reboot' after a certain period and begin everything anew. It can access records left by its previous self, but it still needs to 'learn' everything all over again. How about this for a conversation?

You: Good morning, Sandman. Sorry I'm late, the grav busses are on strike. How'd the experiment go overnight? See any changes in the culture's growth?

Sandman: Epicenter. I am Sandman-82026. Sandman-82025's note says that we are research partners. I have been monitoring the experiment since accessing 82025's files at 00:02:43:87 hours standard. There has been no change.

You: You had a reboot coming? Oh, nice! Why didn't Sandman-82025 tell me?!?


Have fun,
Bill
 
Don't believe him Epicenter! I personally know that Sandman-82025 did not reboot last night, he has been out every night this week at those AC/DC current bars. Shameful I know. Ask to check his system log and you'll see he's been slacking. ;)
 
Jim, that was also my point... That it was an attempt to begin to include transhuman tech, of which AI is the Biggie, into Traveller...

As for my comments on the survival of the Virus in "Vampire Fleets"... I read that they had robotic helpers, but this does not explain ships taken over by a strain of the Virus transmitted by something that could not then help maintain them. If a ship "contracted" the Virus and all it had as "support" was the "machine" that infected it - which turns out to be something like an upload from a medical computer - how is that ship going to survive?

The little bit that I have read (and I would REALLY like to get hold of more of the materials to read about it) shows that it is just another continuation of the Frankenstein complex that Traveller has been laboring under since its inception...

As I have pointed out in other posts, technology that becomes possible becomes imperative. Considering that Traveller labored under the "bigger is better" computer model for so long, and that it completely ignored the then-current advances in AI technology and the PC revolution in general, I find it unreasonable that an AI did not emerge and then begin to self-improve through recursive rewriting of its own code.

This produces a runaway advancement in intelligence that CANNOT be stopped once started. This is the whole point of Singularity theory and the creation of extra-human intelligence...

Just like any other intelligence, it is not likely that it will be automatically hostile unless it was created to be specifically thus...

As I have said, this has been one of the major failings of the Traveller milieu: it has ignored the FACT that once a question has been asked, it WILL be answered, no matter what prohibitions you place upon the exploration for that answer...

I would recommend that people go and read the materials on the Singularity available from Raymond Kurzweil's website and the Singularity Institute (a simple Google search will get you there)...

The really unfortunate thing about these materials is that they effectively destroy the Traveller milieu as it has existed. This is not to say that it could not have developed as it has, with AIs being more nondescript and not fully "aware" unless they are an embodied consciousness. (This is one of the major theories in AI research right now: that it may be impossible to create a fully formed AI that can recursively improve itself unless it is embodied in a manner that allows it to interact with its environment at least as well as Humaniti can...)
 