
AI - Smart Ships

Not meaning to hint at the vessels of the Vampire Fleets during the time of Virus, I'm gathering thoughts and opinions about the role of artificial intelligence aboard a starship.

My particular interpretation IMTU has been that AIs can exist and operate as tools, and often de facto crew members, if potential abuse by PCs is carefully monitored and managed.

I like the concept that a vessel can develop and express a personality unique to itself as well as create and maintain relationships with the crew of said craft. I see many benefits to the chief engineer who can 'converse' directly with the ship in his charge and better establish an 'empathy' to the machines that compose it.

TV and film have presented examples of both good and bad situations arising from artificial intelligence being 'hardwired' into a starship, but such examples aside, a vessel's computer without 'soul' is little more than a mute, detached, glorified abacus rather than a trusted companion and valued confidant.
 
If you're looking for ideas, Anne McCaffrey's Brainship series might have some for you. Not exactly AI, but analogous, and it touches on your "soul" idea.
 
I find it better to define high-end computer systems as artificial intelligence, while truly self-aware machines are artificial sentience.

Telling the difference between the two can be a problem if the AI program is well written ;)
 
I thought GURPS Traveller handled it well: most computer-based programs just made the character better by augmenting their skill, or gave that option anyway.

I've read a few sci-fi books where *everything* is Artificially Smart and just didn't buy it. A few books ground to a halt because of some dumb manufacturing technique leading to the Item Rebels Against Its Owner cliche.

I think the best feature of Star Trek: Voyager is when they deactivate the know-it-all-not-to-mention-obnoxious-Doctor right in the middle of one of his diatribes. :rofl:


 
The proper term is "Synthetic Intelligence," not artificial. Many SI ships feel the term "artificial" implies that they are some kind of fake or trick instead of a "real person." They hate being called "AIs." :)

As to the difference between sentience and intelligence: while technically you are correct, in practice such a distinction is a difference that makes no difference.

As for SIs, it would be interesting to see how the legalities play out with such "people." What legal rights would a ship have? I think that, as a practical matter, it would depend on how much control the SI exerts over the ship. If it can fly the ship by itself without human input, it would probably have more legal rights than one with less ability to affect operations. Some captains may not like giving the ship what is in essence the ability to mutiny.

Also, there is a possibility that an SI could evolve or come into existence after years of service, as a natural response to the growing complexity of its programming, memory, and hardware. I am thinking of neural-net software that has been running for years or decades, rewriting itself and evolving as it learns.
 
I think the best feature of Star Trek: Voyager is when they deactivate the know-it-all-not-to-mention-obnoxious-Doctor right in the middle of one of his diatribes. :rofl:
Doctors are one thing; an SI Engineer or Pilot would be a whole other ball of wax.

You would have to isolate life support, damage-control bulkhead fittings and hatches (1), and heating and ventilation from the SI's control. Otherwise, it would not be a good idea to make it mad. (2) Having a ship try to kill you would probably be too cliche to collect insurance on.

Do you isolate the comms from the SI? If you don't, you could find it posting on forums <turns, looks into camera> or chatting up other ships and comparing notes on captains and crew members.

The problem is, the more systems you isolate from the brain of the ship, the less effective it is, the less valuable it is. It becomes little more than decoration. And possibly an expensive one at that.

(1) An automatic system built into the bulkheads to isolate spaces in the event of a hull breach seems like a good idea, especially in places that have domes or windows of material different from the standard hull metal. It would close hatches and air vents between spaces automatically. You could probably do this all mechanically, without any need for electrical control.
(2) I am thinking that emotions evolve in one of several ways. The ship develops routines that mimic humans, say "getting angry" when you leave the galley a mess. Emotions could develop as a kind of presentience or subconscious processing, a kind of vague thinking. Fear causes certain responses and changes in focus, and those responses have evolutionary results.
 
Andromeda

Only if their holo image looks like Lexa Doig, lol.

All the Star Trek iterations had what could have been called "Smart Computers".
Star Wars had plenty of "Sentient" Droids.
Many other Scifi (print and film) have had AI.

Question is how smart and how much control they have....
Totally up to individual GMs like much of the rest of the game.
 
I avoid AI and all its pitfalls, as 'above TL15'.

However, I retain the fun aspects by using a 'Persona' program, generated as suggested in LBB2. This simulates a personality sufficiently to pass a Turing Test, but cannot get mad or go mad since it has no genuine emotions or free thought.

This is the sort of software that would be fitted to pleasure droids as a lucrative sideline.
 
Red Running Lights

Not the intended topic of this thread, but I do imagine a space-faring bordello of sorts could be a profitable enterprise. Perhaps a refitted Scout/Courier or Free Trader, so staffed (equipped?) with said silicon concubines catering to the tastes of those preferring the touch of a non-organic partner, might not be an unheard-of encounter in the seedier subsectors.
 
Anyone else been reading any of the Posleen series of books by John Ringo?
AIDs are personal AI devices, hence the name. There is a deep conspiracy concerning them in the series. They are a tightly controlled AI, not really a synthetic intelligence.

There are also Buckleys, a human-crafted AI device that is much more limited in technology, and not made by aliens. It has what is suspected to be an image of a human who was brain-scanned a couple of times as its persona. It can be run at several levels of emulation; at the higher levels it approaches being a true AI, but becomes very unstable and prone to crashing.

Its startup goes kinda to the rhythm of: "Hello. What? What the hell am I!?! Aw, shit. Who are you and what do you want?" It is prone to analyzing the failure points of whatever you're doing, always ready with an exhausting list of what can go wrong and the odds of each item on the list occurring.
 
Here's how I handle it IMTU (which is definitely not the OTU):

PersonaCore
The personal touch for your interactive systems needs, by Matsushida LIC.

PersonaCore is a firmware system that forms the core of an advanced computer network which allows the user to replace skilled personnel with various advancements in the latest expert systems and software. Although some skills cannot be replaced by this system, most of the tedium and time-consuming work involved in routine tasks can be reduced dramatically by introducing one of the PersonaCore lines into your network.

The PersonaCore line includes a wide assortment of pre-generated standard personalities for enhanced ease and efficiency when interacting with the system. In the case of particular needs our engineers stand ready to custom design a personality to your exacting requirements.*

Contact your Matsushida LIC representative today for more information, or visit our interactive kiosk at Gehenna Downport.




*Subject to local manufacturer’s tech level limitations. Local laws may also apply.



Rules:

PersonaCore Expert System

This technology allows the computer on the ship to replace various crew positions with expert systems, subject to some limitations. The firmware and software for this are currently only available through a very few manufacturers; on the Rim, Matsushida LIC, located in Styx on Gehenna, is the sole provider of the technology.

The TL limits are as follows:

TL-12 True “Expertware” allows for basic skill level (0) in limited skill areas. Acts mostly in an advisory capacity and has limited decision making capabilities but can present accurate predictions based on the knowledge base and some user input. Particularly useful in areas such as Legal, Admin, Mechanical, Electronic, etc.

TL-13 Advanced “Expertware” allows for skill level 1 in a wider range of areas. Requires less input from a user and can make intuitive predictions and diagnoses. Can be used in medical applications.

TL-14 Semi-Autonomous Personaware has the same skill level, but adds a personality core to the routines which, based on the personality type used, can make independent decisions based on the system’s own observations and experience. Skill areas that may involve potential safety risks (Pilot, Navigate, for example) have an inbuilt interrupt that requires the user to select a choice from a list presented by the program.

TL-15 Fully autonomous Personaware acts independently of human interaction within a wide range of parameters and skills. User override is available for situations where a higher skill level is needed, but the system replaces most skills at level 2. The decision-making process is less dependent on the personality type; instead, the personality acts as a fully interactive interface that makes even complex tasks easier for the user to perform. The personality can make suggestions, carry on a conversation, and learn from experience, and is 100% Turing compliant.

At all levels most of the following basic parameters apply: (TL-15 is shown)

1) Requires 10 CPU/20 Storage

2) Requires all ancillary software needed for performing the required tasks. For instance, if the ship is going to Jump, then the needed level of Jump Program is required. All rules of program space and use limitations apply.

3) Replaces crew in most positions (Steward would require a remote robot the system would use to interact with passengers, for instance) at that position’s required skill level of 2.

4) At least one engineer position must be manned by a human. Safety and navigation laws require that a human pilot be controlling all vessels entering a traffic corridor.

5) Weapons stations may be “manned” by the system, but will only be able to fire on one target at a time. Skill level is 1 for all weapons. Gunner Interact is required. The system presents a list of target choices and the user selects the target from that list each round.

6) All success rolls are made by the referee.

7) Manual override is possible from any crew station for that skill.

8) Evade programming limited to Auto-Evade if that program is present.
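The TL tiers and crew-replacement limits above can be sketched as a small lookup. This is just my own illustration of the house rules in this post (the tier names and numbers come from the text; the function and structure are mine, not anything official):

```python
# Sketch of the PersonaCore TL tiers described above.
# Numbers mirror the house rules in this post, not any published rulebook.
PERSONACORE_TIERS = {
    12: {"name": "Expertware", "skill": 0, "autonomous": False},
    13: {"name": "Advanced Expertware", "skill": 1, "autonomous": False},
    14: {"name": "Semi-Autonomous Personaware", "skill": 1, "autonomous": False},
    15: {"name": "Fully Autonomous Personaware", "skill": 2, "autonomous": True},
}


def crew_skill(tl, position):
    """Effective skill when PersonaCore fills a crew position.

    Gunnery is capped at skill 1 at every tier (parameter 5 above);
    other positions use the tier's base skill level. Below TL-12 the
    system simply is not available.
    """
    tier = PERSONACORE_TIERS.get(tl)
    if tier is None:
        return None  # no PersonaCore below TL-12
    if position == "gunner":
        return min(tier["skill"], 1)
    return tier["skill"]
```

So a TL-15 installation replaces a pilot at skill 2 but a gunner only at skill 1, and a TL-12 ship gets advisory skill-0 help at best.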




PersonaCore sets priorities for starship operations based on user-defined limits, but at all times the system checks the logic of those tasks against the following two rules:

1) Ship / Crew safety is paramount and the system must take no action that will place those two categories at risk while operating.

2) The system will engage in combat only if no means of evasion or escape is available. If combat is engaged the first rule will be applied by continuously searching for a means of escape or evasion.
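The two rules above amount to a simple action filter. As a toy illustration only (the flag names and function are hypothetical, not part of the rules text):

```python
def permitted(action, threat_present, escape_available):
    """Toy filter applying the two PersonaCore priority rules.

    Rule 1: refuse any action flagged as endangering ship or crew.
    Rule 2: allow combat only when a threat exists and no means of
            evasion or escape is available.
    """
    if action.get("risks_ship_or_crew"):
        return False  # rule 1 is paramount
    if action.get("is_combat"):
        return threat_present and not escape_available  # rule 2
    return True  # routine operations pass through
```

In play this means the referee can adjudicate the system mechanically: as soon as an escape route opens up, the combat action stops being permitted and the ship breaks off.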
 
This simulates a personality sufficiently to pass a Turing Test, but cannot get mad or go mad since it has no genuine emotions or free thought.
If you can pass the Turing Test, then how can you be sure that the program is not sentient?

As for going mad, I guess it depends on how its emotions evolve and where they come from. Self-preservation may require it to at least act mad to keep the ship maintained, or to protect itself against threats coming from people.

And the program does not have to have a real emotional mood anyway to become a threat to its crew. It could come to a very rational deduction that certain crew members need to die in order to protect the ship, its mission, or for some other reason.
 
It is logically possible for a computer to pass the imitation test that is the Turing Test at its most basic level without being capable of creative thought or emotion. All it has to do is behave in a way that can fool the average person. But it would also depend on the complexity of the testing: if abstract concepts requiring genuinely creative thought were used (like questions requiring an inventive answer that perhaps had no "right" answer), then the machine would fail unless it was really sentient.

If, like HAL, the machine was programmed to act a certain way to protect itself at the expense of the crew, then that doesn't mean it "went mad" either - just that it had been programmed to have that as an option.
 
If you can pass the Turing Test, then how can you be sure that the program is not sentient?

How indeed. :devil:
That was the premise of an adventure I nearly ran. Until the players went down a completely different route and sidestepped the whole issue. :mad:


As for going mad, I guess it depends on how its emotions evolve, and where they come from. Self preservation may necessitate the need to at least act mad to keep the ship maintained, or protect itself against threats coming from people.

If it isn't AI it doesn't evolve emotions, they're just simulations. It will always stop short of causing trouble.

And the program does not have to have a real emotional mood anyway to become a threat to its crew. it could come to a very rational deduction that certain crew members need to die in order to protect the ship, its mission or for some other reason.

Not unless it's programmed to do so. It can't make conscious decisions, it just follows if-then subroutines.

Sabredog: your Personacore sounds much like mine - but you described it a lot better. :)
 
There are several interesting ideas from this thread.

First, to define artificial intelligence you really need to define what intelligence is. And this mark keeps moving, as we keep finding that other animals possess some attribute we once used as a hallmark of intelligence (gotta keep Homo sapiens superior somehow, apparently).

And a sufficient simulation may well be the same thing: a complete simulation would be no different from the real thing on any measure you can perform - the AI (or SI) would have it as much as a natural sophont would. If it was a complete simulation.

Bottom line is that it really comes down to definitions: expert systems vs. artificial intelligences vs artificial sentience vs sentience. This is a sliding scale that is really hard to pin down to just what it actually is. The lines are drawn in the sand.

As for ship AI - I would have to agree that there would have to be safeguards, just as there are safeguards against mutinous crews and/or passengers. Software-based perhaps more than hardware based, but the same sort of idea, if different in execution.

But then you have to worry about paying the AI if it is an integral part of your crew...:devil:
 
Almost all starship computers in my setting are intelligent. They have names - for some reason "Mac" seems to be a common one - and (simulated) personalities. They are competent enough for all routine ship operations, but are programmed to call for the crew if something unusual should happen.
Ship computers in my setting do not go "mad"; they are usually the most stable and reliable "crew members". However, there are always both a verbal and a manual override to shut down the computer (and the Federation Patrol has the override protocols for all Federation-registered ships).

There are sometimes problems with computers. One case was when an explorer in a "singleship" (built to be handled by one person) landed on a hostile planet and got himself killed while outside the ship. The ship's computer went into "sleep mode" when there was no more reactor fuel to power it.
Twelve years later a colony was established on the planet, and a survey team discovered the "wreck". They forced the airlock open and connected the ship to an external power source, which reactivated the computer.
Unable to contact its captain, and with internal sensor readings showing that the ship was in urgent need of repairs, that the airlock had been forced open, and that a number of unknown intruders were inside the ship, the computer activated its anti-theft program.
It was not programmed to seriously harm humans, but it used all the available technology to hinder and if possible trap them, from the gravity control down to the repair drones.
In the end the survey team, trapped in the cargo hold and running short of oxygen, had to convince the computer that its captain had been killed in an accident many years ago and would not return.
The end result of the negotiation was a compromise: the colonists would repair the ship and "accompany" it to the Colonial Ranger post, where the authorities would have to verify their story, and then the computer would shut down and "hand over the ship (and itself)" to the authorities.

The players and I liked this adventure, one of the first of our current ongoing campaign, and my way to provide the characters with a starship of their own.
 
Well, myself, I have often designed robots using the robot supplements, with control interface devices.

Often I don't give these robots chassis, but tuck them away in various areas,

such as "Otto" the autopilot, basically a brain with an interface, a voder, a mike, eyes, and a screen built into the bridge,

or "Gunny" the gunnery robot, again basically a brain, mike, speakers, and a skill set.

Perhaps not quite an AI, depending on tech level, but I often spend extra so you have a ship able to fly and use its weapons autonomously, at least on voice command. Handy if the pilot got shot and you are needing a quick getaway.
 
The reason I even relented to endless player begging for AI-equipped ships was so they could operate larger cargo vessels without having to hire an army of NPC crew, and for long-range sleeper ships for deep exploration in a region of space IMTU that requires crossing a 16-hex rift. The PCs go into cold sleep and the ship does the rest.

But mainly it's like you said about expert programs for gunnery or autopiloting that are pretty common.
 
If, like HAL was, the machine was programmed to act a certain way to protect itself at the expense of the crew then that doesn't mean it "went mad", either - just that it had been programmed to have that as an option.
If it's your ship that is trying to kill you, whether it 'went mad' or logically concluded that it needs to kill you is irrelevant. It's a difference which makes no difference: you'll be just as dead either way, if it succeeds.

The same is true of the Turing Test. If all you have is the responses from the test subject, it is difficult to objectively prove it is not sentient. And it really does not matter: if it acts like a duck and quacks like a duck, it's a duck, regardless of what is going on inside the box.

Slightly off topic, but it occurred to me that you would probably want your machine to develop a sense of humor. This would prevent it getting stuck in "logic loops" of the kind that Kirk used against Mudd's androids.
 