
Computer and Robot-Brain Paradigm

Golan2072

In the (roughly) two years of my involvement with Traveller, I've heard a lot of criticism of the Traveller computer paradigm, and I am inclined to agree, at the very least, with a portion of this criticism. The main issue, IMHO, is that Traveller uses a relatively accurate (is it?) model of how computers worked in the 1960s and 1970s, prior to the info-tech (and personal-computer) revolution of the early 1980s. That is, Traveller usually portrays computers as massive machines that use magnetic tapes for storage and draw a great deal of power. Also, Traveller computers usually have "storage space" and "CPU space", with almost no reference to processing power as something separate from the amount of available RAM.

My layman's understanding of current computer technology (I am not very knowledgeable in this field) leads me to propose the following model for Traveller computers:

The important characteristics of a Traveller computer should be its long-term storage space, its RAM (short-term storage) and its processing power; RAM might be subsumed into processing power if you want. This would lead to a system similar to T4, in which computers have processing power and tasks (or programs) have complexity, and the comparison of these two values determines the computer's performance with that task. I'd add long-term storage to this; but with modern concepts such as multitasking and virtual memory, RAM seems to me to be just another component of processing power in the final count. So you could load a lot of programs at once, but too many will reduce performance (more HD-to-RAM swapping and CPU resource overload).
 
I don't tend to think of the actual technology in Traveller computers. IMTU they are future technology (vaguely described).

I interpret the CPU x/y number as:
x - programs resident in memory (available for use)
y - programs that can run at the same time (running tasks)

Storage I don't think about. The computer has enough storage to store the programs, immediately load them on startup, and hold whatever incidental data the PCs can throw at it.

The programs in the rulebook (T20) have a storage cost. I interpret this as the cost for loading the program into memory. If the PCs buy more programs than can be fitted into the computer, the computer stores them, but the PCs can't load everything into memory for use without changing the program load sequences (DC10 computer check). Changes are put into place at the next computer startup. A hot-swap of programs is more difficult (DC20 computer check).
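To make this concrete, here's a minimal sketch of that interpretation in Python; the class, the slot counts, and the swap-out policy are my own illustrative inventions, not anything from the T20 rules:

# Minimal sketch of the CPU x/y interpretation above, where
# resident_slots is "x" and running_slots is "y". Illustrative only.
import random

class ShipComputer:
    def __init__(self, resident_slots, running_slots):
        self.resident_slots = resident_slots  # x: programs loadable into memory
        self.running_slots = running_slots    # y: programs runnable at once
        self.resident = []
        self.running = []

    def skill_check(self, dc, modifier=0):
        """d20 + modifier vs. DC, as in the DC10/DC20 checks above."""
        return random.randint(1, 20) + modifier >= dc

    def load(self, program, hot_swap=False, modifier=0):
        """Load a stored program into memory; hot-swapping is harder."""
        if len(self.resident) >= self.resident_slots:
            dc = 20 if hot_swap else 10  # DC20 hot-swap, DC10 at next startup
            if not self.skill_check(dc, modifier):
                return False  # failed to rearrange the program load sequence
            self.resident.pop(0)  # swap out the oldest resident program
        self.resident.append(program)
        return True

    def run(self, program):
        """A resident program can only run if a running slot is free."""
        if program in self.resident and len(self.running) < self.running_slots:
            self.running.append(program)
            return True
        return False

# e.g. a CPU 4/2 computer: four resident programs, two running at once
ship_comp = ShipComputer(resident_slots=4, running_slots=2)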

Starship computers incorporate the obvious programs (critical functionality) as part of the hardwired functionality. The CPU x/y number is what is left.
 
Originally posted by Employee 2-4601:
In the (roughly) two years of my involvement with Traveller, I've heard a lot of criticism of the Traveller computer paradigm, and I am inclined to agree, at the very least, with a portion of this criticism. The main issue, IMHO, is that Traveller uses a relatively accurate (is it?) model of how computers worked in the 1960s and 1970s, prior to the info-tech (and personal-computer) revolution of the early 1980s. ...snip...
Actually, the original Traveller expressed a layman's understanding of the computer technology of the late 1950s and early 1960s.

IBM announced the System/370 in 1970 and had a working example in late 1970 or early 1971. The improvements over the System/360 were Virtual Memory ("Storage" in IBM-speak), a choice of two online environments (CICS and TSO), and larger arrays of online storage (disk drives). The new operating systems, now known as VSE and MVS, offered multi-tasking support and two different virtual memory schemes. VSE, the baby O/S, offered a single 16 MB virtual memory space into which all programs had to fit, while MVS, the big boy, offered a separate 16 MB memory space for each task. Typical size, in Traveller terms, would be about 1 dton for the main system and another for the storage array.

BUT, also keep in mind that those systems could have come in smaller packages; as several manufacturers at the time found out, customers who paid $1M for a system expected to see a big box. From "IBM and the Seven Dwarfs" - IBM, Univac, RCA, GE, Honeywell, NCR, Burroughs, CDC - at the beginning of the decade, the mainframe business dropped to "IBM and the 4 Dwarfs" (maybe 5) - IBM, Unisys, Cray, and Hitachi - and some considered DEC's VAX clusters to be mainframes. But, enough geek-history.


Perhaps, for Traveller, it would be nice to see what everyone's complaints about the CT Computer rules are?
 
Hey,
That is why the "Proposed Alternate Universe Project" thread is in "The Lone Star" forum. What would you change and why? Here is your chance to do more than discuss the problem. Propose a solution!
 
I liked the Book 8 approach that allowed you to build your system from the ground up, so I would suggest using that as a place to start.

TL10 standard processors for dumb systems; TL15, 40% synaptic super-boxes for those high-tech features. IMTU we already have computers that can take the place of gunners; they cost millions, but they exist. I would like to see a better rating system for autonomy and decision-making, though the apparent Intelligence and Education ratings work well enough. I would add an Autonomy score that rates the computer's ability to make correct decisions when confronted with new data or strange situations.

When it comes to processor capacity . . .*
For every tech level above the introduction level, double the power.
Cut the size in half.
*(robject's idea; a worked example follows below)
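A quick worked example of that doubling/halving rule in Python - the base power and volume figures are invented placeholders, not canon values:

# Per-TL scaling rule (robject's idea): each tech level past the
# introduction TL doubles processing power and halves volume.
# Base power/volume figures below are invented for illustration.

def scale_processor(base_power, base_volume, intro_tl, tl):
    """Return (power, volume) for a processor built at tech level `tl`."""
    steps = tl - intro_tl
    if steps < 0:
        raise ValueError("not available before its introduction TL")
    return base_power * 2 ** steps, base_volume / 2 ** steps

# A hypothetical TL10 unit with power 1 and volume 1.0:
for tl in range(10, 16):
    power, volume = scale_processor(1, 1.0, intro_tl=10, tl=tl)
    print(f"TL{tl}: power {power}, volume {volume:g}")
# By TL15 you get 32x the power in 1/32 the volume.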

All of this is predicated on the idea that computers from TL10+ become more and more alive. At TL 15 they are very nearly a lifeform unto themselves.

Heck, to us, TL 9 computers would seem magical in their processing power, and their architecture would be alien.

(edit)
I agree that actual processing power should be as vague as possible. Computer stats should (IMHO) concentrate on game effects and PC interactions with the technology. The nitty-gritty is not that important to most players.
 
I think that, in the end, I'll go with something similar to the LBB8 two-component system - but replacing CPU "spaces" with Processing Power; that is, you could run more programs than your Processing Power supports, but doing so reduces performance. It'll all be an abstract matter of processing power vs. complexity, a good idea lifted from T4. Storage space will remain.
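As a rough sketch of how that comparison might resolve at the table - the modifier values below are placeholders of my own, not numbers lifted from T4 or LBB8:

# Abstract processing power vs. program complexity, with a penalty
# when total loaded complexity exceeds the computer's power.
# Modifier values are placeholders, not taken from T4 or LBB8.

def task_modifier(processing_power, running_programs, program_complexity):
    """Task DM for one program on a possibly overloaded computer."""
    load = sum(running_programs)  # total complexity currently running
    overload_dm = -max(0, load - processing_power)  # -1 per point over capacity
    headroom_dm = processing_power - program_complexity  # strong CPU helps
    return headroom_dm + overload_dm

# A power-4 computer running complexity 2+2+3 = 7 is 3 points overloaded:
print(task_modifier(4, [2, 2, 3], program_complexity=3))  # (4-3) - 3 = -2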

I'll also extend the CPU table, with lower volumes and prices for higher-TL versions of the same system (e.g. a TL7 desktop computer has the power of a TL6 supercomputer but is far cheaper and smaller).

Regarding "Fundamental Logic" and "Fundamental Command", I'll merge them into one "OS" rating for simplicity.

So basically, my designs will have:
1) Processing Power
2) Storage Space
3) OS
4) Applications
5) Peripherals (if any; "brains" installed on vehicles/ships will simply interface with the vehicle's/ship's sensor and bridge-control systems).
 
Why limit storage space? Even now storage is cheap. With optical media and quantum/light computers in experimental stages, storage will potentially become even cheaper. I'd just worry about limiting the number of programs the players can run at one time, and leave it at that.

Your description for Peripherals sounds more like sub-systems - minor systems under the control of a central system. Peripherals would cover the workstations/computer interfaces, internal cameras, etc.

I'd agree with combining the logic and command. This would assume that the easiest method of computer interface would prevail. A true AI should be able to figure out the ambiguity of natural speech - although misunderstandings could lead to interesting times aboard ship ("I'm sorry, Dave. I must have misunderstood. I thought you said to open the cargo bay doors").
 
In one explanation to deal with this, I proposed that semiconductors would not operate in a jump field, requiring starship computers to use alternate technology. Even with microtubes, you end up with computers that mass tons and are far less capable than current RL technology.
 
We're talking about CT-style Traveller computers, aren't we?
At least since MT, I think their representation in an RPG has been just fine (even the sizes and prices are quite OK).
 
Originally posted by Corejob:
In one explanation to deal with this, I proposed that semiconductors would not operate in a jump field, requiring starship computers to use alternate technology. Even with microtubes, you end up with computers that mass tons and are far less capable than current RL technology.
My current approach is to ignore the actual technology used to build the computer (e.g. mechanical, electromechanical, vacc-tube, transistor, semiconductor microchip, optical, DNA, quantum and so on) in most cases and instead use abstract measures of the following:
1) The way the data is processed (linear vs. parallel vs. network/"synaptic").
2) The amount of total (and abstract) processing power available.
3) The TL - this subsumes the actual technologies used (the same "processing method" at a higher TL means less volume/weight and a lower cost for the same processing power).

Originally posted by TheEngineer:
We're talking about CT-style Traveller computers, aren't we?
At least since MT, I think their representation in an RPG has been just fine (even the sizes and prices are quite OK).
I'll have to reread the MT section about computers - how is it represented, by the way?
 
Hi!

MT starship computers:
Abstract representation of processing power via "control points". No more messing around with program and storage areas...
The control point requirement is based, at least in part, on the price of the ship and its components.
To control the ship, the computer needs to be able to manage an appropriate amount of CPs.
Special interfaces enhance those abilities.

There's no real, traceable relation to real-world computers, so they are somewhat argument- and aging-proof here.
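As a rough illustration of the control-point idea (the price-to-CP formula below is my own invention for the sketch, not MT's actual table):

# MT-style "control points" (CPs): the ship requires CPs based on its
# price, and the computer (plus interfaces) must supply at least that
# many. The 1-CP-per-MCr10 rate is invented for illustration.

def required_control_points(ship_price_mcr):
    """Hypothetical rate: 1 CP per MCr10 of ship and component price."""
    return ship_price_mcr / 10

def can_control(computer_cp, ship_price_mcr, interface_bonus_cp=0):
    """Special interfaces enhance the CPs the computer can manage."""
    return computer_cp + interface_bonus_cp >= required_control_points(ship_price_mcr)

print(can_control(computer_cp=20, ship_price_mcr=180))  # True: needs 18 CP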

Personal computers:
This little thing is described as a supercomputer with pretty much the performance of a Model/1 starship computer.

Oh...have to watch soccer.......
 
Originally posted by Valarian:
Why limit storage space? Even now storage is cheap. With optical media and quantum/light computers in experimental stages, storage will potentially become even cheaper. I'd just worry about limiting the number of programs the players can run at one time, and leave it at that.

You have a point there, then; it is also similar to how the new (fourth) edition of Shadowrun deals with storage space (it ignores it in most cases). My idea, then, is to base a robot's apparent Education on some kind of "Library" program rather than on storage, and to subsume storage space into the CPU cost/volume.

Your description for Peripherals sounds more like sub-systems - minor systems under the control of a central system. Peripherals would cover the workstations/computer interfaces, internal cameras, etc.

By "Peripherals" I've meant anything beyond the "core" (CPU, storage and programs) which is linked to that "core"; I might have picked a wrong term.

On a different subject, I'm thinking about lowering the TLs for AI - the OTU has TL15 for "Low AI" and TL16 for "High AI"; IMTU it would be TL12 for "Low AI" and TL13 for "High AI", though limited by synaptic processing and the sheer price tag of such a machine.
 
A question about designing brains: how do I deal with processing-power ratings (0 to 9 IMHO, corresponding to the CT ship computer models) in a design system? LBB8 uses a linear scale (one more CPU module = one more CPU space), but I'd have processing-power costs grow much faster than linearly, with the exponent doubling at each step (e.g. if Processing Power 1 costs X, Processing Power 2 costs X^2, Processing Power 3 costs X^4, Processing Power 4 costs X^8 and so on).
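A quick sketch of that cost curve - the base cost X below is a made-up figure:

# Cost progression above: the exponent doubles with each point of
# Processing Power, i.e. cost(n) = X ** (2 ** (n - 1)).
# The base cost X = 10 below is invented for illustration.

def cpu_cost(base_cost_x, processing_power):
    """Cost for a given Processing Power rating (1 and up)."""
    return base_cost_x ** (2 ** (processing_power - 1))

for n in range(1, 5):
    print(f"Processing Power {n}: Cr{cpu_cost(10, n):,}")
# Cr10, Cr100, Cr10,000, Cr100,000,000 - brutally steep at the top end.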

Each CPU type (Linear, Parallel and Network/Synaptic) would have its own table with costs and volumes per processing power at different TLs.

Linear would be available from TL5 up, Parallel from TL8 up, and Synaptic from TL10 up, but costs and volumes at the initial TLs would be VERY high, and each type would become commercially viable for small units (i.e. robot brains) about two TLs after its technical introduction. The same goes for AI "OSes" - while you might get a "SHODAN" at TL10, she'll be a huge, expensive, experimental super-mainframe unit prone to malfunctions; at TL15, a "Rommie" would be standard issue for capital ships in some cultures (NOT the conservative OTU Imperium or the anti-AI OTU Solomani, but maybe the OTU Hivers...).

How would TL reduce costs and volumes? I'd go for a logarithmic (or logarithmic-looking, as my math in this regard isn't so good) curve, with a huge reduction over the two TLs past the introduction of each CPU type and then a far lower rate of reduction over the following TLs; physical limits would probably be reached within 5 TLs past introduction (though the Synaptic percentage would continue to increase past TL10).
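One way to get that curve shape is exponential decay toward a physical-limit floor; a hedged sketch, with every constant invented and meant for tuning:

# Cost/volume multiplier vs. TL: steep drops over the first two TLs
# after introduction, flattening toward a physical-limit floor.
# The floor and steepness constants are invented for tuning.
import math

def tl_cost_factor(tl, intro_tl, floor=0.05, steepness=1.2):
    """Multiplier on introduction-TL cost; nears `floor` ~5 TLs out."""
    steps = max(0, tl - intro_tl)
    return floor + (1 - floor) * math.exp(-steepness * steps)

for tl in range(10, 16):
    print(f"TL{tl}: x{tl_cost_factor(tl, intro_tl=10):.2f}")
# x1.00 at introduction, ~x0.14 two TLs later, near the 0.05 floor by TL15.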

And I might add a new TL16+ CPU type (what would I call it?) for the sake of Imperial Research prototypes and Ancients artifacts.
 