
Computer technology

The "Bridge percentage" also includes avionics, basic sensors, radio systems and even the navigators emergency sextant and magnetic compass. That leaves little room for the computer racks.

And we have no information about what the crew does their admin duties on. OTOH these are really primitive jobs. So why not simply assume that "computer" means there is a secondary system as part of the whole setup? And I doubt we need rules for SAP/R406 ;)

And "Processing power" hasn't been the problem in Process Control since the 1980s. Our "boys, still brand-new back in 1987, had computing power roughly equal to 68030 equipped SUNs of the same time. But no SUN then (or now) can handle the immense number of Data inputs necessary since their bus would simply be swamped.

Displaying processing data is quite simple and can be handled by "smart" terminals instead of the main CPU. X-Windows servers running on X-Terms do it, as do/did our old SIECOMP graphics terminals. Engineering graphics don't need 30+ frames per second. We did real-time system displays of moving machinery back in 1996.
 
So what I am hearing is that the "computer" is, even at TL7, dwarfed by its peripherals. But the inputs for flight/navigation and gunnery are very limited and basically are your avionics and basic sensors, that is, your bridge percentage. And more powerful computers to allow longer jumps or more accurate fire control are a trivial addition.

The engineering section requires powerful process control equipment such as you are used to. This is not the same as the Traveller Ship's (bridge) Computer. The Ship's Computer has no process control functions overtly in the rules, and since the size/complexity of engineering has little effect on the size/power of the computer required, none is implied.

If you want to write up your own rules defining how much process control processing is needed for different engineering at different tech levels, I shall read them with interest. But I think it is easier to assume that the process control is included in the listed mass/percentage for engineering components and navigation is included in the bridge percentage.
 
There are a couple more elements that no one is including.

To be fair, the rules sidestep them too, but...

Pattern recognition has turned out to require far more computing power than was thought, even in the '70s.

Voice recognition is still limited in vocabulary. It is relatively speaker-independent, but does require a lot of processing.

Try Dragon or the IBM versions of desktop voice recognition software, even on high-end desktops.

And recognizing visual input is still far from ready for prime time.

We at least have some idea now of the requirements to make these systems available, and big commercial systems are already using these technologies to some extent.

Again the "rules" in traveler do not specifically account for these processes, (T20, and I think GT do talk about interfaces and include these items there.)

The other big one is artificial personalities and true AI. Avatars and personality-driven surrogates and data miners are subsets of this.

Natural-appearing artificial personalities are a ways off. While AI for specific discrete tasks is practical (Deep Blue defeating the reigning world chess champion is a prime example), general-purpose AI is still a few generations of hardware away.

Finally, Moore's law has hit a bump. We are reaching some basic physical limits to how much more we can shrink systems.

There are new technologies on the horizon, but anything practical right now only pushes the limits far enough to gain a couple of generations at best.

(Since generations in computer terms can be hard to project, my best understanding is that the 18 months of Moore's law is a useful benchmark.)
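For back-of-the-envelope purposes, that benchmark is easy to turn into arithmetic. A sketch in Python, with the 18-month doubling period as the only assumption:

    # One "generation" = one Moore's-law doubling every 18 months.
    def moore_factor(months, doubling_period=18):
        return 2 ** (months / doubling_period)

    print(moore_factor(36))  # 3 years = two generations -> 4x
    print(moore_factor(60))  # 5 years -> roughly 10x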

The next two rounds of development already press the theoretical limits of existing technology; that is, the devices we will see in 2-3 years are near or at the limit.

The next level, already in R&D, would reach its theoretical limit 3-5 years after that, with a year or two likely for major fabrication facilities to retool.

There are technologies in development that might push the limits, but they are years away from being practical.

The bottom line is that after 30 years working with PCs, my gut still says Traveller computers are massively oversized and underpowered, but I can find enough justification in redundancies, massive inputs, and unknown requirements to have reached an uneasy peace with T20 and CT. These are the systems I use, so I don't worry about the other sets.

(Remember, pocket TV communicators date to the '50s with Dick Tracy, and everyone expected them 20 years ago or more, and virtually every bit of sci-fi written seemed to put independent AI at no later than the turn of this century.)

A true rewrite of the computer rules would be something useful, but remember: seriously alter the computer rules and EVERYTHING else changes. AI fits on hardware small enough to fit in a robot chassis, and robot populations explode, as does automation.

Not far behind that is cybernetics. Allow them as practical tech and the Traveller universe becomes massively different from the one we love.
 
'Clutter expands to fill the available space.'

There is a tendency in computing to 'put it in there because we've developed it and there's room for it'.

XP and Office 2003, for example, do essentially the same job as Win 95 and MS Works (WP, SS, DB, etc.), but for a massive increase in processing power. Sure, you can now use a computer without needing to spell - or even think - and you can now play music on your computer instead of your record player, but the power is not strictly needed for writing or calculating. Maybe it just makes it easier - project that to voice interfaces and AI.
Also, you can no longer use dial-up effectively on the internet - not because your research is any more complex, but because the thousands of animated advertising graphics (that are put there because now they CAN be) slow your system to a crawl. You have to update because of the cluttered environment. (Perhaps starship communications will have to sift through commercial advertising junk!)

If Moore's Law hits the buffers shortly, the inexorable laws of supply and demand will make computers balloon in physical size. At present, we make our computers (at least personal computers) small so they fit on the desk. When computers are fitted into your house as standard, they could be refrigerator size or bigger, with a desk or hand terminal. I have no doubt that someone will develop a host of 'must-have' software/hardware to fill that refrigerator space in pretty short order.

Starships are even bigger, so the computer designers have even more space available to fill!

I think shipboard computers will be as big as they have room for, regardless of petty details such as what is actually needed to run the ship.
 
Originally posted by Mr TeK:
Try Dragon or the IBM versions of desktop voice recognition software, even on high-end desktops.
I have a friend who runs a company setting this stuff up for people who need it for medical reasons. I have seen her sit across the room and command her laptop to do a variety of very useful things, and the recognition rate is *very* high. But it was set up by someone who knows the software and who took the time to train it.

Dragon out of the box isn't anywhere near as good as it can get.

And recognizing visual input is still far from ready for prime time.
And by far from ready, you mean 'only a few years off in the commercial sector'. They've already got VWs that can drive themselves on roads, and I've seen a DARPA Hummer do the same things. They have that annual robot vehicle race, and in a few short years they've actually got vehicles finishing a tough and demanding course entirely on their own.

Figure within 10 years, unless there are legislation issues, you'll see auto-driven vehicles. The technology isn't far off at all.


Finally, Moore's law has hit a bump. We are reaching some basic physical limits to how much more we can shrink systems.
Common sense tells us this has to be the case, but every time they say this, someone comes up with a new technology to beat the prediction of this law collapsing. The latest was a development of graphene transistors that can be reliably (though not yet commercially) produced. They say these will be commercially viable in the 2015-2025 period. And they can currently get down to a small ring of atoms, but they think they can get down to one atom. And then, on another flank, we have quantum computers with qubits.

I think we're a long way from the point where Moore's Law collapses.

The bottom line is that after 30 years working with PCs, my gut still says Traveller computers are massively oversized and underpowered, but I can find enough justification in redundancies, massive inputs, and unknown requirements to have reached an uneasy peace with T20 and CT. These are the systems I use, so I don't worry about the other sets.
The problem here is that I don't think anyone builds in the extra inputs, the redundancies, etc. at the expense of performance. They throw them on top of performance because they're nearly free to add on nowadays (small space, small overhead). But this is not true in your model of the Traveller computers. For their size, they *grossly* underperform.

And it gets even worse when you talk about portable computing, which isn't one of those places where you are going to stick massive redundancies, five thousand inputs, and so forth. Your primary concerns are light weight and sufficient durability. Traveller portable computing falls short of even what we can do today.

Traveller computing is broken. Badly.

The nice thing about fixing it is that it doesn't necessarily mean you solve the AI problems in trivial amounts of space - you can still restrict your bots (at least the highly functional AIs).

But my hand computer ought to be a bit better than an abacus, and my 80286 ought not to be able to run circles around my Model 1. :0)
 
I see nothing wrong with trav computers
a Model 1.0 computer isn't just a box with CPUs and memory and stuff
it's all the controls spread throughout the ship...embedded controllers in nearly every bit of gear...cabling...power filters etc...cooling systems...

and so what if an entire ship's control system doesn't match up in performance to an Xbox; computers aren't very central to the game anyways...it's only an issue in ship/vehicle design...which is so generalised and fuzzy in such details as power usage and waste heat and lots of other things that even a rivet counter like me shouldn't obsess about it...we don't even know for certain ALL of the functions such systems would be called on to do. Comparing fictional avionics to a single computer seems a waste IMHO.

Perhaps there are cultural reasons for not making things tinier...or ergonomic reasons...or financial reasons; we can make cell phones soooooo much tinier, but we don't...instead, we add silly features into phones like cameras and video games.

From what I've seen in computer stores over the years...software bloats up to demand more and more computer power to do the same job in the consumer market, just to make it 'prettier'.
I don't see why this trend would not apply to Trav...

Megacorp software companies' upgrades *require* more powerful computers, thus forcing users to purchase new hardware...I doubt that would stop in Trav either, unless the megacorp wants to dry up its revenue stream. Perhaps adding a new *feature* that is incompatible with older versions (which will become unsupported in time) just to force users to upgrade software, forcing users to upgrade hardware which only has bloated software capable of running on it?

out of curiosity...is there something like Moore's Law concerning the number of lines of code increasing every so many months?

hmmmmmmmm my rant has strayed from the main argument...computer volumes and power don't matter so much except for ship design...and they are not *computers* as we think of them, but entire avionics packages including (depending on ruleset) sensor packages and control communication systems between every piece of 'smart' equipment.

that's my opinion, anyways
 
Kaladorn:

Redundancy in process control/ship's computers is a lot more complex than redundancy in database or file servers. And actually a lot different.

In file/database servers it means you run x servers (x > 1), and if one drops dead, the combination of redundant servers and the retry count built into the data protocols handles the rest.
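A minimal failover sketch in Python (names are hypothetical; I am assuming interchangeable replicas and a fixed retry budget, which is all the detail the paragraph above gives):

    # Try each redundant replica in turn, retrying up to the
    # protocol's retry count before giving up for good.
    def query_with_failover(replicas, request, retries=3):
        for _ in range(retries):
            for replica in replicas:
                try:
                    return replica.handle(request)
                except ConnectionError:
                    continue  # this replica is dead; try the next one
        raise RuntimeError("all replicas failed within the retry budget")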

In process control the computers often make use of a "voting" mechanism where all systems' results get compared and the majority wins. They also require detecting when the "leading" system dies.
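A minimal sketch of that majority vote (my own illustration; a real voter also tracks which channel is currently leading and detects its failure, which this leaves out):

    from collections import Counter

    # The value reported by most redundant channels wins; no strict
    # majority means a voter fault that needs operator attention.
    def vote(readings):
        value, count = Counter(readings).most_common(1)[0]
        return value if count > len(readings) // 2 else None

    print(vote([101.3, 101.3, 98.7]))  # third sensor drifted -> 101.3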


======================

Driving a car is rather easy and has been done on Autobahns since the early '90s. The new systems add land navigation, automatic route generation, and detection of spontaneously appearing elements. But that is a far cry from complex optical recognition; the system only realises "something is there, size this and that", not "there is a cat". Detecting/identifying faces still has a lot of problems even under good circumstances (cooperative user, fixed positions as in biometrics) and is basically unworkable currently.

Speaker-dependent, training-requiring voice recognition has a 95-98 percent success rate (roughly 2-5 errors per 100 words), about the same as an average user typing a text. Same for scanning printed text. Good enough for most jobs.

Moore's Law has actually collapsed. The latest set of processors is cheating by using two independent cores. The manufacturers have run into a wall at around 3.2 gigahertz and the current width of data paths. This is not an Intel/AMD-only problem; even capable CPU designers like IBM (Power) and SUN (Sparc) have run into it and use multi-cores. Not that multi-cores and closely coupled processors are new. The transputer architecture from the 1980s had them, and IBM mainframes in the late '90s used fault-tolerant multi-CPU boards.
 
Hi!

Maybe it's not a good idea anyway to compare real-world computers and Traveller stuff.
Nearly all Traveller computers are boxes used for controlling very fictional things, namely starships usually powered by equally fictional fusion plants and moved through space by magical devices like thrusters and jump drives.
As such, their computing power, needed resources, or any other properties are highly fictional, too.
If MWM had defined computers to be as big as 25% of the drive section, that's just the way it would be in the game, and there would hardly be a reason to discuss it.

As such it was - IMHO - just very wise NOT to specify any "hard" computer properties in the rulesets which would be comparable to some non-fictional real-world stuff.
Well, except of course volume and power requirements, but those are completely useless without an absolute measurement of computing power.

So, really, as we have no absolute and comparable information about the computing power of Traveller computers, how could we ever discuss whether they are "underpowered"?

Once you start to deal with simulation models for partly chaotic processes (the best example is climate simulation), even today's supercomputers can only get along with the available storage and performance by finding humble compromises between calculation time and problem resolution.
Solving a complex simulation problem (like climate, supernovas, or even controlled fusion reactions) in real time is just science fiction...but there we're back in Traveller.
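To make the compromise concrete, here is a rough sketch (assuming a 3-D grid and an explicit timestep that must shrink along with the grid spacing; both are my assumptions, not anything from a real climate code):

    # Refining a 3-D grid by a factor r multiplies the cell count by
    # r^3 and, with a proportionally smaller timestep, total work by
    # roughly r^4 - which is why resolution gets compromised first.
    def relative_cost(r):
        return r ** 4

    print(relative_cost(2))   # 2x finer resolution -> ~16x the work
    print(relative_cost(10))  # 10x finer -> ~10,000x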

regards,

TE
 
Oh, and hand computers:

The Nokia N770 and N800 series are rather close to Traveller "hand computers". They are currently where desktop systems were around 1998-2000, about the level of a PII/400 with 256MB RAM and a UNIX operating system. Useful but not very powerful.
 