
Data Storage Media?

EntilZha

One thing I'm not clear on and can't seem to find clarification on...what do the citizens of the Imperium (and elsewhere) use to store their data? A lot of beings in Traveller have hand computers, but what kind of removable media do they use? Is it a continuing evolution of CD/DVD technology? Microtapes like on Star Trek TOS or optical chips like on Star Trek TNG? Data crystals like on Babylon 5? Rectangular memory tablets like those in Arthur C. Clarke's "3001: The Final Odyssey"? Or is it just an IMTU issue, with the referee using whatever he feels would work best?
 
It's something to do with holocrystals, something I thought in 1989 was a bit cacky. Apparently, according to the DGP article on hand computers, a holocrystal has about the same storage space as a 1GB SD card.

It's one of those areas where Traveller has not kept up with technology - 1GB in 1977 and 1989 was, like, a fantasy for portable electronics.
 
The ultimate form of storage is with atoms representing information. One could for instance store information on a strand of DNA. DNA is composed of 4 nucleotide bases, and DNA molecules can be synthesized and inserted into living cells. You can't beat that for information storage.
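As a rough sanity check on that claim, here's a back-of-envelope sketch. The numbers are my own illustrative assumptions (not from any Traveller source): an average nucleotide mass of roughly 330 g/mol, and a perfect encoding of 2 bits per base.

```python
# Back-of-envelope DNA storage density. All constants are
# illustrative assumptions, not canon.
AVOGADRO = 6.022e23            # particles per mole
NUCLEOTIDE_G_PER_MOL = 330.0   # approximate average nucleotide mass
BITS_PER_BASE = 2              # log2(4): A, C, G, T

bases_per_gram = AVOGADRO / NUCLEOTIDE_G_PER_MOL   # ~1.8e21 bases/g
bytes_per_gram = bases_per_gram * BITS_PER_BASE / 8

print(f"{bytes_per_gram:.2e} bytes per gram")      # ~4.6e20: hundreds of exabytes
```

Hundreds of exabytes per gram, which is why "you can't beat that" holds up even against the most optimistic holocrystal.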
 
p. 227 of the T20 THB lists "Experience Data Storage" which I would adapt for ship computers and robots. Don't have the other rules in front of me and IIRC they didn't specify data storage.

The earlier TLs IMTU mostly follow Earth: punch cards, tape, floppy disks, CD form factors, hot-swappable memory, etc. After that, start plundering from sci-fi, including the memory for HAL and the Enterprise-D, and holocrystals from B5, for example. I'm not 100% sure what synaptic memory would be, aside from references to scientific models of how synapses work in humans. How it would differ in look I'm not sure on a busy Friday. >.<

As for actual storage capacity, I'd stick more with game capability, i.e. the number of programs (or XP, in the case of T20) that can be stored, rather than actual amounts.

Of course with all that Vilani influence it might all be 1970's faux wood-grain VAX terminals complete with choice of either green or orange text on black background, 1/2" reel-to-reel tape, and chiclet keyboards. Might explain the size of ship computers. ^_^

Casey (who suddenly feels like seeing if ye olde Microvision still works)

[EDIT]linkage, general cleanup[/EDIT]
 
Originally posted by Elliot:
It's something to do with holocrystals, something I thought in 1989 was a bit cacky. Apparently, according to the DGP article on hand computers, a holocrystal has about the same storage space as a 1GB SD card.

It's one of those areas where Traveller has not kept up with technology - 1GB in 1977 and 1989 was, like, a fantasy for portable electronics.
I know what you mean. I have a 128 MB SmartMedia card and it's smaller than a freakin' Triscuit.
 
As was said above, DNA is getting into the ballpark of the theoretical limits. There is a proposal to use valence electron spins; that would be pretty much as dense as it gets. (I am real rusty on my microelectronics and nanotech rules, but I seem to recall the "sense" fraction - the stuff needed to read and write the memory - is roughly calculated as two or three times the size of the actual medium used to record.) So a one-molecule "byte" would take on the order of three molecules to read it, or change it, and transmit its value out into a form that could be accessed by the outside world.

I do not think any sort of “holographic” memory would break that limit, and data crystals would almost certainly be based on that kind of technology.

(IBM is playing with making memory out of valence electron spins as we speak.) At a typical density on the order of 10^22 atoms per gram of a typical crystalline material, all human knowledge to date (written or recorded in any format) would fit into something on the order of a paperweight.
For reference, I believe every single known word written in classical Greek, along with translations and commentaries, is sold as a set of a dozen CD-ROMs.
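Running the paperweight claim through the numbers (using my own illustrative assumptions: a 100 g crystal, a typical-ish atomic mass of 60 g/mol, and three atoms per stored bit to cover the read/write overhead described above):

```python
# Rough capacity of a 100 g crystalline "paperweight", assuming one
# atom stores one bit plus ~3 atoms total of sense machinery per bit.
# All constants here are illustrative assumptions, not canon.
AVOGADRO = 6.022e23
ATOMIC_MASS_G_PER_MOL = 60.0   # a typical-ish crystalline material
PAPERWEIGHT_GRAMS = 100.0
ATOMS_PER_BIT = 3              # storage atom plus read/write overhead

atoms = AVOGADRO / ATOMIC_MASS_G_PER_MOL * PAPERWEIGHT_GRAMS
capacity_bytes = atoms / ATOMS_PER_BIT / 8

print(f"{capacity_bytes:.1e} bytes")   # ~4e22: tens of zettabytes
```

Tens of zettabytes: comfortably more than every CD-ROM set of classical Greek, with room left over for the rest of recorded history.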

And yes, even at current computing power, shipboard computers are many orders of magnitude too large.

Cassini is the size of a school bus. It has nineteen industrial processors (seven years ago, at the time of launch, state of the art for industrial processors was roughly a 286). This means it has significantly less computing power than the PC you are posting on this board with. With that limited amount of power, it was able to successfully navigate 900 million miles through several course corrections, including at least 4 slingshots, and arrive at its destination within one second. The course was pre-plotted, but it had to be executed completely autonomously. It survived the asteroid belt and Saturn's rings.

Also remember that navigation was only a SMALL part of the job of those processors. They have to sequence and operate all of the experiments, store and transmit their data, and monitor the functioning of all systems.

Since all natural objects, and even man-made objects until power is applied, are on courses that once computed WILL NOT change, the navigation simply consists of knowing your position and calculating a course around the natural objects to your destination. Then you must calculate the projected courses (and danger-zone courses, a la C. J. Cherryh). Again, a real-world example: in addition to all of its electronics to control its own navigation, radar, and other systems, an AWACS plane can track into the several hundred range of airborne and seaborne targets, and calculate and display position, course, intercept, friend-or-foe, and transponder data simultaneously. That is exactly the data that a starship nav computer in a Traveller setting needs, and I would suspect that if you got into a situation requiring the tracking of more than 10 to 20 thousand objects, you are probably in a more hostile environment than the human crew can comprehend, and are probably in extreme danger anyway.

By Moore's law (computing power doubles every 18 months), the raw computing power to sit on top of your desk, or at least be no bigger than your desk, is already on the drawing board and will be state of the art in 5-10 years.
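The 18-month doubling figure is easy to turn into a projection. This is a sketch only; `projected_power` is my own illustrative helper, not anything from the rules:

```python
# Moore's-law growth: raw computing power doubling every 18 months.
def projected_power(base: float, years: float, doubling_months: float = 18.0) -> float:
    """Power after `years`, relative to `base`, at the given doubling period."""
    return base * 2 ** (years * 12.0 / doubling_months)

# Ten years at an 18-month doubling is roughly two orders of magnitude.
print(projected_power(1.0, 10))    # ~101.6
# A century of uninterrupted doubling would be about 2^66.7, or ~1e20 -
# the kind of headroom behind the "desktop runs the whole ship" argument.
```

Of course, nothing guarantees the doubling continues for centuries; the point is only how fast the curve compounds.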

AI is a whole other kettle of fish, but once the basic programming is worked out, the hardware should be no obstacle: virtually anything but the simplest handheld devices should be capable of containing an AI.

So, your basic desktop in one HUNDRED years should be capable of running ALL shipboard functions, before we even get to the several thousand years to the height of the Third Imperium.

Yes, I am a computer tech, and have programmed and maintained computers for 25 years, so I will stand behind these projections as at least close to reality.

Peace

Mr Tek

(I need to learn not to write these long posts though....)
 
Mr Tek - long posts on this topic are what are needed - and being a Mr Tek in the field is exactly what the long-running debate on computers in Traveller needs. Keep up the good work.

By the way - is there any way in which Traveller-sized computers can be justified, given we are talking about the 57th century?
 
Depends what you mean by "Traveller"!

For a top-of-the-range TL15/fib box:

[code]
Version   Volume   Mass   Price
Bk5       351m3    -      MCr200
MT        35m3     8.8t   MCr43    (3 required)
TNE       14m3     2.8t   MCr12    ( " " )
T4        14m3     2.8t   MCr1.12  (2 required)
[/code]
 
 
Originally posted by Elliot:
Mr Tek - long posts on this topic are what are needed - and being a Mr Tek in the field is exactly what the long running debate on computers in traveller needs. Keep up the good work.

By the way - is there any way in which Traveller-sized computers can be justified, given we are talking about the 57th century?
Sufficient refined handwavium can explain anything. ;)

But the purer the handwavium, the more likely people are to bust a gut laughing at the explanation.



My personal take on the OTU computers listed in CT:Book 2 and CT:Book 5 is that they are unbelievably large, take way too much energy, and have far too little performance. A TL-15 computer the size of a Model 9/Fib would probably have one or two orders of magnitude (at least) more power than all current computing power on Earth (an amount of power that would, centralized as it would be in one system, be more than enough to drive a full-on AI program). In my view, the difference would be far greater than the difference between the best of what we have now and the old ENIAC ca. WWII (remember, ENIAC = TL-6; our best equals early TL-9 (maybe); and TL-15 is twice as many TLs further along).


As a total segue, I notice that Crew Requirements don't require any level of administrative and technical computer staff (I guess they're part of the bridge crew, though I'd imagine that the bridge crew figures given weren't calculated to include computer staff). I find it also interesting to note that despite various nasty problems with their computer systems throughout the various series of Star Trek (and the relative importance of those computers to the ships they drove), there were no apparent computer staff on board.
 
Elliot asks
By the way - is there any way in which Traveller-sized computers can be justified, given we are talking about the 57th century?
The only way I could see it is if the Long Night completely ended all computer storage. Computer research would have to never have been a priority, or, for some reason, semiconductors never caught on. Neither scenario seems very plausible to me, but they are the only explanations I can think of.

Both pretty much preclude robots, as a standalone AI (a REQUIREMENT for an autonomous robot) would be far more computer-intensive than coordinating FLEETS of starships.

The ONLY other possibility is some sort of hand-waving that the equations to calculate jump vectors are so bloody complicated that it takes all the power such a monster complex can provide, but then canon would require a rewrite so that non-jump ships don't need that much electronics to function. (That extra few tons would tip the balance even more toward jump tenders and battle riders, for example.)

I don't know if I am the most imaginative person, but given what we can do today, and the potential in the near future - let alone the potential when we truly have computers designed not by humans but by the first AIs - it far outstrips the sizes and capacities of the Traveller canon.

That leaves one last possibility. Maybe there is a deliberate design limit to prevent the possibility of systems spontaneously becoming AIs. (There are fringe theories that self-awareness is a simple property of enough memory crammed into a small enough physical space. Deep into quantum mechanics and pretty far out to the edge, but I recall somebody proposing such an idea.) In such a case computer capability would be SEVERELY limited, and yes, it would take several systems working together. But today's off-the-shelf systems don't have that problem, and any competent computer tech could assemble a system, in the sizes Traveller uses, that would be capable of all starship operations TODAY.

Radiation Hardening might be an issue, but again exploratory craft, without the bulk of a ship to provide a layer of natural hardening, still have room for the systems they require.

I am not comfortable with any of these solutions, but the other solution is to rewrite all design sequences and canon dealing with computers. While many people see that as the best solution, you destroy virtually all existing ship designs and deckplans, as well as confusing everybody who has been playing all these years.

Probably the best solution would be for an update to T20 that rewrote it, since it at least does not have quite the history, but then again, integrating pre-T20 designs would be even more problematic.

This is my field, but I am far from on the cutting edge of design. (although the leading design experts would likely tell you that my ideas tend to still be conservatively LARGE, and even more compact designs than I have suggested are possible, maybe even within our lifetimes.)

Anyway, this is what I know, or can reasonably project from designs that have been published.

It should be enough to help people start to make sense of the issues. The theory is that in '78 and '79, when PCs were still a novelty and Moore's law was not yet widely known, only those of us deep in the community had any idea things would move so quickly. Except for pattern recognition (a problem that has proven far more complex than imagined), increases in computing power routinely exceed even the wildest estimates of the experts. So the good folks designing Traveller just never realized that computing power would grow so rapidly, and once it was canon, the disruption a change would cause never justified the effort of a rewrite. (Sorry to try to analyze the motives of the designers, but that would be my guess.)

Long and short, any solution now is either going to involve a great deal of hand-waving, or thirty years of work by a great many people gets flushed down the toilet. (By the way, virtually all science fiction suffers that same blindness. Ships' computers are ALWAYS huge complexes that take up entire decks: 2001 and HAL, the computing cores in Star Trek, the computing cores in Aliens, C. J. Cherryh's works, and in any other science fiction - movie, book, or whatever - where spacecraft are involved, the ships' computers are HUGE.) Of course, NASA has always shied away from pushing the envelope in computing power - except on the robot missions, where they just could not afford the space.

Peace

Mr TeK

(I have been looking for an opportunity to really discuss these issues. I think a rewrite is ultimately what we need, but the rules don't belong to me, so I don't have to worry about the repercussions.)
 
Rainofsteel

Your desktop is as powerful as the ENTIRE computing power of a major corporation in the 80s.

A tech level 9 computer will have more raw computing power than exists on the Earth at this moment. (That is all but FACT, as such designs are already proposed and likely 10 to 15 years from market, TOPS.)

As for computer techs on Star Trek, that was always relegated to the jack-of-all-trades engineering staff to handle.

BTW, early (and extremely limited) AIs already exist. They are little more than curiosities, being too simple to do much, but some of the learning systems are approaching what would be popularly considered AI (computerized medical diagnostic systems, and computerized troubleshooting systems). Back to Cassini: it has to be aware enough to keep itself functional, so while not self-aware, or an AI in the popular sense, by certain computing standards it would be at the threshold. And they actually call the computerized opponents in computer games AIs. They learn from how you play and grow to stay challenging. (Even the best still have to break a few rules that constrain the human players. They are not QUITE good enough to go head to head, but they are getting close, and that is a SUB-program within a cheap commercial game.)

Also remember that Deep Blue, a room-sized computer, is the undisputed champion chess player in the world, and it is several years behind state of the art. There is ongoing work on systems that can manage to process ALL KNOWN DATA ON stellar and planetary systems, to create a model of the ENTIRE known universe, to study backtracking to understand the Big Bang, and forward tracking to try to settle whether the universe will keep expanding or reach a limit and collapse.

These are systems that either already are up and running, or proposed for the next 10 years or so.

Now, DRIVE computers on the other hand…

I was reading about the current fusion projects. Controlling the beam alignments and wave shapes of the lasers to run the fusion experiments takes an unbelievably complex array of computers to handle in real time, but those will only get smaller over time too.

Monitoring the complex interactions in a fusion plant, a high-tech (higher than ours) maneuver drive, or especially jump drives and jump space, will take lots of raw computing power, but as computers in general get smaller and more powerful, those systems will too.

Mr TeK
 
Originally posted by Elliot:
By the way - is there any way in which Traveller-sized computers can be justified, given we are talking about the 57th century?
It's hard to argue against RainOfSteel, at least as far as the power requirements listed for Book 5 (Book 2 does not list power for computers, or weapons for that matter). Though in T20 the power requirement (and size) make a little more sense as it now includes the avionics, sensors and communications (which used to be part of the Bridge allotment*).

* Though that adds the wrinkle of just what is the Bridge allotment since that part is removed.

Size is a different story.

Traveller computers control ALL activity within the ship: life support, mundane door whooshing, environmental systems, offensive and defensive enhancement, navigation, power routing, drive control, entertainment, etc.

Add in the fact that it's a system that doesn't need any maintenance (i.e. never crashes or fails except when abused, like by firing a ship-mounted weapon at it), and it's easy to make an argument for it being too small.

IMTU the "actual" computer is about half the listed volume with the rest taken up by access to and around it and the various control circuit conduits throughout the ship.

Comparing a fib model, as RainOfSteel does, you need to recall that the size is doubled because you are adding a fully redundant backup for everything - one that is resistant to ship-sized nuclear weapons going off just outside the ship.

As for computer techs in the crew, it's a good idea but not really needed on small ships (less than 1,000 tons). I figure the additional crew for the larger ships includes electronics technicians.
 
I think that the way T20 has explained CT starship computers works well enough, i.e. the CT computer was actually the computer core, sensors, avionics and commo all rolled into one.

There's something else that always bugged me though.
Why not build a fibre optic computer as your main computer in the first place? Why have it as a backup system? ;)
 
I am not comfortable with any of these solutions, but the other solution is to rewrite all design sequences and canon dealing with computers.
another solution is to just ignore traveller computers. computers at the tech level of the shipyard simply are part of the bridge outfit, and every ship has one. this results in no significant difference in either the large warships or the small PC-sized vessels. the only place it becomes an issue is in fighters and other warships of <1000 dtons.
 
IMTU the vast bulk of the 'computer' is actually interface units to allow control of the drives, life support, etc. There is a minimum physical dimension to these components - a human being in a low-tech vacc suit needs to be able to remove/replace the units, and damaged units need to be able to be bypassed (yeah, the old ST gag of running the phasers through the photon torpedo controls...). This has the side effect of allowing the players to scavenge redundant processor cores, memory, etc....
 
Originally posted by Mr TeK: Rainofsteel

Your desktop is as powerful as the ENTIRE computing power of a major corporation in the 80s.
After having worked as a Computer Operator on 80s water cooled IBM mainframes for two years, I'd have to disagree with a blanket statement like that.

The DASD strings we had (in 1996, vintage 1984) provided about 600 gigs of space. The system had an array of Channels (each one its own computer) attached to Storage Controllers (each one its own computer). The data transfer capability over this was, I believe, 4 MB/sec, times 64 Channels, or 256 MB/sec. My PC today may say it has ATA-100 drives, but it doesn't really get that full performance.

It also had three processors, and could get quite a bit done very efficiently.

While its design would have made handling a GUI like Win2k an unbearable pain, for handling massed batch programming and about five hundred users on dumb terminals or PC-simulated dumb terminals, it was quite acceptable. My PC couldn't hope to accomplish the tasks it did.

Then, in 1996, we got a CMOS air-cooled mainframe. Oooh! It took up (if you factor in everything, like the removal of the controllers, electrical conditioners, pumps, and water cooling systems) some 15-20 times less space. It was about the size of a family fridge, and was (with only 2 processors operating on a microcode CPU limiter - IBM was fond of schemes to tune the CPU to precise levels based on how much power the customer was leasing) 50% faster than the old water-cooled machine. We also went from $2000 a month in electrical costs (for cooling and running the water-cooled mainframe) to something less than $200 a month (for the air-cooled version).

The company I'm describing employed only 18,000. There were, literally, hundreds of companies that, in the 1980s, had much better versions of that old 1980s water-cooled mainframe (we only had a low-end version).

So, given that our low-end water-cooled mainframe had, in many respects, far more power than my PC (a 533 MHz machine with approx. 60 GB of storage on two drives), we can assume that a large number of corporations had considerably more processing power than my year-2000 Micron PC.


Originally posted by Mr TeK:
A tech level 9 computer will have more raw computing power than exists on the Earth at this moment. (That is all but FACT, as such designs are already proposed and likely 10 to 15 years from market, TOPS.)

As for computer techs on Star Trek, that was always relegated to the jack-of-all-trades engineering staff to handle.

<snip>

Also remember that Deep Blue, a room-sized computer, is the undisputed champion chess player in the world, and it is several years behind
Deep Blue II, which beat Mr. Kasparov in a later match, has considerably less processing power than Deep Blue. It was just designed better, and is considered to be better than Deep Blue at playing chess.

Let us also not forget that Kasparov beat the original Deep Blue in their first match (in part due to some interference from the IBM team controlling it); only in a later match did the somewhat improved Deep Blue make a comeback.
 
Originally posted by Zutroi:
IMTU the vast bulk of the 'computer' is actually interface units to allow control of the drives, life support, etc. There is a minimum physical dimension to these components - a human being in a low-tech vacc suit needs to be able to remove/replace the units, and damaged units need to be able to be bypassed (yeah, the old ST gag of running the phasers through the photon torpedo controls...). This has the side effect of allowing the players to scavenge redundant processor cores, memory, etc....
That's one of the most reasonable explanations I've heard.

I've always assumed, though, that any particular device that is built to be controlled by a remote computer is going to come with its own port (interface). In some ways I assume things like the main drives and other major components have their own computers built onto them, and that they have external I/O ports for connection to the central ship's computer. But I have no real basis for thinking that other than outright assumption.
 
Originally posted by far-trader:

<snip>

Add in the fact that it's a system that doesn't need any maintenance (i.e. never crashes or fails except when abused, like by firing a ship-mounted weapon at it), and it's easy to make an argument for it being too small.
Ah, massive redundancy . . . that's an interesting supporting idea.

Originally posted by far-trader:
IMTU the "actual" computer is about half the listed volume with the rest taken up by access to and around it and the various control circuit conduits throughout the ship.

Comparing a fib model, as RainOfSteel does, you need to recall that the size is doubled because you are adding a full redundant backup for everything, one that is resistant to ship size nuclear weapons going off just outside the ship.
I believe that silicon based computers would be quite vulnerable to EMP, and I'm familiar with the effect of radiation on computers.

However, at TL-11 and up, we have synaptic and positronic systems, neither of which is a semiconductor system (they're handwavium systems, AFAICT), so there's no reason to suspect they'd be vulnerable to radiation or EMP (other than an artificial vulnerability we create for them).

But I'm prepared to concede that redundancy and armoring may well require substantial extra mass. But not as much as is listed.

Additionally, I was always interested in the fact that the fib-side backup was "immune", effectively, to radiation damage. I couldn't figure out why both primary and backup weren't fib computers, especially since the full fib backup of Model 9 was only 60 MCr more . . .
A Model 9 Fib/Fib computer . . . wouldn't it cost only 120 MCr?

Originally posted by far-trader:
As for computer techs in the crew, it's a good idea but not really needed on small ships (less than 1,000 tons). I figure the additional crew for the larger ships includes electronics technicians.
On a small ship, no. On a big ship, yes.

An "electronics tech" is not a computer systems administrator or computer programmer. The OS systems that are deployed aboard such large systems often come with built-in bugs that even the systems programmers who came up with it can't track down (you should see the lists of obscure technical bugs and issues that affect the various editions and successors of the mainframe OS, MVS; I'm sure its only two pages shorter than the bugs list for Windows). An Electronics tech couldn't hope to track down many problems.

Imagine: The gunners of a capital ship file in and sit down in their assigned seats in a lengthy control room in the bowels of the vessel, ready to direct their batteries. The bridge signals that the exercise is about to get under way, and fire control comes online in preparation for the weapons-free order.

Now, the sensors in each battery's fire control feed data into the Ship's Computer, which in turn displays it at the gunner's workstations.

This time, the gunners take hold of their fire-direction controls, only to discover that the fire-control sensors won't track.

The Senior Electronics Tech of the ship spends a minute examining the diagnostics programs, all of which tell him nothing has gone wrong.

A team of Electronics Techs spends several hours tracing every electronics component and connection between the batteries and the computer and the gunner's workstations, but to no avail.

I hope one of them has Computer-3 or better (not a certainty in a ship that doesn't require that specialty; and even if there were some, if such persons weren't officially identified, they may not get the chance to do the job).

The answer: the driver responsible for allowing the ship's computer to communicate with the battery fire-control sensors had an expired license. There was no one on board assigned or qualified to take care of it, and so an update was never installed. And that's an easy one. There can be much more complicated issues.

Perhaps a new set of drivers was installed - updates for the Jump Drive - which can't be backed out due, say, to an unacceptable danger arising from a recently discovered flaw in the old driver that raises misjump chances. And then these Jump Drive drivers turn out to be incompatible with the battery fire-control sensor drivers. Now what?

Trust me, the Captain will not sit still for, "It'll take twelve weeks round trip to get an update, plus a delay of between 1-2 weeks to program the solution."

Captain says, "You have twelve hours." I hope the hapless soul has Computer-6.

Anyway, what I'm really getting at is that a hugely complicated computer needs a dedicated professional computer staff on hand to make sure the above doesn't arise constantly (and even if they're there, if my existing experience is any indicator, it'll still happen, just less frequently... OK, it depends on who you have taking care of the systems). And if they are present, and professional, they'll have the skill set necessary to produce their own solutions.
 
IMTU the vast bulk of the 'computer' is actually interface units to allow control of the drives, life support, etc. There is a minimum physical dimension to these components - a human being in a low-tech vacc suit needs to be able to remove/replace the units, and damaged units need to be able to be bypassed (yeah, the old ST gag of running the phasers through the photon torpedo controls...).
the modern navy in fact takes exactly this approach in some aspects of engineering. many components are quite old fashioned, simply to make damage control repairs easier. not a bad rationalization. nevertheless a fully miniaturized system with multiple backups is likely to be cheaper and more reliable than any ENIAC.
 