
Big computers are no problem

So essentially the lower tech computer requires more space and energy to run? Does something like its interface also potentially affect how well you can read info, or how much space that takes (since Cepheus Engine/Mongoose Trav 1e notes that more advanced terminals can have holographic interfaces that take up far less space than physical keyboards and CRT monitors)?

That relationship was established in MegaTraveller, and persists through TNE & T4.

It wasn't in CT nor T20.
 
That relationship was established in MegaTraveller, and persists through TNE & T4.

It wasn't in CT nor T20.

Not explicitly. But the Model/1 is available at TL5, and the LBB3 definition of TL5 (enabling "radio" but not "television") suggests that at that TL it won't be built using transistors, and that the user interface will probably involve Nixie Tubes and a Teletype printer. A library of printed manuals with tables and charts (microfiche, maybe) would be needed to decode the output.

Serious dieselpunk starship designers use mechanical/hydraulic computers, though. They were good enough for battleship gunlaying, why not use them to calculate orbits and Jump navigation? You'd need Computer Operators based on the computer tonnage, like Engineers but with Mechanical skill.

...and then cross this with the Flash Gordon thread. :)
 
I'm a bit late to the discussion, but here are my experiences with the one big machine I was a console operator on.

A sub-contractor hired some university students to help with monitoring a Cray YMP-2 and other gear. While the Cray was the largest computer I ever dealt with, the tape drive data tower was substantially larger, along with the multiple cabinets full of hard drives. It was also liquid cooled. The entire computer room (we were in a smaller room to one side, with windows letting us see all of the computers and accessories) was over 100' wide by over 100' deep.

It modeled tornadoes and hurricanes. Depending on the size of the data set pulled from the tapes, a run could take a few hours to several days. Other data sets were kept on the hard drives.

I don't remember the specs for the I/O parts.
 
Cray YMP-2

I don't remember the specs for the I/O parts.

The YMP series debuted in 1988 - and while the laptop/desktop many of us use to browse this board is almost certainly faster, our I/O throughput is far smaller, due to not having multiple I/O devices running in parallel, and our actual processing capacity is also smaller (due to not running multiple processors in parallel).

The sheer amount of data even the first Crays (1976 Cray-1, 1985 Cray-2) could crunch would still make even the best home computer burn up in a cloud of smoke - the entire hardware architecture, as well as the processing structure are completely different, with ours designed for pretty low data-processing volumes.

It's not just speed, nor memory capacity, it's how you access and work with it - and the kind of data-manipulation needed for interstellar jump calculations requires more energy, and thus generates more heat, and thus requires more cooling/heat sinks/etc than a PC does, by far.
 
The YMP series debuted in 1988 - and while the laptop/desktop many of us use to browse this board is almost certainly faster, our I/O throughput is far smaller, due to not having multiple I/O devices running in parallel, and our actual processing capacity is also smaller (due to not running multiple processors in parallel).

That last part is not true: the Y-MP C90 (a later member of the family than the original 1988 machine) performed in the tens or very low twenties of double-precision GFLOPS. Modern game-oriented GPUs - even cut down to 1:8 or 1:16 fp64:fp32 as they are - are in the hundreds of GFLOPS and have much higher bandwidth to their memory. Professional GPUs, which are usually 1:2 fp64:fp32, are getting close to 10 teraflops now.

I also do not think our I/O throughput is smaller, even with a smaller number of attached devices, at least if you are referring to a Y-MP C90 with the E backplane and IO subsystem.
 
due to not having multiple I/O devices running in parallel, and our actual processing capacity is also smaller (due to not running multiple processors in parallel).

The sheer amount of data even the first Crays (1976 Cray-1, 1985 Cray-2) could crunch would still make even the best home computer burn up in a cloud of smoke - the entire hardware architecture, as well as the processing structure are completely different, with ours designed for pretty low data-processing volumes.

And to expand on that a little: I think the Y-MP era supercomputers got a reputation for strong IO because the systems had DMA back when it was not very common for PCs (or even workstations) to have a bunch of DMA-capable IO devices attached to the system. Nowadays, DMA is commonplace and replicates that functionality at much higher clock and bus speeds than the Y-MP had.

The Y-MP C90 could take up to 16 CPUs, though there were variants with just 2 CPUs (similar to an Apple Watch, though the Apple Watch has more available system memory and higher clock speeds, and may also be a vector machine). Even 16 CPUs is not really that many; there are consumer-oriented PCs these days with 32 cores / 64 threads available, and again, at much higher clock speeds.

It's hard to know how to tackle the "jump computations" comparison without knowing what those computations are. For targeting, I imagine it's solving a variant of Lambert's Problem, which is probably going to be far faster and more efficient on a 2019-era machine than it would be on a Y-MP.
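
For anyone curious what a "jump targeting" computation might look like in practice, here is a minimal sketch of a single-revolution Lambert solver in Python, using the universal-variable formulation (as in Bate/Mueller/White or Curtis). Nothing here comes from a Traveller rule set: the function names, the bisection tolerances, and the example vectors are all illustrative, and the sketch ignores multi-revolution and near-180° transfer cases. The point is that this class of problem runs in a blink on current hardware.

```python
import numpy as np

def stumpff_C(z):
    """Stumpff function C(z)."""
    if z > 0:
        return (1 - np.cos(np.sqrt(z))) / z
    if z < 0:
        return (np.cosh(np.sqrt(-z)) - 1) / (-z)
    return 0.5

def stumpff_S(z):
    """Stumpff function S(z)."""
    if z > 0:
        sz = np.sqrt(z)
        return (sz - np.sin(sz)) / sz**3
    if z < 0:
        sz = np.sqrt(-z)
        return (np.sinh(sz) - sz) / sz**3
    return 1.0 / 6.0

def lambert(r1_vec, r2_vec, tof, mu, prograde=True):
    """Velocities at r1 and r2 for a transfer taking time `tof` in a field
    with gravitational parameter `mu` (single revolution only)."""
    r1_vec, r2_vec = np.asarray(r1_vec, float), np.asarray(r2_vec, float)
    r1, r2 = np.linalg.norm(r1_vec), np.linalg.norm(r2_vec)
    dnu = np.arccos(np.clip(np.dot(r1_vec, r2_vec) / (r1 * r2), -1.0, 1.0))
    # Pick the transfer direction from the sign of the orbit normal's z-component.
    if (np.cross(r1_vec, r2_vec)[2] < 0) == prograde:
        dnu = 2 * np.pi - dnu
    A = np.sin(dnu) * np.sqrt(r1 * r2 / (1 - np.cos(dnu)))  # degenerate near 0 or 180 degrees

    def y(z):
        return r1 + r2 + A * (z * stumpff_S(z) - 1) / np.sqrt(stumpff_C(z))

    def F(z):
        # Universal-variable time-of-flight equation minus the requested tof.
        return (y(z) / stumpff_C(z))**1.5 * stumpff_S(z) + A * np.sqrt(y(z)) - np.sqrt(mu) * tof

    # Bracket the root by stepping away from z = 0, then bisect.
    step = 0.1
    if F(0.0) > 0:                       # hyperbolic transfer: root lies at z < 0
        z_lo, z_hi = -step, 0.0
        while F(z_lo) > 0:
            z_lo, z_hi = z_lo - step, z_lo
    else:                                # elliptic transfer: root lies at z > 0
        z_lo, z_hi = 0.0, step
        while F(z_hi) < 0:
            z_lo, z_hi = z_hi, z_hi + step
    for _ in range(100):                 # plain bisection is plenty for a sketch
        z_mid = 0.5 * (z_lo + z_hi)
        if F(z_mid) > 0:
            z_hi = z_mid
        else:
            z_lo = z_mid
    z = 0.5 * (z_lo + z_hi)

    # Lagrange coefficients give the terminal velocity vectors.
    yz = y(z)
    f, g, gdot = 1 - yz / r1, A * np.sqrt(yz / mu), 1 - yz / r2
    v1 = (r2_vec - f * r1_vec) / g
    v2 = (gdot * r2_vec - r1_vec) / g
    return v1, v2

# Hypothetical example: two Earth-orbit position vectors (km), one hour apart.
mu_earth = 398600.0   # km^3/s^2
v1, v2 = lambert([5000, 10000, 2100], [-14600, 2500, 7000], 3600.0, mu_earth)
print("departure velocity (km/s):", v1)
print("arrival velocity (km/s):  ", v2)
```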
 
How do modern supercomputer mainframes and such compare to those from the 1970s or 1980s?

The fastest as of November 2018 (DOE's Summit supercomputer) is 150-200 petaflops, or around 9 to 12 million times faster than a fully configured 1991-era Y-MP C90, or about 935 million to 1.25 billion times faster than the 1976-era Cray-1.
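
To show where those ratios come from, here's the quick arithmetic, assuming peak figures of roughly 16 GFLOPS for a fully configured Y-MP C90 and roughly 160 MFLOPS for a Cray-1 (approximate peak ratings, not sustained throughput):

```python
# Back-of-the-envelope check of the ratios quoted above.
summit_low, summit_high = 150e15, 200e15   # Summit, ~150-200 petaflops
ymp_c90 = 16e9                             # ~16 GFLOPS fully configured (approx.)
cray_1 = 160e6                             # ~160 MFLOPS (approx.)

print(f"vs Y-MP C90: {summit_low / ymp_c90:.2e} to {summit_high / ymp_c90:.2e}x")
print(f"vs Cray-1:   {summit_low / cray_1:.2e} to {summit_high / cray_1:.2e}x")
# roughly 9.4e6 to 1.25e7x, and 9.4e8 to 1.25e9x, matching the figures above
```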
 
It depends a little on what "wasted" means, too. It can be very computationally intensive to do voice recognition and natural language processing while actually getting very little information to the computer to process. "What is the square root of 691?" takes orders of magnitude more computation to recognize the meaning of the spoken question than it does to compute the square root.

That's not really universally true, though. Running post-Minkowskian n-body simulations at high precision (which would be either a nav or predict function) really would take much more computational power than 1980s systems had available for real-time applications. Even decoding a single H.265 video frame is something that would take a 1980s-era computer - if it were even possible given memory constraints - months to do, something that is done easily in real time now, and much of that is purely availability of computational resources. I think in general, when the problem is a fundamentally computational one, the wasted resources aren't a major factor. Performing 100 billion floating-point matrix inner products / matrix-vector transforms really is going to be millions of times faster on a modern computer (with GPU) than on an 8087-equipped PC.

I'd expect that the performance critical parts of whatever software needs to be run wouldn't be that inefficient - but I think it's fair game to describe whatever limitation is needed to justify the larger physical computer.


Well what I mean by inefficiency is that programmers don't code 'close to the metal' like they used to in terms of optimizing their programs, because it's not an economically viable use of their time.

So you often don't get 100x performance just because the machine is 100x faster - the hardware allows much 'sloppier' code and lets the programmer/dev team concentrate on complexity of process.

In game terms I'd model that by allowing the purchase of 'commodity'-priced software alongside the MCr costing of CT programs, which represents 'avionics-ready' code certified for no-fault ship-critical execution.

Also works for the would-be software tycoon that thinks they will be selling programs for big bucks - I would strongly suggest the market would only support 4- to 5-digit pricing for home-grown ship programs at 1:100/1:1000 ratios, with similar chances for a bug on execution (something like the sketch at the end of this post).

As far as 'larger' is concerned, I pretty much avoid getting into miniaturization/power/processing-ratio minutiae that has no gameplay value and that my players really don't care about, by just chalking up the CT/HG EP and tonnage to sensors and interfacing with major ship systems.
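
To make that concrete, here is one hypothetical reading of the 1:100/1:1000 idea in code. The price points, the per-execution bug chance, and the MCr1 example program are all made up for illustration, not from any published rules:

```python
import random

# A hypothetical reading of the house rule above: home-grown or "commodity"
# ship software sells at 1:100 or 1:1000 of the CT list price, and carries a
# matching 1-in-100 or 1-in-1000 chance of a bug on each execution.

def commodity_program(list_price_cr: int, ratio: int):
    """Return (discounted price in Cr, per-execution bug chance)."""
    return list_price_cr // ratio, 1.0 / ratio

def run_program(bug_chance: float) -> bool:
    """Simulate one execution; True means it ran cleanly."""
    return random.random() >= bug_chance

# Example: a MCr1 (1,000,000 Cr) navigation program bought as commodity code.
price, bug_chance = commodity_program(1_000_000, ratio=100)
print(f"Price: Cr{price:,}, bug chance per run: {bug_chance:.1%}")
print("Ran cleanly" if run_program(bug_chance) else "Program crashed mid-jump!")
```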
 
Interesting.

The largest number of CPUs in the Cray computers I worked with was 8.

A couple of years after I left that job they pulled them out and replaced them with - well, I can only remember the model name 'Violet'. It looked like a larger workstation to me, but apparently it was considered better than the Crays we had.

I remember the YMP-2, but not the other two models.
 
Well what I mean by inefficiency is that programmers don't code 'close to the metal' like they used to in terms of optimizing their programs, because it's not an economically viable use of their time.

So you often don't get 100x performance just because the machine is 100x faster - the hardware allows much 'sloppier' code and lets the programmer/dev team concentrate on complexity of process.
By the same token, more of the problems that need optimization have been optimized. Very fast linear algebra subroutines have been written, and you don't need to know how to solve the problem "close to the metal"; you only need to know the problem that you are trying to solve. Someone else has figured out a more rapid way to walk a volume partitioning tree or numerically evaluate ODEs just as fast as you would be able to if you spent a month writing shader code or assembler (or VHDL if that's what you mean by "close to the metal").
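
To illustrate, here's an informal comparison of a pure-Python matrix multiply against numpy's matmul, which dispatches to an optimized BLAS someone else already wrote. Timings will vary wildly by machine, and the naive loop and problem size are just for demonstration; the point is the gap, not the numbers.

```python
import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def naive_matmul(a, b):
    """Textbook triple loop in pure Python, no hand optimization."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

t0 = time.perf_counter(); c1 = naive_matmul(a, b); t1 = time.perf_counter()
t2 = time.perf_counter(); c2 = a @ b;              t3 = time.perf_counter()

print(f"naive loop: {t1 - t0:.3f} s, BLAS-backed matmul: {t3 - t2:.5f} s")
print("results agree:", np.allclose(c1, c2))
```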

But in Traveller terms it isn't worth debating; this is one of those fortunate problems where you can describe the economics that lead to big computers however you want and it will be a plausible reason. My point is simply that many of these problems actually got faster than the relative hardware power would indicate, because these days the performance-critical modules have been implemented in hardware - for example, most PCs have video deblocking and motion compensation implemented in hardware. The problem has been solved so close to the metal that it's in the metal itself.
 
The fastest as of November 2018 (DOE's Summit supercomputer) is 150-200 petaflops, or around 9 to 12 million times faster than a fully configured 1991-era Y-MP C90, or about 935 million to 1.25 billion times faster than the 1976-era Cray-1.
Does that mean these supercomputers are more what we should compare a ship mainframe computer to?

Also was reading this on wikipedia: https://en.wikipedia.org/wiki/Computer_terminal#Contemporary
Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today. Using the monitor and keyboard, modern operating systems like Linux and the BSD derivatives feature virtual consoles, which are mostly independent from the hardware used.

When using a graphical user interface (or GUI) like the X Window System, one's display is typically occupied by a collection of windows associated with various applications, rather than a single stream of text associated with a single process. In this case, one may use a terminal emulator application within the windowing environment. This arrangement permits terminal-like interaction with the computer (for running a command line interpreter, for example) without the need for a physical terminal device; it can even allow the running of multiple terminal emulators on the same device.
Do modern mainframes today rely on terminals, or do they use something else to allow inputting and displaying of information?
 
Does that mean these supercomputers are more what we should compare a ship mainframe computer to?

It's difficult to imagine how much computing power will be available thousands of years from now. Something that would fit in a starship might have quintillions of times the processing power of all of the computers on earth in 2019 put together. Given the rate things are progressing, there should be enough computational resources available on any ship to run everything needed simultaneously.

Traveller is a bit retro, though. You could explain it as Earth being one of the very few worlds that experienced run-away computer technical development and most worlds peaked in the megaflops to gigaflops and megaword to gigaword levels, and for whatever reason Terran computers never gained an economic foothold in the Imperium.

I personally like the idea of several tons of computational equipment onboard a starship with the overall performance of a VAX-11/780, but it's not easy to explain that in the game.
 
Does that mean these supercomputers are more what we should compare a ship mainframe computer to?

Also was reading this on wikipedia: https://en.wikipedia.org/wiki/Computer_terminal#Contemporary

Do modern mainframes today rely on terminals, or do they use something else to allow inputting and displaying of information?

The most modern supercomputers are actually Beowulf clusters of high-throughput multicore servers in a single rack, running as if they were a single mainframe, save that a bunch of the system busses are 1000BASE-T over twisted pair...
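
As a toy illustration of that "many boxes acting as one machine" idea, here's a sketch using mpi4py (a real MPI binding for Python). The script name, the problem, and the process count are hypothetical; real clusters run this kind of thing over their interconnect of choice.

```python
# Run with something like: mpirun -n 8 python cluster_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this process's index across the cluster
size = comm.Get_size()     # total number of processes on all nodes

# Each rank works on its own slice of a (hypothetical) big problem.
chunk = np.arange(rank, 10_000_000, size, dtype=np.float64)
local_sum = chunk.sum()

# MPI gathers the partial results over the interconnect (Ethernet here).
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total:.1f}")
```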

It's likely that that kind of function will only grow in frequency.

The biggest limit (transistor density) has already been hit.
The next biggest (physical chip size) limit probably hasn't been hit, but is getting harder to increase.
The next limit is cooling. We're nowhere near the limit at the chip... but the solutions for increasing transistor density are making it harder to keep the chips themselves cool, as more and more 3-d is used in making the chip.
 
The most modern supercomputers are actually Beowulf clusters of high-throughput multicore servers in a single rack, running as if they were a single mainframe, save that a bunch of the system busses are 1000BASE-T over twisted pair...

It's likely that that kind of function will only grow in frequency.

The biggest limit (transistor density) has already been hit.
The next biggest (physical chip size) limit probably hasn't been hit, but is getting harder to increase.
The next limit is cooling. We're nowhere near the limit at the chip... but the solutions for increasing transistor density are making it harder to keep the chips themselves cool, as more and more 3-d is used in making the chip.

I don't think we're at the limit of transistor density; despite the teething problems Intel's 10nm node has had, there's a roadmap out to at least 3x the current density. Also, for the most part logic rests on a single layer, with the remainder being interconnect; eventually the other layers will be logic (actual transistors) as well, which will dramatically increase volumetric density.

Density isn't the only factor, either; the ability to reliably manufacture large-surface-area chips economically has also been improving fairly dramatically. Quadrillion-gate ICs can be achieved with either a density increase or an area increase. I don't really think we're anywhere near the practical limit yet.
 
And of course, that's not even taking into account the improvements quantum computing could bring - it's still a young industry, but could potentially deliver a new breakthrough in computing speeds.
 
And of course, that's not even taking into account the improvements quantum computing could bring - it's still a young industry, but could potentially deliver a new breakthrough in computing speeds.

That is the problem with science fiction in general, it tends to dramatically overestimate the advances in some technologies, and dramatically underestimate others. Earth will probably be at grandfather/ancients level computation ability when we get to the 3I era in reality, and still be a single-star-system species with no interstellar travel at all.
 
Three nanometre factories are currently under construction in Taiwan.

I knew that carbon was considered a cheaper substitute for silicon - what I think we used to jokingly call plastic chips; I believe they're now thinking of diamond, which in that form can certainly tolerate high temperatures.

The performance of a computer is certainly affected by how many times it can cycle in a given period, how many instructions can be carried out per cycle, the byte size of the instructions, access to the data, and whatever other aspects can be tweaked - such as whether the programme can be split up and channelled through multiple physical cores and virtual threads.
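
Those factors literally multiply together. A rough sketch, with entirely made-up figures for an imaginary chip:

```python
# Peak throughput is roughly the product of the factors listed above.
# All the values below are illustrative, not a real processor.
cores         = 8        # physical cores
clock_hz      = 3.5e9    # cycles per second
fma_per_cycle = 2        # fused multiply-adds issued per cycle per core
simd_lanes    = 8        # 64-bit lanes in a 512-bit vector unit
flops_per_fma = 2        # one multiply + one add

peak_flops = cores * clock_hz * fma_per_cycle * simd_lanes * flops_per_fma
print(f"theoretical peak: {peak_flops / 1e9:.0f} GFLOPS")   # ~896 GFLOPS
```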

At the moment, there's a race to the smallest transistor; once that's reached, the focus will shift to cutting power usage and more sophisticated chip architecture, and then to making the programmes themselves more efficient.
 
Here is something I wonder. How fast can these computers warm up or boot up?

Or even just switch from programs?

If they're 1970's-ish in style/substance, are they sorta slow?
 
Here is something I wonder. How fast can these computers warm up or boot up?

Or even just switch from programs?

If they're 1970's-ish in style/substance, are they sorta slow?

Considering the nature of large vehicle computers, they should be running at all times the ship is operating. Backups and program switching should be near instantaneous.

Also, consider the length of CT space turns at almost 17 minutes. For a computer, that's an eternity. Program switching should, at most, take a few minutes.

The way I see it, the boot process would be part of pre-flight checks if the computer were shut down during the stay, and would be lengthy due to powering on and starting the cooling systems. The system only gets shut down when the crew needs to go inside it to repair it, inspect it, or upgrade the equipment. Backup subsystems would be in 'hot standby' during flight.
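
For what it's worth, "hot standby" can be sketched as a backup that mirrors state and watches a heartbeat from the primary, so taking over never requires a cold boot. Everything in this toy example (timings, class names, the state model) is hypothetical:

```python
import time

HEARTBEAT_TIMEOUT = 2.0   # seconds without a heartbeat before failover

class StandbyComputer:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False
        self.mirrored_state = {}          # continuously synced from the primary

    def on_heartbeat(self, state: dict):
        """Primary pings periodically, shipping its running-program state."""
        self.last_heartbeat = time.monotonic()
        self.mirrored_state = state       # stay warm: mirror the primary

    def poll(self):
        """Called on a timer; promotes the standby if the primary goes silent."""
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True            # take over with state already loaded
            print("Primary lost - standby promoted, no reboot required")
```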
 