
Big computers are no problem

As I am more than a bit tired of getting piled on, I will not post in this thread again. I have expressed my views and gotten hammered for it.

Edit Note: What all of you who are arguing for large computers are saying is that after 5000 more years of development, the computer of the Far Future will resemble those of the late 1960s, 1970s, and 1980s in size, bulkiness, and cost.

No one's picking on you. We are having a group discussion. You happen to hold the minority view.

[m;]Your participation or the lack thereof is not something you should be discussing in thread. If you choose to bow out, just do it. [/m;]

I don't respond particularly well to attempts to hold a thread hostage.

There's no expectation of continued participation by any participant, and no ownership of the thread save that of the board.

CC: Staff, MWM
 
Edit Note: What all of you who are arguing for large computers are saying is that after 5000 more years of development, the computer of the Far Future will resemble those of the late 1960s, 1970s, and 1980s in size, bulkiness, and cost.

Several factors combine in several people's minds...
1) Moore's Law (the number of transistors on a single IC doubling every two years) broke down around 2010.
2) Circuit sizes are reaching the physical limits of the materials involved. ICs cannot readily be shrunk much further than the current top end, because the top end is already becoming unstable due to electron tunneling.
3) Bit-widths keep increasing, thus increasing chip complexity.
4) Faster means more power needed, and thus more heat generated. Consumer electronics are already heat-limited more often than compute-limited.
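On point 4, the usual back-of-envelope model is CMOS dynamic switching power, P ≈ C·V²·f: pushing the clock (and the supply voltage needed to keep it stable) drives heat up faster than performance. A toy sketch, with made-up component values:

```python
def dynamic_power(capacitance_f, voltage_v, freq_hz):
    """Classic CMOS switching-power approximation: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 3e9)  # a notional chip: 3 W at 3 GHz, 1.0 V
hot = dynamic_power(1e-9, 1.3, 6e9)   # doubling the clock usually needs more voltage too
print(round(hot / base, 2))           # ~3.4x the heat for 2x the clock
```

That quadratic voltage term is why chips hit the heat wall long before they hit any raw switching-speed wall.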

Now, for ships... like the school, it's not going to be one big room, but lots of components distributed, plus some central architecture. Given the concerns above, we can expect more and more transistors per chip, but not by packing more chips per unit area: more area at the same near-peak density.

We may see 3D printing make for 3D chip designs... which can increase throughput by better use of case space.

Likewise, the server farm for Google is capable of about 3x what it actually computes... except that it cannot sustain that due to overheating. A large portion of modern computer racking is about guaranteeing airflow for cooling to each installed device, and thermal sinking to distribute the high temperature of the chip to a lower-temperature but much larger radiator, which conducts to air, which then convects out of the case.

The total computational power of Google far exceeds everything the US Military had in 1980... including all of NASA.
In 2013, Facebook alone served 750 petabytes daily.

The combination of distribution and the physical limitations of how we make semiconductor processors is limiting further growth. The actual area is increasing, not the power per unit area.

Past a certain point, it's going to be better to make the devices more 3-dimensional than currently. But the era of exponential growth in computing power is likely dead. (Note also: Moore's Law was about transistors per IC, not actual IOPS or FLOPS.)

Likewise, past a certain point, one's going to need to internally cool the chips, rather than relying upon heat transferral via exterior contact.

https://storageservers.wordpress.com/2013/07/17/facts-and-stats-of-worlds-largest-data-centers/
 
Yet it does less.

There's more to computing than raw CPU power.

My iPhone is a marvel: the fluid gestures, the on-board image correction, the "AI chip" doing 600 BILLION operations per second just so it can recognize that it's me trying to turn it on.

My phone made me a "movie" from the photos stored on it. It was a montage of stills and video of my cats. Remarkable, since my photos are not part of the cloud; I don't store any of it online. So I know that my phone didn't just go and upload my photos to Apple, where acre-feet of servers plodded away to make me a kitty-cat movie. The phone figured all that out on its lonesome during downtime while just sitting on my desk.

But, from a utility point of view, it doesn't do as much as my desktop does, with its large screen, big keyboard, the drive array, etc. I'm certainly not typing this post on it.

So, computing utility is not necessarily tied to the size of the microchip doing the work. There's all sorts of other stuff potentially involved.
Speaking of desktops and mainframes and all that, do you think terminals aboard ships would be flatscreens and such instead of say CRT monitor types?

Generally what do terminal interfaces look like in Traveller artwork?

For example, could interfaces also potentially increase the amount of volume a terminal occupies?
 
Edit Note: What all of you who are arguing for large computers are saying is that after 5000 more years of development, the computer of the Far Future will resemble those of the late 1960s, 1970s, and 1980s in size, bulkiness, and cost.

That was my first point. Projects expand to occupy the space they are given. If your computer has 16kB of ram, your programs will take 16kB of ram. If your computer has 16TB of ram, your program will take 16TB of ram. The larger program(s) will be hugely more capable. The same thing with the hardware.

I get the feeling, however, that you are asking about simplifying the Cepheus starship design system, of which removing the computer as a separate component is one idea.

I would say this is an interesting idea. You could also argue that life support system should take several tons of space, but is distributed through the ship. Same with the power distribution system. The power cables that get power from the fusion plant to the laser turret or sensor systems must be impressive.

So explore the idea that the "computer system" is another integrated, distributed component of the ship design rather than separate component.

There were two reasons why computers were a separate component.

First, the computer needed to be equal to or larger than the jump drive. This would argue for having the computer just be part of the Jump Drive. But there was a "bis" computer, where a 1bis computer was sufficient for jump-2. This was a design trade-off.

Second was the computers had a defined capability to run programs. The idea here was to add a level of tension during the ship combat for the players. You needed to run maneuver (to dodge missiles), gunnery (to fire back), navigate (to calculate jump), jump (to escape). Not all of which fit into the computer at the same time. Make a tactical choice.

And if you are building a new ship designed for dangerous territory, adding a larger computer at the expense of cargo space or additional crew was a design trade-off.

Neither of these requires you allocate a room with a large rack of computer equipment.

So, yes you can argue that, as part of the ship design, computers take no space because they are a distributed component. And there isn't a separate computer component (space or no), the ship has a computer capable of running the ship. And any upgrades (e.g. installing laser weapons in the turret) includes the computer upgrades.
 
There were two reasons why computers were a separate component.

First, the computer needed to be equal to or larger than the jump drive. This would argue for having the computer just be part of the Jump Drive. But there was a "bis" computer, where a 1bis computer was sufficient for jump-2. This was a design trade-off.

Second was the computers had a defined capability to run programs. The idea here was to add a level of tension during the ship combat for the players. You needed to run maneuver (to dodge missiles), gunnery (to fire back), navigate (to calculate jump), jump (to escape). Not all of which fit into the computer at the same time. Make a tactical choice.

I agree with this assessment.

The zeitgeist of Traveller is very much late 1970's computer. That's what they imagined... what they knew. Movies from the period reflect this outlook. So does much of the Keith Brothers art. Including round tape data rolls. Like old film reels. It was the look of the time.

I think it makes sense in a world of disparate tech levels and a need to use widely variant technology products. Assume a high end volume necessity before miniaturization shrinks things a bit. The argument about performance matching available space also seems consistent and logical in my subjective opinion. I mostly had experience with military computers in the early days.

I usually think of the Big Bopper from the film War Games, unless I am remembering incorrectly. Or Electric Dreams.

Shalom,
M.
 
I agree with this assessment.

The zeitgeist of Traveller is very much late 1970's computer.
Of course it is. Traveller is Shotguns in Space and extrapolations from that. The whole "super high tech" model of society never struck me as Traveller. It's not Star Trek.

That said, just because the model SEEMS very 70s "room filled with a mainframe" doesn't mean that it's necessarily so. Because We Don't Know. We Don't Know what a "computer" needs to do, or look like, or how it needs to be built, to control a handwavium Maneuver Drive. Or a Jump Drive. Or a weapons system that can hit objects out to 100,000 km in combat conditions.

Do you know how big a quantum computer framing complex is? I sure don't. Silicon may be old news. "What we lose in density we gain in performance." That low-rent silicon really just peters out when the circuits get down below 3nm and 20GHz; it just doesn't scale anymore. But the Traveller-tech quantum dynamic substrate, which is far less dense, puts us into THz processing power -- FINALLY something that can make a Jump calculation within a human's lifetime!

See? We Don't Know. We Don't Know a lot of stuff in Traveller. Is that what Mr. Miller et al. were visualizing when they decided that computers needed to consume a significant amount of volume? Who cares!

The book says Computers are XX dTons, so ya know what? Computers are XX dTons. They may be big, but they're so ubiquitous and ordinary, nobody actually cares what makes them work as long as the guns shoot and the ships jump.

A little bit of overkill to process the inventory for the Steward? Yeah, probably. But hey, it's the computer we've got, and we just can't buy the crummy slow ones any more -- they don't make them.

I usually think of the Big Bopper from the film War Games, unless I am remembering incorrectly. Or Electric Dreams.
War Games was W.O.P.R. Electric Dreams was a PC, named Edgar, in Miles's apartment.

"That's telling them, Howard!"

https://www.youtube.com/watch?v=f0tf63etIhE&frags=pl,wn

<3 Electric Dreams.
 
Wait do ship computers in Traveller only run one program at once? And you have to like physically swap out programs in order to do other stuff? Because that sounds a lot like how Vilani computers sort of work in GURPS Traveller: Interstellar wars.
 
As I am more than a bit tired of getting piled on, I will not post in this thread again. I have expressed my views and gotten hammered for it.

Edit Note: What all of you who are arguing for large computers are saying is that after 5000 more years of development, the computer of the Far Future will resemble those of the late 1960s, 1970s, and 1980s in size, bulkiness, and cost.


Functional forums of value do not exist for the ratification of opinions.



Posting is risk: there is the risk of learning something new, and of contradictory opinions or facts being presented. And people can have an opinion differing from yours without any intent of 'piling on'.


Most of the value of any forum IMO is 'reality checking' or garnering new POVs that might open up the original poster's mind, or anyone else reading the thread even if the OP disagrees with the follow-on comments.
 
Wait do ship computers in Traveller only run one program at once? And you have to like physically swap out programs in order to do other stuff? Because that sounds a lot like how Vilani computers sort of work in GURPS Traveller: Interstellar wars.


In CT even the cheaper computers ran multiple programs at a time; a careful reading of the computer combat rules shows the programs in the CPU run during a phase and then swap out to storage while other programs in storage load up and run in the CPU during their phase.


So during combat the anti-missile program might load up during that phase, then Gunnery Target and Predict during the shooting phase, then Target and Launch during missile firing, etc.


Fair example of multiprogramming for 70s computing, pretty much partitions (sort of a VM space for each program to run in).
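That swap-in/swap-out reading could be sketched as a loop. Program names are from the post above; the CPU capacity and program sizes here are illustrative, not the CT tables:

```python
# Toy model of CT-style phase multiprogramming: programs live in storage,
# each phase loads what it needs into the limited CPU space, runs it,
# then swaps it back out for the next phase.
storage = {"Anti-Missile": 1, "Target": 1, "Predict": 2, "Launch": 1}
CPU_CAPACITY = 3  # made-up number for the sketch

def run_phase(phase, wanted):
    loaded, used = [], 0
    for name in wanted:
        size = storage[name]
        if used + size <= CPU_CAPACITY:  # swap in from storage if it fits
            loaded.append(name)
            used += size
    # programs swap back out to storage when the phase ends
    return loaded

print(run_phase("Defense", ["Anti-Missile"]))                      # ['Anti-Missile']
print(run_phase("Shooting", ["Target", "Predict"]))                # ['Target', 'Predict']
print(run_phase("Overloaded", ["Target", "Predict", "Launch"]))    # Launch doesn't fit
```

The last call is the tactical choice in miniature: everything is installed, but not everything fits in the CPU during one phase.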


It just doesn't scale with the radical increases in TL between TL6 and TL15, even before factoring in what we have experienced during the last 35+ years of computing increases.


One other thought about Moore's Law- this is all based on relatively cheap silicon. There are more expensive and rare metals to base our machines on that can switch faster and/or take more heat. Exploitation of other planets and/or asteroids may open up options. How much heat can an Iridium-based computer take?


Or perhaps we avoid a lot of heat limits by using mediums that do not require constant refreshes to store or process a value, and of course the human brain has some architectural surprises waiting IMO that might be applied to non-biological circuitry (especially the underlying math and likely what I call symbolic array logic routing).


Factor in whatever we can get out of quantum computing, and our salad days may not be over in terms of computing per square centimeter.


It will just likely be a lot more expensive to get to.
 
In CT even the cheaper computers ran multiple programs at a time; a careful reading of the computer combat rules shows the programs in the CPU run during a phase and then swap out to storage while other programs in storage load up and run in the CPU during their phase.

So during combat the anti-missile program might load up during that phase, then Gunnery Target and Predict during the shooting phase, then Target and Launch during missile firing, etc.

Fair example of multiprogramming running for 70s computing, pretty much partitions (sort of a VM space for each program to run in).


It just doesn't scale with the radical increases in TL between TL6 and TL15, even before factoring in what we have experienced during the last 35+ years of computing increases.

One other thought about Moore's Law- this is all based on relatively cheap silicon. There are more expensive and rare metals to base our machines on that can switch faster and/or take more heat. Exploitation of other planets and/or asteroids may open up options. How much heat can an Iridium-based computer take?

Or perhaps we avoid a lot of heat limits by using mediums that do not require constant refreshes to store or process a value, and of course the human brain has some architectural surprises waiting IMO that might be applied to non-biological circuitry (especially the underlying math and likely what I call symbolic array logic routing).

Factor in whatever we can get out of quantum computing, and our salad days may not be over in terms of computing per square centimeter.

It will just likely be a lot more expensive to get to.
So how would a more modern idea of how computers run programs today change Traveller combat, given what you said about 1970s computers running different programs at different times?
 
So how would a more modern idea of how computers run programs today change Traveller combat, given what you said about 1970s computers running different programs at different times?


Phew, that's part modeling computer operations and part game value/flow.


For one thing, the CT computers sort of model the small memory limitations of those machines. Nowadays both memory and storage are cheap; modern limitations are more a factor of creating incredibly complex processes that churn through massive data and/or are expected to all be realtime.


And doing something like analyzing a planetary survey data set is a different workload type than coordinating a battleship's power, fire control, and maneuver; you would tend to have optimized machines for each.



The little snippets I saw of TNE computers in the catalog book is the sort of thing I would lean towards if you were wanting to be simulationist in that it deals with the time factor and greatly powerful machines per TL increase.


Back in the day, as noted, the Complexity factor from Space Opera and GURPS Cyberpunk seemed to offer the best capture of how to differentiate capacity levels without getting into specific numbers that looked ridiculous within 5 years.


I always loved the fidgety detail of the LBB8 robotic computers, that seems to give a progression that while not capturing the exponential increase in power, at least gives an internally consistent roadmap to processing and OS increases.


From a game perspective the Mongoose computers are IMO 'good enough' for play purposes, especially giving a consistent look and interaction for ship, personal, ship/vehicle, robotic and cybernetic computing.


For retrofitting CT computers, I went with the sensor idea as mentioned before, with more complex sensor rules I'm not going into here.


I also did an option where the idea is that the full-cost computers are spaceworthy mission-critical systems (evidenced by them functioning after taking 5 hits that would ruin most weapons or drive systems on an ACS), but you could buy cheaper ones that drop in price by a factor of 10 each, at the cost of dropping reliability.



So ultimately you could run your ship with a commodity machine but one hit and your system is gone, and to get redundancy you will be using more space to house several computers.


Perhaps define each computer model level as a complexity capability and then matching workloads to them. Say, a Model/1 is a baseline and is functionally equivalent to the USN's early UYKs, good for navigation, basic sensors, fire solutions and engineering, and later ones are exponential power to that standard.


Maybe something like the model is both the base and the exponent, so 2^2, 3^3, 4^4, etc.


Probably need to figure out workloads based on ship systems, EW challenge (TL differences would probably be much larger), and analysis workload- something that bogs a Model/3 for days might be minutes for a Model/4 and seconds for a Model/5.


Finally, you could treat the Model number as a computery form of INT and do a lot of task rolling off that. Would give you fast shorthand as to the AI problem solving and 'understanding' the computer has- makes for a definitely clear delineation between TLs and what you can expect those machines to be able to do.


Just think, Watson at its best now would be a Model/3, so INT 3.



I know that breaks the traditional TL timeline, but heck, RL computing broke that some time ago; IMO might as well get scifi and deal with the talking buggers.
 
One other thought about Moore's Law- this is all based on relatively cheap silicon. There are more expensive and rare metals to base our machines on that can switch faster and/or take more heat. Exploitation of other planets and/or asteroids may open up options. How much heat can an Iridium-based computer take?

"Moore's Law" as originally intended was not that transistor density would increase by a factor of two every 18 months, how it is usually quoted, but that the cost of producing a given density of transistors would decrease by a factor of two every 18 months. I think that was only ever intended to be an observation rather than trying to state a physical law (which of course it is not). The result of that at TL15 is that 7 or 5 nanometer is the smallest feature size available for CPUs, but such CPUs can be 3d printed on a home 3d printer for almost free.

The way this was explained in our own version of the traveller universe is that Earth had a particular set of circumstances and technological push that made the development of extremely high density computational (and communication) devices advantageous but very rare in the Imperium - it isn't a singular technology that got real-world Earth to where it is now, but devoting a surprising amount of the economic output of earth to developing a broad set of related technologies. It isn't that much of a stretch to imagine that Earth would develop this but the dominant spaceflight technology would be Vilani-style and that the Vilani simply never devoted such a fraction of their economic output to it.

That is quite a bit of economic handwavium, but no more so than jump drives.
 
Maybe something like the model is both the base and the exponent, so 2^2, 3^3, 4^4, etc.


Probably need to figure out workloads based on ship systems, EW challenge (TL differences would probably be much larger), and analysis workload- something that bogs a Model/3 for days might be minutes for a Model/4 and seconds for a Model/5.

The n^n progression yields 1, 4, 27, 256, 3125, 46656, and if you take complexity to be represented by performance, Earth between 1980 and 2019 had a much larger jump than 1:46656 (that'd be 1 mflops to 46 gflops - and it was really more like 50 kflops to ~6 tflops if you consider double precision fp multiply-accumulates - that's 1:120 million, comparing an 8087 to a Volta GPU, or something like that). If we started in 1980 at model 1 for computers, we'd be at around model 8.6 now.
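For what it's worth, that arithmetic checks out. A quick sketch, assuming the n^n reading of the model number and the flops figures quoted above:

```python
# Model/n capability under the n^n scheme
models = [n ** n for n in range(1, 7)]
print(models)  # [1, 4, 27, 256, 3125, 46656]

# Real-world jump, 1980 -> 2019: ~50 kflops (8087-class) to ~6 tflops (GPU)
ratio = 6e12 / 5e4  # 1.2e8, i.e. 1:120 million

# Solve n**n = ratio for n (bisection; n**n is monotonic increasing for n > 1)
lo, hi = 1.0, 20.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid ** mid < ratio:
        lo = mid
    else:
        hi = mid
print(round(lo, 1))  # 8.6 -- the "model 8.6" figure
```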
 
In most scenarios, and the rules, what is the biggest change between say a TL 7 computer vs say a TL 15 computer?

Like if you were to equip one ship with TL 15 computers and another with say TL 7 computers, what would be the difference between the capabilities of the computers be and how would it affect gameplay?

Is the biggest difference in how much space it takes and how much memory it can store or something?
 
The nn progression yields 1, 4, 27, 256, 3125, 46656, and if you take complexity to be represented by performance, Earth between 1980 and 2019 had a much larger jump than 1:46656 (that'd be 1 mflops to 46 gflops - and it was really more like 50 kflops to ~6 tflops if you consider double precision fp multiply-accumulates - that's 1:120 million, comparing an 8087 to a Volta GPU, or something like that). If we started in 1980 at model 1 for computers, we'd be at around model 8.6 now.


I'm aware of the scaling; I would nonetheless tend to think it's not all that far off, as a LOT of modern computing is wasted.


The complexity and/or data loads they are asked to handle are a lot less massaged/filtered/managed, and cheaper in terms of programmer-hours and optimization, resulting in less real-world performance per dollar/credit than a straight-line approximation of processor capability would seem to suggest.
 
In most scenarios, and the rules, what is the biggest change between say a TL 7 computer vs say a TL 15 computer?

Like if you were to equip one ship with TL 15 computers and another with say TL 7 computers, what would be the difference between the capabilities of the computers be and how would it affect gameplay?

Is the biggest difference in how much space it takes and how much memory it can store or something?


Depends on which version of the game you play.


Biggest one that seems to go across all versions are limits to how big a jump program you can run.
 
I'm aware of the scaling; I would nonetheless tend to think it's not all that far off, as a LOT of modern computing is wasted.


The complexity and/or data loads they are asked to handle are a lot less massaged/filtered/managed, and cheaper in terms of programmer-hours and optimization, resulting in less real-world performance per dollar/credit than a straight-line approximation of processor capability would seem to suggest.

It depends a little on what "wasted" means, too. It can be very computationally intensive to do voice recognition and natural language processing while actually getting very little information to the computer to process. "What is the square root of 691?" takes orders of magnitude more computation to recognize the meaning of the voice saying it than it does to compute a square root.

That's not really universally true, though. Running post-Minkowski n-body at high precision (which would be either a nav or predict function) really would take much more computational power than 1980s systems had available for real-time applications. Even decoding a single H.265 video frame is something that would take a 1980s-era computer - if it were even possible given memory constraints - months to do, something that is done easily in real time now, and much of that is purely availability of computational resources. I think in general when the problem is a fundamentally computational one, the wasted resources aren't a major factor. Performing 100 billion floating point matrix inner products / matrix-vector transforms really is going to be millions of times faster on a modern computer (with GPU) than on an 8087-equipped PC.

I'd expect that the performance critical parts of whatever software needs to be run wouldn't be that inefficient - but I think it's fair game to describe whatever limitation is needed to justify the larger physical computer.
 
So how would a more modern example/idea of how computers run programs today change Traveller combat, considering from what you said about 1970s computers running different programs at different times?

Well, MgT handles this by saying that the limiting factor of a computer (and the primary stat that increases with Model and TL) is "Bandwidth", or simultaneous processing power. Data storage capacity is explicitly said to be "effectively unlimited" by TL9, and a Library program is included as standard on the ship's computer.

Basically, the ship can have as many programs installed as it wants, but can only have so many running at the same time (for example, it might be able to run Fire Control/2 or Evade/2, but if it runs both at the same time it can only run Fire Control/1 and Evade/1). This creates options and tactical choices for the players, letting them decide what to run, what not to run, etc.
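A toy sketch of that constraint; the Bandwidth value and per-program costs here are illustrative, not the MgT tables:

```python
# Hypothetical sketch of the MgT-style limit: install anything,
# but the set of *running* programs must fit within the computer's Bandwidth.
def can_run(bandwidth, running):
    """running maps program name -> bandwidth cost of that version."""
    return sum(running.values()) <= bandwidth

BW = 2  # a small ship's computer, for illustration
print(can_run(BW, {"Fire Control/2": 2}))                # True
print(can_run(BW, {"Fire Control/2": 2, "Evade/2": 2}))  # False: over Bandwidth
print(can_run(BW, {"Fire Control/1": 1, "Evade/1": 1}))  # True: downrated versions fit
```

Same tactical tension as the CT version, just expressed as a shared budget instead of swapping programs in and out.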
 
In most scenarios, and the rules, what is the biggest change between say a TL 7 computer vs say a TL 15 computer?

Like if you were to equip one ship with TL 15 computers and another with say TL 7 computers, what would be the difference between the capabilities of the computers be and how would it affect gameplay?

Is the biggest difference in how much space it takes and how much memory it can store or something?
The Model/n rating is standardized across tech levels -- a Model/1 at TL15 has the same capabilities as a Model/1 at TL5. In practice, the TL5 version will be a multi-ton behemoth of vacuum tubes and magnetic-core memory, but it will do everything the TL15 version would. Might need a dedicated extra Energy Point from the power plant to drive it though...

The TL15 version of a Model/1 is an emulator app running on the captain's Space-iPhone.
 
The Model/n rating is standardized across tech levels -- a Model/1 at TL15 has the same capabilities as a Model/1 at TL5. In practice, the TL5 version will be a multi-ton behemoth of vacuum tubes and magnetic-core memory, but it will do everything the TL15 version would. Might need a dedicated extra Energy Point from the power plant to drive it though...

The TL15 version of a Model/1 is an emulator app running on the captain's Space-iPhone.
So essentially the lower-tech computer requires more space and energy to run? Does something like its interface also potentially affect how well you can read info, or how much space it takes (since Cepheus Engine/Mongoose Trav 1e notes that more advanced terminals can have holographic interfaces that take up far less space than physical keyboards and CRT monitors)?
 