kilemall
SOC-14 5K
1970s machines, whether the very first personal computers or mainframes, took a few minutes to load programs, especially if they were on some sort of tape media. Booting up also took a few minutes, and could be as slow as running through a card deck.
The swapping between active and 'storage' programs should be considered instant, as that's really more like multiple programs all loaded at once but many 'swapped out' to virtual memory, ready to run the instant the system gives them enough resources to actually execute.
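Something like this rough sketch is what I have in mind; the class, program names, and CPU costs are purely my own illustration, not anything from the rules:

```python
# Toy sketch of the 'all loaded, some swapped out' idea.
# Names and numbers are invented for illustration, not taken from CT.

class ShipComputer:
    def __init__(self, cpu_capacity):
        self.cpu_capacity = cpu_capacity   # how much can execute at once
        self.resident = {}                 # program name -> CPU cost (everything stays loaded)
        self.active = set()                # programs currently executing

    def load(self, name, cpu_cost):
        """Loading from the library tape/disk is the slow part; do it up front."""
        self.resident[name] = cpu_cost

    def activate(self, name):
        """'Swapping in' a resident program is effectively instant --
        just bookkeeping, provided there's capacity left."""
        cost = self.resident[name]
        in_use = sum(self.resident[p] for p in self.active)
        if in_use + cost > self.cpu_capacity:
            return False                   # would exceed capacity; swap something out first
        self.active.add(name)
        return True

    def deactivate(self, name):
        """Swap a program back to 'storage'; it stays resident and ready."""
        self.active.discard(name)


computer = ShipComputer(cpu_capacity=5)
for prog, cost in [("Maneuver", 1), ("Jump-2", 2), ("Target", 1), ("Launch", 1)]:
    computer.load(prog, cost)

computer.activate("Maneuver")
computer.activate("Target")
computer.deactivate("Target")   # instant swap out...
computer.activate("Jump-2")     # ...and instant swap in, no reload from the library needed
```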
Of course this is about what the CT system was modeled on. I'd assume these are avionics-grade machines or better (they'd better be, for millions of credits and with the whole ship at risk), so they likely have instant-on capability, extensive self-diagnostics running on startup, and internal resource swaps on error/damage detection.
Which gets to another point as to why they could be bulky and/or spread out: their damage resiliency.
By CT ship damage standards, the ship computer can largely shrug off damage that would cripple a Type A engineering section. By 7 hits it is a very flaky machine, but still serviceable, whereas a smaller ACS would be dead in space and the larger ones painfully damaged.
HG doesn't really capture that, but the computers still take beatings that leave weapons damaged and drives crippled.
So, a LOT of redundancy, distributed failover nodes, backup consoles, etc. that soak up damage, and take up space.
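As a toy illustration of that failover behaviour (node counts and the degradation curve are my own invention, not the CT or HG damage tables):

```python
# Toy failover sketch, purely illustrative -- each hit knocks out one
# redundant node and the survivors pick up the load.

class RedundantComputer:
    def __init__(self, nodes, minimum_nodes):
        self.total = nodes              # distributed nodes / backup consoles
        self.up = nodes
        self.minimum = minimum_nodes    # below this, the machine is dead

    def take_hit(self):
        """Each hit knocks out one node; failover reroutes work to the rest."""
        if self.up > 0:
            self.up -= 1

    def status(self):
        if self.up == self.total:
            return "fully operational"
        if self.up >= self.minimum:
            return f"flaky but serviceable ({self.up}/{self.total} nodes up)"
        return "dead"


ship = RedundantComputer(nodes=8, minimum_nodes=1)
for hit in range(1, 8):
    ship.take_hit()
    print(f"hit {hit}: {ship.status()}")
# After 7 hits it is still limping along on its last node -- roughly the
# 'very flaky but serviceable' behaviour described above.
```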
I was also reading an editorial from Grognard where Wiseman answered the critics of the Big Machine by saying the large computer spec was also intended to cover crewing space for controlling the machine. So, that too, and the larger models may have 2 crew stations.
The 1990s mainframes I worked on often did, and I saw earlier-generation machines that had the computer equivalent of a flight engineer's station, with direct external controls over the functioning of the machine. Even today, mainframes will typically have hardware and a console that control how the system boots and with what resources, separate from the main control/execution console.