So how would a more modern idea of how computers run programs today change Traveller combat, considering what you said about 1970s computers running different programs at different times?
Phew, that's part modeling computer operations and part game value/flow.
For one thing, the CT computers sort of model the small memory limitations of those machines. Nowadays both memory and storage are cheap; modern limitations are more a factor of creating incredibly complex processes that churn through massive data and/or are expected to run in real time.
And doing something like analyzing a planetary survey data set is a different workload type than coordinating a battleship's power, fire control and maneuver, so you would tend to have optimized machines for each.
The little snippets I saw of TNE computers in the catalog book are the sort of thing I would lean towards if you wanted to be simulationist, in that they deal with the time factor and the greatly more powerful machines per TL increase.
Back in the day, as noted, the Complexity factor from Space Opera and GURPS Cyberpunk seemed to offer the best way to differentiate capacity levels without getting into specific numbers that would look ridiculous within 5 years.
I always loved the fidgety detail of the LBB8 robotic computers; that gives a progression which, while not capturing the exponential increase in power, at least provides an internally consistent roadmap to processing and OS increases.
From a game perspective the Mongoose computers are IMO 'good enough' for play purposes, especially since they give a consistent look and interaction across personal, ship/vehicle, robotic and cybernetic computing.
For retrofitting CT computers, I went with the sensor idea as mentioned before, with more complex sensor rules I'm not going into here.
I also did an option where the idea is that the full-cost computers are spaceworthy mission-critical systems (evidenced by them functioning after taking 5 hits that would ruin most weapon or drive systems on an ACS), but you could buy cheaper ones, each a factor of 10 lower in price and correspondingly lower in reliability.
So ultimately you could run your ship with a commodity machine, but one hit and your system is gone; to get redundancy you will be using more space to house several computers.
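A rough sketch of how that discount option could pencil out in Python- the 5-hit survivability of the full-cost unit is from above, but the hit tolerance I assign the intermediate tiers is my own guess, not a tested rule:

```python
# Sketch of the discounted-computer option: each price tier costs a tenth
# of the one above it and survives fewer hits before failing outright.
# The 5-hit figure for full-cost units is from the post; the hit
# tolerance of the cheaper tiers (5, 3, 1) is an assumption.

def discount_computer(base_price_mcr, tier):
    """tier 0 = full-cost mission-critical unit; higher tiers are cheaper.

    Returns (price in MCr, hits survived before the computer is destroyed).
    """
    price = base_price_mcr / (10 ** tier)
    hits_survived = max(5 - 2 * tier, 1)  # assumed banding: 5, 3, 1 hits
    return price, hits_survived

# Example: a MCr 2 computer bought at each tier.
for tier in range(3):
    price, hits = discount_computer(2.0, tier)
    print(f"tier {tier}: MCr {price:g}, survives {hits} hit(s)")
```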
Perhaps define each computer model level as a complexity capability and then match workloads to them. Say, a Model/1 is the baseline, functionally equivalent to the USN's early UYKs- good for navigation, basic sensors, fire solutions and engineering- and later models are exponential increases in power over that standard.
Maybe something like the model number is both the base and the exponent, so Model/2 = 2^2, Model/3 = 3^3, Model/4 = 4^4, etc.
Probably need to figure out workloads based on ship systems, EW challenge (TL differences would probably loom much larger there), and analysis workload- something that bogs a Model/3 down for days might take hours on a Model/4 and minutes on a Model/5.
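The base-and-exponent idea pencils out like this; the 2-day baseline job is just an illustrative number I picked, not anything canonical:

```python
# Sketch of the "model is both base and exponent" scaling: a Model/n
# computer has n**n relative processing power, so a fixed workload's
# run time shrinks by the power ratio as the model number climbs.

def model_power(model):
    return model ** model  # Model/2 = 4, Model/3 = 27, Model/4 = 256, ...

def task_time_hours(baseline_hours, baseline_model, model):
    # Same job on a faster machine: time scales inversely with power.
    return baseline_hours * model_power(baseline_model) / model_power(model)

# An analysis job that bogs a Model/3 down for 2 days (48 hours):
for m in (3, 4, 5):
    print(f"Model/{m}: {task_time_hours(48, 3, m):.2f} hours")
# Model/4 finishes in about 5 hours, Model/5 in about 25 minutes.
```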
Finally, you could treat the Model number as a computery form of INT and do a lot of task rolling off that. It would give you fast shorthand as to the AI problem solving and 'understanding' the computer has, and makes for a clear delineation between TLs and what you can expect those machines to be able to do.
Just think, Watson at its best now would be a Model/3, so INT 3.
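A quick sketch of rolling tasks off that Model-as-INT idea, using a Mongoose-style 2D6 + DM vs 8 check- the DM banding I use is my own simplification for illustration, not the official characteristic table:

```python
# Sketch: treat the computer's Model number as its INT and make a
# Mongoose-style task check (2D6 + characteristic DM >= difficulty 8).
# The DM formula below is an assumed linear banding, not the book table.
import random

def computer_task_roll(model, difficulty=8, rng=random):
    """Return True if the computer passes the task check."""
    dm = (model - 6) // 3  # assumption: INT 3 -> -1, INT 6 -> 0, INT 9 -> +1
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    return roll + dm >= difficulty

# A Model/3 (INT 3, DM -1) needs a natural 9+; a Model/9 only needs 7+.
print(computer_task_roll(3))
```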
I know that breaks the traditional TL timeline, but heck, RL computing broke that some time ago; IMO might as well get scifi and deal with the talking buggers.