You are missing some of the point of micro-services. Each part of the app is hosted separately, potentially by different providers. Micro-services are typically small and easily hosted on free hosting services (at small scales, like Traveller use).
It still has to be hosted. Someone has to choose to take on the burden of finding a provider, standing it up, and maintaining it. Which means that it has a lifespan directly correlated to however long the person wants to maintain it. Whether that be days, weeks, months, or years.
And it suffers from the singular attribute of networks that folks tend to always forget.
They're unreliable, and that's just the connectivity part.
Samsung is shutting down its first-generation SmartThings hub this month.
https://arstechnica.com/gadgets/2021/06/samsung-is-killing-the-first-gen-smartthings-hub-this-month/
Millions of dollars of hardware from a multi-billion dollar mega corp, going poof. Like Keyser Soze. "And like that...he is gone."
What happens when Marc gets tired of this bulletin board? Or aramis? or the person who runs Travellermap?
What have we learned about the always-on, never-forgetting internet? That it's all a lie. An illusion. It's laced with bit rot and weak, broken dependencies. In some spaces, building 10-year-old code is quite challenging. Entire communities vanish overnight.
Thank heavens for archive.org, as incomplete as it is.
And, frankly, I don't know how well something like a Docker image ages. Will one from 5 years ago still work? I have no idea. Plus we're in an age where Intel is fighting the rise of ARM infrastructure, and Apple is switching over wholesale. Running a Docker image on a Mac is challenging enough; I don't know if you can run one on Windows, and will I be able to run one on a new M1-based Mac? I have no idea.
Then, of course, the client code has to be configured to route properly to these services. Say travellermap vanishes one day, but, since it's been open sourced, someone else decides to host it. Now all of the URLs change in the client code "that worked for years(tm)". Maybe the port has to change. Perhaps, especially in this modern environment, new certificates need to be cut and exchanged. All of that legacy code is suddenly "broken".
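To illustrate the routing point (the names and endpoint below are purely made up, not travellermap's actual API): if the client hard-codes the service location, every change of host, port, or certificate means editing code; pulling the base URL out into configuration at least turns that into a config edit.

```python
import os
import urllib.request

# Hypothetical client-side routing: the service's base URL comes from config,
# not from a string buried in the code.
SERVICE_BASE = os.environ.get("MAP_SERVICE_BASE", "https://example.invalid")

def fetch(path):
    # Host, port, and certificate can all change when someone else takes over
    # hosting; with this, the fix is an environment variable, not a code hunt.
    with urllib.request.urlopen(f"{SERVICE_BASE}{path}", timeout=10) as resp:
        return resp.read().decode("utf-8")

# e.g. set MAP_SERVICE_BASE to the new host and call fetch("/api/sector")
```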
Yet the LBBs I have from 1977, scuffed, dog-eared, and worn as they are? Those still work.
It's a fine intellectual exercise: whether something as integrated as starship design can work with a coarse service framework. I honestly don't know; I don't think it can be done with just a bunch of individual services versus something that operates on the whole.
Simply changing the hull size recomputes almost every major formula on a ship, and potentially impacts most every major design rule. That means that while you could have a service that, given hull size and, say, TL, hands back a price, or mass, or something else, that's not enough. The ripple effect is legion, and the burden of managing that ripple should not be on the client.
That's a contrived example, of course.
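To put some (entirely invented) numbers behind that ripple, here's a rough Python sketch; the rules and rates below are not any edition's actual formulas, just stand-ins to show how one change cascades through the whole design:

```python
# Invented, simplified design rules purely to show the dependency chain --
# not actual Traveller formulas.
class ShipDesign:
    def __init__(self, hull_tons, tech_level):
        self.hull_tons = hull_tons
        self.tech_level = tech_level
        self.recompute()

    def recompute(self):
        # Each of these depends on hull size; change the hull and all of them move.
        self.hull_price = self.hull_tons * 100_000          # made-up rate
        self.drive_tons = max(1, self.hull_tons * 0.02)     # made-up percentage
        self.fuel_tons = self.hull_tons * 0.10              # made-up percentage
        self.cargo_tons = self.hull_tons - self.drive_tons - self.fuel_tons

    def set_hull(self, tons):
        # One change, whole-design ripple. A per-attribute service would push
        # the job of chasing this ripple onto every client.
        self.hull_tons = tons
        self.recompute()

ship = ShipDesign(hull_tons=200, tech_level=12)
ship.set_hull(400)   # price, drives, fuel, and cargo all change together
```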
Finally, the true definition of interoperability is not implementations per se, but documented protocols, formats, and APIs. Obviously that has to start somewhere. And ad hoc can become de facto and eventually de rigueur.
At a logical level, you can design around such services without relying on a network infrastructure. RPC architectures have been hiding that stuff for years: RPC, then CORBA, then SOAP, all rendered as high-level language API calls. But in the end, at the local code level, it doesn't look any different than a normal function or method invocation. So then you go "hmm", what's really being achieved here?
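For instance, a remote call wrapped in a local stub is indistinguishable at the call site from an ordinary function. This is a generic sketch, not any particular RPC framework, and the service URL and formula are fictional:

```python
import json
import urllib.request

def hull_price_local(tons, tech_level):
    # Plain local computation (made-up formula).
    return tons * 100_000

def hull_price_remote(tons, tech_level):
    # Same signature, but quietly goes over the wire to a hypothetical service.
    # The caller can't tell the difference -- until the network or the host goes away.
    url = f"https://example.invalid/api/hull_price?tons={tons}&tl={tech_level}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["price"]

# Call sites look identical, which is the point -- and the problem:
# price = hull_price_local(400, 12)
# price = hull_price_remote(400, 12)
```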
I spin around on this myself. My current path is a local program feeding a SQL database that can be readily queried ad hoc as an integration point, rather than having the client program host services. I'm on the edge of publishing stored procedures in the database to offer a more "API" experience.
That way, you have the client program and a database image, and then you can access the db with a 3rd party program, at least have access to the raw data, and perhaps use the stored procedures to manipulate it at a higher level. (Starting up the DB is straightforward; we're not spooling up Postgres or SQL Server here...)
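As a sketch of that shape, assuming something embedded like SQLite (table and column names are invented; an engine with real stored procedures would let you push the "API" layer into the database itself):

```python
import sqlite3

# Assuming an embedded database like SQLite -- no server to spool up.
# Table and column names here are invented for illustration.
con = sqlite3.connect("ships.db")

# The client program writes the design data...
con.execute("""CREATE TABLE IF NOT EXISTS ships
               (name TEXT PRIMARY KEY, hull_tons INTEGER, tech_level INTEGER, price INTEGER)""")
con.execute("INSERT OR REPLACE INTO ships VALUES (?, ?, ?, ?)",
            ("Beowulf", 200, 9, 37_000_000))
con.commit()

# ...and any 3rd party program can query it ad hoc as the integration point.
for row in con.execute("SELECT name, hull_tons, price FROM ships WHERE tech_level >= ?", (9,)):
    print(row)
```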
It's fair to say that, at 30,000 feet, the difference between that and hosting web service calls in the client is pretty much nil. The web service API may be better in the long run.
But short term, having access to the DB has, I think, high value and low cost for me as a developer.