
The speed of light and jump drives.

Issues I have to deal with:
They have a kick-ass sensor suite on board. Military-style stuff.
Tricked-out computer
The good sense to physically account for all that black box stuff.

Their issues
A really tricked-out ship with the low-rider package.
The inability to pass up a sweet target like this.

Here is what happened. Their ship looks like a 200-ton far trader. Right. It has 6G acceleration, hull armor, and two triple turrets. One of their favorite weapons is the EMP missile, purchased from the mob on the black market. (It is a long story. The campaign is almost two years old, playing weekly.)

They lie in wait for weeks to take THIS ship (a 4000-ton superfreighter in a small-ship universe). They wait until the right time and BAMO. The freighter fires off a few shots and so do they. They are on board fast enough and give the crew the option of fighting or taking the lifeboat. The crew abandons ship and the PCs take over with a prize crew, which comes complete with a computer expert who is a retired Tukera station manager. Well, they get the ship and jump out to an empty hex. There it sits.

I want to put the fear of god into them. The 41st Independent Squadron is on patrol in the area. They arrive too late to help, but they know what must have happened. They send a sensor ship out to “capture” the action. Now they will know that a 200-ton far trader with really kick-ass engines, armed to the teeth (for a 200-ton ship), did the deed, and that it had hull markings that look like this.

Of course they will change the markings, but I want them to have to run and perhaps get caught. The point is to give them a real challenge without making failure assured. I am anxious to find out what they will do. All four guys are quite creative and it will be really cool to hear their solution.


Surely once the Navy picks up the Tukera shuttle, they have more than the crew's statements; they also have the ship's downloaded logs and full sensor data. That is only for the shuttle, though, and doesn't include any data transfer from the freighter itself.

The data will include any weapon, power, and engine signatures as well as radio and comm chatter. Did the pirates communicate with each other, or did they make a point of maintaining comm discipline? Did they use codenames or real names?
Anyone who did speak, codenames or not, has also given the authorities a voiceprint.
 
As soon as I read the OP a few minutes ago I thought, "Nope, angular resolution and signal levels won't let it happen."

Aramis has covered signal levels well enough to make the point.

Angular resolution won't let you see any of what's going on. Assuming you can detect a signal, it's just going to be an undifferentiated mush at even a few light hours, never mind days.

Resolution is limited by wavelength. You can't beat it, even with super-sensors and a long baseline. Ships and their details are too small at any useful frequency. X-rays and gamma rays have short wavelengths, but they interact poorly and don't provide an image.

Not gonna post all the math unless someone wants me to, but it comes down to the fact that even with perfect optics and a point source, you don't get a point image. You get a disk with rings around it (the "Airy disk") whose angular size varies with wavelength, optical system effects (even the fact that there is an edge to the detector's area has deleterious effects), and so on. Anything that produces an extended image, rather than just acting as a point source, is even worse: parts of the image interact and interfere with other parts, so an extended image is self-destructive.
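
For anyone who does want a number, the usual back-of-envelope is the Rayleigh criterion: the smallest resolvable angle is roughly 1.22 times the wavelength divided by the aperture. A quick sketch, with the 550 nm wavelength and 10 m mirror assumed purely for illustration:

Code:
import math

# Rayleigh criterion: the smallest resolvable angle for a circular aperture
# is roughly 1.22 * wavelength / aperture diameter (in radians).
# Both numbers below are assumptions for illustration only.
wavelength_m = 550e-9   # visible light, ~550 nm
aperture_m = 10.0       # a very large single telescope

theta_rad = 1.22 * wavelength_m / aperture_m
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"Diffraction limit: {theta_rad:.2e} rad ({theta_arcsec:.3f} arcsec)")
# About 6.7e-08 rad, or roughly 0.014 arcsec, and that is the best case,
# before extended-source effects, noise, or the medium get involved.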

And I haven't even started on quantum effects.

Look up Rayleigh, Airy, and Planck for a primer.

I design instrumentation and sensor systems IRL, so I get to run headlong into this (and customers who don't believe in physics, as far as I can tell ;) ) all the time.
 
Interferometry can reduce the fuzzy-disk problem, and it also cuts glare drastically. The TPF is planned as a set of four scopes, each 4 m, which should give an angular resolution better than anything yet built.
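
Rough numbers on what that buys you: an interferometer's resolution goes roughly as wavelength over baseline rather than wavelength over mirror size. The 200 m baseline and mid-infrared wavelength in this sketch are assumptions for illustration, not the actual TPF figures:

Code:
import math

# An interferometer's angular resolution scales roughly as wavelength / baseline,
# not wavelength / mirror diameter. The baseline and wavelength below are
# assumed for illustration; they are not the actual TPF specification.
wavelength_m = 10e-6   # mid-infrared
baseline_m = 200.0     # assumed separation between the scopes

theta_rad = wavelength_m / baseline_m
theta_mas = math.degrees(theta_rad) * 3600 * 1000   # milliarcseconds

print(f"~{theta_mas:.0f} mas resolution on a {baseline_m:.0f} m baseline")
# Around 10 mas: far better than a single 4 m scope at the same wavelength,
# but still nowhere near imaging a ship light-hours away.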
 
Yes, but there are physical limits you can't overcome even with perfect optics. It's not just a matter of technology; it's a limit on the fineness with which anything can be observed. The closer you are in space, the less of a problem it is, but the closer you are in space, the closer you are in time, FTL travel or no. Once you put enough distance in space between you and the event to get the look-back time you need (unless you only need to look back a matter of light-minutes), you lose any ability to form an image of the source at a useful resolution, even if you turn out super spy ships with huge detectors on a baseline optimized for interferometry.

Space removes the information.

Now, gross measurements and observations are possible. FTL would be an astrophysicist's dream. To a certain degree, we have just started to do something that comes close: we've gotten useful data on a supernova's light curve by watching the reflection of that light off a nebula that sits at the right distance from the source for the light on the longer path to arrive now, while the direct-path light went by too long ago to be observed with present instrumentation. IIRC, the delta-t was something like 480 years.
 
yet another proposal of this "angular resolution" myth....you keep using this phrase....i do not think it means what you think it means.........
 
The angular resolution in question is "how much of our field of view does one pixel take" for most purposes. Interferometry reduces that quite a bit, when combined with big scopes and large baselines.

The issue of accuracy of aiming is a whole 'nother ball of ick!
 
yet another proposal of this "angular resolution" myth....you keep using this phrase....i do not think it means what you think it means.........

Saundby has it pretty much nailed as far as I can see, and he should know if he works in that field:

I design instrumentation and sensor systems IRL, so I get to run headlong into this (and customers who don't believe in physics, as far as I can tell ;) ) all the time.

But I'm curious - what do you think it means? You keep using the word 'myth'?
 
yet another proposal of this "angular resolution" myth....you keep using this phrase....i do not think it means what you think it means.........

You can only see so much detail. There are limits imposed by what you are seeing with (visible light, radio, electrons, gamma radiation, etc.), and there are limits imposed by the nature of space.

You can build a literally perfect detector and put it above the atmosphere, but you're not going to get enough resolution to see detail below a certain size at a given distance, even without worrying about the effects of the medium through which you're looking. There is a lower limit on the subtended angle at which details can be resolved. Below that limit you can still see emitted radiation, but it might as well be a point source going flicker, flicker for all you can tell about it.

Let's say we've got an object that is a geometric point that emits light. When I point a perfect imaging system at it, I do not get an image of a point, even though my perfect imaging system has no limitations like pixel sizes, and it otherwise does nothing that it shouldn't when forming an image on my perfect detector. Instead of a point, I see a disk surrounded by a number of rings. I can see them very well, thanks to the perfect sensitivity of my detector. There's a disk that's substantially larger than a point that fades out at the edge, then there's a dimmer ring that fades in, then out, and another ring that is outside that one. Since my system is perfect, I can see rings across my entire field of view, each only a few percent of the brightness of the one inside it.

Add a second point source of light. So long as it is well away from the original source, I see it as the same sort of disk-and-rings image as the first point, but in a different place. When they get closer, but don't actually touch, the disks and rings begin to merge together. At some point, they cease to be distinguishable as two separate sources. They are not lined up with each other and they are still separated, but the image no longer shows this. They appear as a single disk-and-ring-structured image with the combined brightness of both. Even my perfect sensor system can't tell the difference. If you want to know anything about the points individually, you can't. If one goes out, I can't tell you which one did. If they move with respect to each other within this distance, it can't be seen. The image stays the same.
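
To put rough numbers on where the two points stop being two: at the Rayleigh limit, the minimum resolvable angle translates into a physical separation at a given range. The 10 m visible-light aperture and the one-light-hour range below are assumed purely for illustration:

Code:
import math

# At the Rayleigh limit, the smallest resolvable angle translates into a
# physical separation at a given range. The aperture, wavelength, and range
# below are assumed purely for illustration.
wavelength_m = 550e-9                    # visible light
aperture_m = 10.0                        # a very generous single mirror
range_m = 299_792_458.0 * 3600           # one light-hour

theta_rad = 1.22 * wavelength_m / aperture_m
min_separation_m = theta_rad * range_m

print(f"Minimum resolvable separation: {min_separation_m / 1000:.0f} km")
# About 72 km. Two ships a few hundred meters apart (say, an attacker
# alongside its prize) collapse into a single blob at that range.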

The resolving power you have is limited by many things, but the biggies are wavelength of the radiation you're capturing and the effective aperture or baseline of your observations. Even if you have an effectively infinite aperture by surrounding the subject of observation with a sensor sphere, the effects of wavelength are still there (I won't get into the other problems you get into or can make for yourself by trying to do this--physics creeps in through all the cracks so when you try to push something to either zero or infinity in one part of the equation, you're going to get physics you never dreamed of squeezing in with infinite force somewhere else.)

Now, this is the best-case scenario: only two perfect point sources of light. When you start imaging something that isn't just two points, but is what is called an "extended object," you get an image that is a mosaic of these disk-and-ring patterns. These patterns interact with each other. Varying frequencies, brightness levels, and so on mean that any object bigger than a point source cannot be observed at the theoretical maximum resolving power of your system. Your ability to see detail diminishes dramatically. You can't tell where things are with respect to each other even at larger angular separations than when you were working with just the two points of light.

Even with perfect optics and a perfect detector with no limit to its resolution (infinitely small pixels, if you will).

So, your ship's captain happens to have the best sensors available, picked out of a snapshot camera found in a rubble heap of the Ancients. 3I science can't tell them from perfect. He stumbles on the site of an attack, the ship still outgassing. He microjumps to one light-hour away, real close, a smidge over 670 million miles. At this distance a 500-foot-long craft subtends an angle of about thirty millionths of an arcsecond. I'm not here to do real work unless you pay me ;) , but I can tell you easily off the top of my head that we're way below the limits of resolution of even a perfect sensor system operating at any wavelength that's going to interact with the subject in a useful way or be emitted by it. The craft could be a mile long and any details of or about the object are going to be lost. If you walk around in space using FTL a bit, you can watch the two craft approach and separate, but you won't see any details.
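
Here's the quick arithmetic for anyone who wants to check it; the 10 m visible-light aperture at the end is just an assumed comparison point, not anything from the scenario:

Code:
import math

LIGHT_SECOND_M = 299_792_458.0
range_m = LIGHT_SECOND_M * 3600          # one light-hour
length_m = 500 * 0.3048                  # a 500-foot craft

subtended_rad = length_m / range_m
subtended_uas = math.degrees(subtended_rad) * 3600 * 1e6   # millionths of an arcsecond

# Compare with the diffraction limit of an assumed 10 m visible-light aperture.
limit_rad = 1.22 * 550e-9 / 10.0

print(f"Ship subtends about {subtended_uas:.0f} millionths of an arcsecond")
print(f"The diffraction limit is ~{limit_rad / subtended_rad:.0f}x wider than the whole ship")
# The entire craft fits hundreds of times over inside one resolution element,
# so any detail on or about it simply is not present in the image.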

You'll have better means of investigating the crime in detail.
 
Just a thought. Air France Flight 447 sent a flurry of automatic distress signals before it went down. Even though they had no contact from the crew, they know exactly when the electrical systems started to fail... What if your transport ship sent an auto distress signal before the EMP attack? I expect that radio signals can travel more than 3 light-seconds (we send signals to the Mars rovers all the time), and if video is included...
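
For scale, here's a quick look at the one-way signal delays involved; apart from the approximate Mars closest-approach distance, the ranges are illustrative assumptions:

Code:
# One-way signal delay at a few ranges. The Mars figure uses the approximate
# closest-approach distance; the other entries are illustrative assumptions.
LIGHT_SPEED_KM_S = 299_792.458

distances_km = {
    "3 light-seconds": LIGHT_SPEED_KM_S * 3,
    "Mars at closest approach (~54.6 million km)": 54.6e6,
    "one light-hour out": LIGHT_SPEED_KM_S * 3600,
}

for label, d_km in distances_km.items():
    delay_s = d_km / LIGHT_SPEED_KM_S
    delay = f"{delay_s:.0f} s" if delay_s < 120 else f"{delay_s / 60:.0f} min"
    print(f"{label}: {delay}")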
 