This program allows you to create objects in a preferred frame of reference, set a speed for them to travel at through that frame (expressed with the speed of light c=1, so half the speed of light is 0.5), and then view the scene as it would appear from any other frame of reference. This is done by imagining a virtual pixel at every location in space, each taking a picture of any part of an object that shares its location in space at a specific time. Simply by changing the synchronisation of the clocks kept at each pixel, you can take "photos" that show the scene as viewed from any frame of reference you wish, and you can also run through many such pictures taken from that frame to play events through as video. (The keys "S" and "D" can be used for stop/start and for changing direction, and on some computers it will be more convenient to use those to control the action than the buttons on screen.)

These pictures give you the "God view" of the content of space, so there is no Doppler effect involved: you see the entire scene as it actually is at a single moment of time for the entire content, though what counts as a single moment depends on which frame of reference you've set it to use for each picture. The time shown underneath is the preferred frame's time. Events are all run by that frame before each picture is taken by the ref-frame camera for the target frame, but in the process of taking the photo, some objects are then moved forwards in time from there while others are moved back in time to get them all into the right places for the photo, all of that dependent on the required pattern of synchronisation.
The maths used to run the program works essentially as follows. When you change frame of reference, the program always starts from the preferred frame's data, then it uses the speed of apparent propagation of simultaneity (which ranges between c and infinity, and is the reciprocal of the speed of the target frame relative to the preferred frame) to work out where objects would be in the preferred frame at the moment in time when they are photographed by a camera working from the target frame of reference. A further adjustment then needs to be made to account for the larger spacing between pixels in the direction of travel that would effectively apply if those pixels were at rest in the chosen frame: because the pixels are actually at rest in the preferred frame, we have to compress the resulting images by applying the chosen frame's length contraction to them.
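To make those two quantities concrete, here is a minimal sketch (in Python, not the program's own code) of the propagation speed and the contraction factor, assuming all speeds are expressed with c=1:

```python
import math

def simultaneity_propagation_speed(v):
    """Apparent propagation speed of simultaneity for a frame moving at speed v.

    Ranges from infinity (v = 0, the preferred frame itself) down towards
    c = 1 as v approaches the speed of light.
    """
    if v == 0:
        return math.inf
    return 1.0 / abs(v)

def length_contraction_factor(v):
    """Contraction applied to the image along the direction of travel."""
    return math.sqrt(1.0 - v * v)
```

For example, a frame moving at half the speed of light gives a propagation speed of 2.0, and a frame at 0.6c gives a contraction factor of 0.8.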
One of the important things to explain here is the speed of apparent propagation of simultaneity. To take "photos" showing things from the perspective of the preferred frame of reference, all pixels must record the scene simultaneously by the preferred frame's clocks, which means the speed of apparent propagation of simultaneity for the preferred frame is infinite: there is no delay before the next pixel along in any direction takes its part of the image. To take "photos" showing things from the perspective of other frames, there must be a delay between some pixels and their neighbours recording their part of the image, and the longest delay that can occur between two pixels taking their part of the snapshot is the time taken for light to make the trip from one of those pixels to the other, so the apparent propagation speed for simultaneity can never be slower than the speed of light.
This program only supports two space dimensions, so frames of reference are selected by setting two speeds: one for the target frame's velocity along the x-axis and the other for its velocity along the y-axis. The program combines these as the components of a single velocity vector to find the actual speed at which the target frame is moving through the preferred frame, and the reciprocal of that speed gives the simultaneity propagation speed (although the program actually works directly from the two components, as that is more useful). In lines perpendicular to the direction the frame is moving in, all pixels must take their part of the "photo" simultaneously, and that holds for all possible frames, but in the direction of travel there are delays, which means we have to run some objects forwards in time in the preferred frame to get them to the place where they will be photographed, while other objects need to be run back in time instead to photograph them correctly, all points then being taken simultaneously by the clocks of the target frame of reference.
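A small sketch of how the two axis speeds might be combined, assuming they are simply the components of one velocity vector as described above (the function name is my own, not the program's):

```python
import math

def frame_velocity(vx, vy):
    """Combine the two axis speeds into the frame's speed and direction.

    Returns (speed, unit_direction). The simultaneity propagation speed is
    the reciprocal of the speed; perpendicular to unit_direction there is
    no delay between neighbouring pixels taking their part of the photo.
    """
    speed = math.hypot(vx, vy)
    if speed == 0:
        return 0.0, (0.0, 0.0)   # preferred frame: no direction of travel
    return speed, (vx / speed, vy / speed)
```

So setting 0.3 on the x-axis and 0.4 on the y-axis gives a frame speed of 0.5, and hence a simultaneity propagation speed of 2.0 along the direction (0.6, 0.8).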
[For anyone who wants to check that the speed of apparent propagation of simultaneity is indeed the reciprocal of the speed of the target frame through the preferred frame, we can imagine two points moving through the preferred frame at the speed of the target frame, both lying on a line pointing in the direction of travel of the target frame, with a midway point one unit from each of them. To synchronise clocks at those two points, we need to be able to send a signal at the speed of light from each of them towards the other and have both signals reach the midway point at the same time. As the points are all moving, the one at the back needs to send its signal out first, because that signal will take longer to reach the midway point. We need to find the distance that the signal from the rear point has travelled through space before the moment when the signal from the leading point is sent back towards it, and if we then divide that distance into the distance between the two locations in space where those two signals are sent out from, we will have the speed of apparent propagation of simultaneity. By checking a range of speeds, you can see that the result of this calculation always matches the reciprocal of the speed of the frame. If you need help working through this, the first step is to work out how far the signal from the leading point travels before it reaches the midway point, which is moving towards it. The signal and the midway point close on each other at the speed of light (c=1) plus the speed of the frame (ignoring any negative sign), so we divide the initial separation of 1 by that total: the result will be used in a moment, so let's call it D1 so that I can refer to it again easily. The next task is to work out how far the signal from the rear point has to travel before it catches up with the midway point, which is racing away from it. To find that distance, we subtract the frame's speed from the speed of light (c=1), then divide that into 1 (again the initial separation).
We will call this distance D2. We can now add D1 and D2 to get the distance between the two points in space where the signals were sent out, which we can call D3, and we can subtract D1 from D2 to find the distance the signal from the rear point had already covered before the signal from the leading point was sent out, which we can call D4 (since c=1, D4 is also the time delay between the two signals being sent). All that's left to do is divide D3 by D4 to get our speed of apparent propagation of simultaneity, or we can divide D4 by D3 to get back to the speed of the frame.]
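The D1-to-D4 construction can be checked numerically. This sketch assumes, as above, that the midway point is one unit from each of the two points (the "initial separation" of 1), and the function name is invented for illustration:

```python
def propagation_speed_via_signals(v):
    """Derive the simultaneity propagation speed for frame speed v (0 < v < 1)
    from the light-signal construction, with c = 1."""
    d1 = 1.0 / (1.0 + v)   # distance the leading point's signal travels
    d2 = 1.0 / (1.0 - v)   # distance the rear point's signal travels
    d3 = d1 + d2           # separation of the two emission locations in space
    d4 = d2 - d1           # head start (distance and time) of the rear signal
    return d3 / d4

# Checking a range of speeds: the result always matches 1/v.
for v in (0.1, 0.25, 0.5, 0.9):
    assert abs(propagation_speed_via_signals(v) - 1.0 / v) < 1e-9
```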
So, having worked out how to calculate the speed of apparent propagation of simultaneity, all we need to do for each point on each object in the list of content of our 2D space is calculate how far to move it through that space in the preferred frame of reference until it hits our line of simultaneity, which is moving towards it (running it either backwards or forwards in time depending on which way it has to go for them to meet). Where the point meets the line is where it must be when it is photographed, and the only other adjustment needed is to apply length contraction to the image using the standard contraction for the speed of the target frame.
The method for calculating where this simultaneity line and an individual point collide involves working out equations for the simultaneity line (when it passes through the origin) and for another line leading from a point needing to be photographed to the place where it will meet the simultaneity line: we transfer the vectors for the movement of the simultaneity line to the point, subtracting from them the vectors for the point's own movement, and we do this because it makes it easier to find the place where those two lines intersect. Once we have the intersection point, we can adjust its position for where the lines would actually meet if both the simultaneity line and the point were moving (rather than just one of them). Having made that adjustment, we apply the length contraction to move the new intersection point closer to the first intersection point (these are both on a line perpendicular to the line of simultaneity, so this is already the right direction for that contraction), and once that's done we have the actual location for the point that's to appear in the photograph, showing it where observers at rest in the target frame would measure it to be. The program simply does this for every point listed as space content, and then the display part of the program runs through all the locations that have been worked out and creates an image out of them. Actual numbers for the locations of these objects are also printed out below in case they're needed. Further notes on the algorithm can be found in comments within the source code.
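The per-point procedure can be sketched as follows. This is a reconstruction in Python rather than the program's actual source, and it may differ in detail from the line-intersection method described above, though the outcome should be the same: each point is moved along its own worldline in the preferred frame until it meets the advancing simultaneity line, and the along-track component of the result is then contracted. All names are my own, and speeds use c=1:

```python
import math

def photo_position(p, u, vx, vy):
    """Where a point appears in a photo taken for the frame moving at (vx, vy).

    p -- the point's preferred-frame position at preferred time 0 (2D tuple)
    u -- the point's velocity through the preferred frame (2D tuple)
    The simultaneity line is taken to pass through the origin.
    """
    v = math.hypot(vx, vy)
    if v == 0:
        return p                      # preferred frame: no adjustment needed
    dx, dy = vx / v, vy / v           # unit vector along the frame's motion
    w = 1.0 / v                       # simultaneity propagation speed
    # Solve d.(p + u*t) = w*t for the preferred-frame time t at which the
    # point meets the line (negative t = the point is run back in time).
    t = (dx * p[0] + dy * p[1]) / (w - (dx * u[0] + dy * u[1]))
    qx, qy = p[0] + u[0] * t, p[1] + u[1] * t
    # Length-contract the component of the result along the direction of
    # motion; the perpendicular component is left unchanged.
    k = (math.sqrt(1.0 - v * v) - 1.0) * (dx * qx + dy * qy)
    return (qx + k * dx, qy + k * dy)
```

As a check against the standard results: a point at rest in the preferred frame at (1, 0), photographed for a frame moving at 0.6 along x, lands at x = 0.8 (length contraction of the preferred frame's content), while a point co-moving with that frame from (1, 0) lands at x = 1.25 (the target frame measures its own content uncontracted, so the preferred-frame spacing appears stretched).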
[For a proper understanding of relativity, see my page on the subject. Understanding Lorentz Ether Theory (LET) is crucial for getting to the truth behind Special Relativity, but it is rarely taught anywhere.]