I am trying to research (or find prior research on) the idea of a similarity between "physical" space–time behavior in relativity and the "computational" space–time trade-off.
As you probably know, length contraction is the phenomenon that a moving object's length is measured to be shorter than its proper length, which is the length as measured in the object's own rest frame. The contraction occurs only along the direction of motion. For ordinary objects the effect is negligible at everyday speeds and can be ignored for all practical purposes; it becomes significant only as the object's speed relative to the observer approaches the speed of light.
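For reference, the standard relation between the measured length $L$ and the proper length $L_0$ involves the Lorentz factor:

$$L = L_0 \sqrt{1 - \frac{v^2}{c^2}} = \frac{L_0}{\gamma},$$

where $v$ is the object's speed relative to the observer and $c$ is the speed of light.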
A space–time (or time–memory) trade-off in computer science is a situation where an algorithm or program trades increased space usage for decreased time. Here, space refers to the data storage consumed in performing a given task (RAM, disk, etc.), and time refers to the time consumed in performing it (computation time or response time).
The analogy I see is the following:
As seen by a stationary observer, the moving object's length is contracted. At the same time, the stationary observer's clock accumulates more elapsed time than the moving object's clock: the observer ages faster than the moving object.
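The complementary time-dilation relation applies the same Lorentz factor to elapsed time: a proper time interval $\Delta t_0$ on the moving clock corresponds to a longer interval

$$\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}} = \gamma \, \Delta t_0$$

on the stationary observer's clock.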
In computational complexity theory, algorithms that use more space typically require less time. In the extreme case, an algorithm that stores a precomputed table of all possible answers needs only O(1) time per query.
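To make that concrete, here is a minimal Python sketch of the lookup-table idea (the popcount example and the function names are illustrative choices of mine, not taken from any particular source): the loop-based version uses almost no extra space but does work proportional to the number of set bits, while the table-based version spends a 256-entry table of memory to answer each query with a single O(1) lookup.

```python
def popcount_loop(x: int) -> int:
    """Count set bits by iterating: little space, more time."""
    count = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        count += 1
    return count

# Precompute the answer for every possible byte value:
# more space (a 256-entry table), less time per query.
POPCOUNT_TABLE = [popcount_loop(b) for b in range(256)]

def popcount_table(x: int) -> int:
    """Count set bits in a byte with one O(1) table lookup."""
    return POPCOUNT_TABLE[x]

if __name__ == "__main__":
    assert all(popcount_loop(b) == popcount_table(b) for b in range(256))
    print(popcount_table(0b10110101))  # -> 5
```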
Such a strong parallel between physics and computation could, I think, lend support to the simulation hypothesis.
Any ideas are appreciated.