I always thought that we just aren’t supposed to know.

All the basic laws of physics are very simple, and almost anyone can understand them. Everything is made of tiny atoms that look like mini solar systems, and each of these atoms has an absolute mass, speed and location, just like the objects they make up. We can push stuff around using forces, hindered by friction, reaction and gravity.

All of these concepts are simple.

Too simple…

When you start thinking at scales smaller than electrons and masses heavier than suns, things become a lot more complicated. At these immensely big or small sizes, the normal laws are no longer accurate, and they give way to curved four-dimensional spacetime and the abstractions of quantum mechanics.

Location, mass, speed and even time become relative. Gravity doesn’t exist anymore: straight lines become bent through time and space, and in free fall we simply follow these “straight” lines. Forget location and speed: all hail the waves of probability.

And worst of all: physicists are not at all sure about these theories. There are a thousand and one theories that try to explain the strange phenomena we see at these scales, and they are not compatible. Quantum mechanics, which explains physics at the subatomic scale, directly contradicts general relativity, which gives accurate predictions about stars and black holes.

Seeing this complex mess, I thought we were just not meant to know.

I thought we were meant to stick to our little basic laws of physics, which are both very easy to understand and accurate enough for anything we would ever need to calculate.

But humans always want to push the limits. They started thinking and experimenting at mind-blowing scales and had to come up with very far-fetched theories to keep up with the awkward results of their experiments.


Today I changed my mind.

I was reading about the double-slit experiment, yet another experiment at a very small scale (electrons) that produced seemingly inexplicable results. To keep this article short, I will explain the experiment only briefly. If you want to learn more, feel free to click the link above, which leads to a very long but complete article on Wikipedia.

In the double-slit experiment, scientists fire a stream of electrons (or photons/atoms/molecules) through a wall with two small openings (called slits) onto a screen that registers where each electron hits. Plotting all the hits forms a pattern, a kind of image.

You would normally expect to see two spots where the electrons hit, since electrons fly in a straight line. Such a pattern I will call a linear pattern, for lack of a better name. You can see it below:

[Image: Linear pattern]

But the electrons do not form a linear pattern. They interfere with each other (bouncing against each other) when going through the slits and behave as waves of water would when passing through two slits. They form an interference pattern, as you can see in this image:

[Image: Double-slit interference pattern]

That is not very strange yet, just a lot of new words for a simple thing. However, the experiment was repeated, firing electrons one by one so they could not bounce against each other. Surprisingly, the result was still an interference pattern.

Quantum physics tries to explain this by stating that the electron becomes a wave of probability (all possible locations where the electron could be, each with its own probability), so the electron bounces against different possibilities of itself, thus forming an interference pattern.
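The difference between the two patterns can be sketched numerically. In the toy script below, all numbers (wavelength, slit separation, screen distance) are made up for illustration: adding the two slits’ complex amplitudes before squaring gives the interference pattern, while squaring them separately, as if the path were known, gives the flat “linear” behaviour.

```python
import cmath
import math

WAVELENGTH = 1.0        # arbitrary units, hypothetical setup
SLIT_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0

def amplitude(slit_y, screen_y):
    """Complex amplitude for the path from one slit to a screen point."""
    path = math.hypot(SCREEN_DISTANCE, screen_y - slit_y)
    return cmath.exp(2j * math.pi * path / WAVELENGTH)

def interference_intensity(screen_y):
    """Both paths possible: amplitudes add BEFORE squaring, giving fringes."""
    a = amplitude(+SLIT_SEPARATION / 2, screen_y)
    b = amplitude(-SLIT_SEPARATION / 2, screen_y)
    return abs(a + b) ** 2

def which_path_intensity(screen_y):
    """Path known (detector data kept): probabilities add, so no fringes."""
    a = amplitude(+SLIT_SEPARATION / 2, screen_y)
    b = amplitude(-SLIT_SEPARATION / 2, screen_y)
    return abs(a) ** 2 + abs(b) ** 2
```

The which-path intensity is the same everywhere on the screen, while the interference intensity oscillates between bright and dark bands as the two path lengths move in and out of step.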


Of course, people tried to disprove that. They repeated the same experiment with detectors at each slit, recording which slit each electron went through. That way they could prove that the electron was at exactly one location, and with no other possible locations, it could not possibly bounce against itself.

And indeed, in that experiment the electron did not bounce against itself. The result was a linear pattern.

But when the detectors’ data was carefully erased before anyone looked at the data or the result, the result was an interference pattern again! No matter how much time passed between the experiment and the decision to erase the data or not, the result was a linear pattern when the data was kept and an interference pattern when it was erased.

So a decision made one day determines the outcome of an experiment that already happened several days before.

Actually, quantum physics’ theory about waves of probability still holds up. When there is data about which slit the electron went through, there is only one possible location where the electron could have been, so it could not have bounced against other possible versions of itself, and the result should have been a linear pattern. True so far.

On the other hand, if you don’t have the detector’s data, or “evidence”, about the location of the electron at the time of the experiment, there were multiple possible locations for the electron to have been, and the electron could have interfered with itself. Therefore the experiment should have produced an interference pattern. Which it did.

So the theory of quantum mechanics creates correct predictions.

But it just doesn’t feel right, does it?


The Virtual Reality theory explains the same strangeness in quite a different way. It states that our universe is a simulation, like in the movie “The Matrix”. That may sound quite ridiculous but it’s an idea well worth considering.

Virtual Reality theory doesn’t say anything about a war of robots against humans, or about our universe being used as a ‘prison for the mind’. All it says is that something, maybe a computer-like thing in another universe, is calculating and simulating our universe, and that the simulation runs on the simple, basic laws of physics we learn in high school.

Our entire universe is extremely huge, and an electron is incredibly small, so simulating our universe would take an incredible amount of computing power. But computing power is not free, not even in another universe. To calculate something, you need something else, and the heavier or larger the calculation, the more of that “else” you need.

That would be quite a problem: it is beyond our imagination what kind of processing power and memory a computer would need to simulate our entire universe.

But this problem can often be reduced by replacing many simple calculations with one complex calculation that has roughly the same effect but requires less computing power. For example, it would not be necessary to calculate every single photon in a ray of light; the whole ray could be calculated at once with a rather complex ray function. That would, at the cost of some complexity, soak up far less processing power.
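As a down-to-earth programming analogy (not actual physics: the absorption model here is invented purely for illustration), here is how a thousand identical per-step updates can be replaced by one closed-form expression with the same result:

```python
def transmitted_naive(intensity, absorption, steps):
    """Simulate a ray step by step: one multiplication per step,
    so the cost grows with the number of steps."""
    for _ in range(steps):
        intensity *= (1.0 - absorption)
    return intensity

def transmitted_closed_form(intensity, absorption, steps):
    """One slightly more complex formula replaces all the per-step
    work: constant cost, (almost bit-for-bit) identical result."""
    return intensity * (1.0 - absorption) ** steps
```

Both return the same transmitted intensity, but the second does in a single exponentiation what the first does in a loop: exactly the trade described above, a bit more complexity for far less processing power.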

This is the case in the double-slit experiment. When a stream of electrons is fired, the electrons are not calculated one by one. Instead, the simulation waits until it becomes necessary to calculate them, and then calculates all of them at once with one wave function.

But when would it stop waiting and execute that wave function?

That could be once every x nanoseconds. Or picoseconds. Or an even smaller interval. But that would make the electrons arrive in groups, and no experiment has ever measured electrons or photons in a ray arriving in groups.

Instead, the simulation would only calculate the electrons when it is necessary to calculate them: when anyone looks at either the data or the result.

And that is how your decisions today can still influence the results of an experiment yesterday: the experiment wasn’t actually calculated yesterday. It was calculated the same moment that you looked at the data, even if the experiment actually happened the day before.
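In programming terms, this is plain lazy evaluation. A minimal sketch (all names hypothetical): the “experiment” stores only a recipe for its result, and nothing is computed until something observes it, however much later that happens.

```python
class LazyValue:
    """A value that is computed only when first observed,
    no matter how long ago it was created."""

    def __init__(self, compute):
        self._compute = compute   # a recipe, not a result
        self._result = None
        self.evaluated = False

    def observe(self):
        # The actual calculation happens here, at look-time.
        if not self.evaluated:
            self._result = self._compute()
            self.evaluated = True
        return self._result


# "Run the experiment": nothing is calculated yet.
experiment = LazyValue(lambda: "pattern decided at observation time")
# ...days may pass...
# Only this call triggers the calculation:
result = experiment.observe()
```

Before `observe()` is called, the outcome simply does not exist anywhere in memory; the moment of observation is the moment of calculation, just as the delayed-erasure results suggest.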

The programmers of our universe never expected that anyone would ever perform and examine something so small and so far-fetched. Electrons and photons were supposed to be fired in beams, never one by one. And they were right: such a far-fetched thing never happened, until evolution made humans capable of building machines to do it for them.


But simulations have another problem: every “variable” in a simulation requires a certain amount of memory. The larger a variable can grow, the more memory it requires. That is why programmers put a limit on how large a variable can be. Just like processing power, memory is not free, whatever universe you’re in.

Speed is the clearest example of such a variable in our universe. Not a single object, wave or particle can go faster than light. Never can the “speed” variable exceed the speed of light.

The faster an object goes, the more space has to be checked for collisions. If an object went infinitely fast, an infinite amount of space would have to be checked for obstacles, and worst of all, that infinite amount of space would have to be checked instantly.

In an objective reality that is no problem, but a simulation would get stuck in that never-ending calculation, in other words an infinite loop, and crash.
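This argument can be made concrete with a back-of-the-envelope sketch (grid size and time step are made-up parameters): the number of grid cells a collision check must inspect each tick grows linearly with speed, so unbounded speed means unbounded work per tick.

```python
import math

def cells_to_check(speed, dt, cell_size):
    """Cells swept through in one tick: a moving object covers a path
    of length speed * dt, and the collision check must inspect every
    cell along that path, so the work per tick scales with speed."""
    return math.ceil(speed * dt / cell_size)
```

Double the speed and you double the work; let the speed grow without bound and the per-tick check never finishes.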

So there had to be an upper limit for speed. And that became the speed of light. Nowhere can any object or any particle go faster than the speed of light.
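In code, such a hard-limited variable is just saturating arithmetic. A minimal sketch (the update rule is invented for illustration):

```python
C = 299_792_458  # speed of light in m/s: the hard-coded cap

def accelerate(speed, delta):
    """Saturating addition: any attempt to push the speed variable
    past C is simply clipped at C, so it can never exceed the limit."""
    return min(speed + delta, C)
```

This is the crude version of a speed limit; the next paragraphs describe why a smoother mechanism is preferable.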

Why else would light have an immense speed and not just be infinitely fast?

But that creates a problem. If something keeps accelerating, it will eventually reach the speed of light, but it can’t go any faster. There are two solutions: make it impossible to accelerate once you reach the speed of light, or gradually make it harder and harder to accelerate, so nothing can ever actually reach the speed of light.

The second option would, though far more complex, be a lot smoother. If programmed correctly, a modified time frame and mass would make it impossible for the accelerating object to tell that its acceleration is slowing down. And that is exactly what the special theory of relativity tells us.
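That smoother second option behaves like relativistic velocity addition. The sketch below uses the standard one-dimensional formula (u + v) / (1 + uv/c²): each boost gets less effective the closer you are to c, so the speed variable approaches the limit but never reaches it.

```python
C = 299_792_458.0  # speed of light in m/s

def add_velocity(u, v):
    """Combine two collinear velocities relativistically:
    (u + v) / (1 + u*v/c^2) stays below c for any u, v < c."""
    return (u + v) / (1.0 + u * v / C**2)

# Apply the same large boost over and over: the speed creeps
# toward c but stays below it.
speed = 0.0
for _ in range(30):
    speed = add_velocity(speed, 0.5 * C)
assert speed < C
```

Boosting by 0.5c twice gives 0.8c, not c; no finite number of boosts ever gets you there, which is exactly the “gradually harder to accelerate” behaviour.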


Scientists have always been bamboozled by the fact that Einstein’s theory of relativity and quantum mechanics are simply not compatible. While quantum mechanics gives correct calculations and predictions about everything at subatomic scale, like the double-slit experiment, it directly contradicts the theory of relativity, which produces correct calculations about everything at the scale of the speed of light and supermassive black holes.

Both theories are perfectly testable, and every calculation or prediction they have ever made has been proven right and is used daily in many applications. Yet they contradict each other almost literally: if quantum physics is true, the theory of relativity should be false, and vice versa.

Of course, people have tried to combine them, but nobody has really succeeded so far. The closest theory so far is string theory, which says that our universe does not consist of just the three dimensions up-down, left-right and forward-backward, but of tiny strings vibrating in an eleven-dimensional spacetime. The “exceptionally simple” runner-up is built on a mathematical structure with no fewer than 248 dimensions. And none of these theories is even close to being proven.

It just seems like these two theories are not supposed to be combined.

Looking at Virtual Reality theory again, these are two different tricks that address unrelated issues. The only thing they have in common is that both reduce the amount of processing power required to calculate our universe; combining simple calculations into one more complex function has nothing to do with preventing variables from exceeding their limits.

So these two theories just shouldn’t be combined.


All these complex theories produce accurate calculations, but that does not make them correct. Accurate predictions do not ‘prove’ a far-fetched story about what our universe actually is (*cough* bent time-space *cough* probability waves *cough*).

There is no way to fully prove Virtual Reality theory, because it shouldn’t be possible to tell whether your universe is “real” – whatever that may be – or simulated. Or at least, it shouldn’t be possible. A simulation can always contain imperfections.

Think about it: does it seem likely that black holes, invisible holes that suck up everything they cross, including light, and then evaporate into nothing until all is gone, were ever meant to exist?

They are just an inevitable consequence of the limitations of our universe. If the speed of light were infinite, black holes would not exist.

In popular terms, black holes are a ‘glitch’ in our universe: an unintended side effect of some of the program’s features.

Now, if the speed of light turned out to be a power of two relative to the size of a proton, neutron, electron, photon, quark or something else at those scales, that would be strong evidence for my theory.

Unfortunately, there is little data about the size of that stuff, so you’ll just have to read my words and make up your own mind.

See for yourself.

