The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.
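To put toy numbers on that probability step (a minimal sketch; the simulation count is made up for illustration):

```python
# If there is 1 real world and N simulated ones, and you have no way to tell
# which kind of world you are in, simple counting says:
def probability_simulated(num_simulations: int) -> float:
    """Chance of being in a simulation, assuming every world is equally likely."""
    return num_simulations / (num_simulations + 1)

print(probability_simulated(1))          # 0.5
print(probability_simulated(1_000_000))  # ~0.999999 -- "a zillion" makes it near-certain
```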

But if the real world sets up a simulated world which more or less perfectly simulates itself, the processing required for a mirror sim-within-a-sim would be at least double what the first simulation needs, no? How could the infinitely recursive simulations even begin to be set up unless the real meat people are constantly adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.

Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers for a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.
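A toy illustration of that buck-passing (the overhead number is invented; the point is only that every nested 1:1 sim’s work ultimately lands on the real hardware):

```python
# Each level of nested 1:1 simulation has to be computed by the real-world
# hardware, plus whatever bookkeeping the simulation layer itself costs.
def total_real_cost(base_cost: float, depth: int, overhead: float = 1.1) -> float:
    """Real-world compute needed for a chain of `depth` nested 1:1 simulations.

    base_cost -- cost of simulating one universe at full fidelity
    overhead  -- per-layer cost multiplier for the simulation machinery (>= 1)
    """
    total = 0.0
    for level in range(depth + 1):
        # A sim at nesting level k is itself being simulated by every layer
        # above it, so its cost compounds with each layer's overhead.
        total += base_cost * overhead ** level
    return total

print(total_real_cost(1.0, depth=0))  # 1.0   -- just the first simulation
print(total_real_cost(1.0, depth=3))  # ~4.64 -- each nested layer adds a whole universe
```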

And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.

What am I not getting about this?

Cheers!

  • Nibodhika@lemmy.world · 4 months ago

    You are correct, but you missed one important point, or rather made one important wrong assumption: you don’t simulate a 1:1 version of your own universe.

    It’s impossible to simulate a universe the size of your own, but you can simulate smaller universes, or, more accurately, simpler universes. Think of video games: you don’t need to simulate everything, you just simulate some things, while the rest is a static image until you get close. The cool thing about this hypothetical scenario is that you can think about how a simulated universe might differ from a real one, i.e. what shortcuts we could take to make our computers able to simulate a complex universe (even if smaller than ours).
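    A rough sketch of that video-game style shortcut (the thresholds and labels are invented; the point is just that detail is only paid for near an observer):

    ```python
    # Level-of-detail idea: only spend compute where someone is looking;
    # everything else stays a cheap approximation until it is approached.
    def simulation_detail(distance_to_observer: float) -> str:
        """Pick how much effort to spend on a region of the simulated universe."""
        if distance_to_observer < 10:        # thresholds are arbitrary, for illustration
            return "full particle physics"
        elif distance_to_observer < 1_000:
            return "coarse approximation"
        else:
            return "static backdrop"

    for d in (1, 500, 1_000_000):
        print(d, "->", simulation_detail(d))
    ```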

    For starters, you don’t simulate everything. Instead of every particle being a particle, which would be prohibitively expensive, particles smaller than a certain size don’t really exist; instead you have a function that tells you where they are when you need them. For example, simulating every electron would be a lot of work, but if, instead of simulating them, you can run a function that tells you where they are at a given frame of the simulation, you can act accordingly without having to actually simulate them. This would cause weird behaviors inside the simulation, such as electrons popping in and out of existence and teleporting across gaps smaller than the radius of your spawn_electron function, which in turn would impose a limit on the size of transistors inside that universe. It would also mean that when you fire electrons through a double slit they interact with one another, because they’re just a function until they hit something, but if you try to measure which slit they go through they’re forced to collapse before that, and so they don’t interact with one another. But that’s all okay, because you care about macro stuff (otherwise you wouldn’t be simulating an entire universe).
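    A minimal sketch of what such a spawn_electron function might look like (the signature and the hashing trick are entirely invented; the real point is that positions are derived on demand instead of being stored and stepped):

    ```python
    import hashlib

    # Instead of tracking every electron, derive a position on demand from the
    # particle's id and the current simulation frame. Nothing is computed until asked.
    def spawn_electron(particle_id: int, frame: int) -> tuple[float, float, float]:
        """Deterministic pseudo-position: the same (id, frame) always gives the same answer."""
        digest = hashlib.sha256(f"{particle_id}:{frame}".encode()).digest()
        # Map three 8-byte chunks of the hash onto coordinates in [0, 1).
        return tuple(
            int.from_bytes(digest[i:i + 8], "big") / 2**64 for i in range(0, 24, 8)
        )

    # The electron's position only "exists" at the moment the simulation queries it.
    print(spawn_electron(42, frame=1000))
    ```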

    Another interesting thing is that you probably have several computers working on it, and you don’t really want loading screens or anything like that, so instead you impose a maximum speed inside the simulation; that way, whenever something moves from one area of the simulation to the next, it takes enough time for everything to be “ready”. It helps if you simulate a universe where gravity is not strong enough to cause a crunch (or your computers will all freeze trying to process it). So your simulated universe might have large empty spaces that don’t need much computational power, and because traveling through them takes long enough, it’s easy to sync the transition from one server to the next. If, on the other hand, the maximum speed were infinite, you could have objects teleporting from one server to the next, causing a freeze on those two that would leave them out of sync with the rest.
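    A rough sketch of why a hard speed limit makes those server handoffs easy to schedule (the numbers are invented; the point is that a speed cap guarantees a minimum lead time before anything can cross a boundary):

    ```python
    # With a maximum speed, the moment an object heads toward a region owned by
    # another server you know the earliest it can possibly arrive, so the
    # neighbouring server can be prepared well in advance -- no loading screens.
    C_MAX = 299_792_458.0  # the simulation's hard speed limit, in m/s

    def minimum_lead_time(distance_to_boundary_m: float) -> float:
        """Guaranteed minimum seconds before an object can cross into the next server."""
        return distance_to_boundary_m / C_MAX

    # An object 4 light-years from the boundary gives the next server ~4 years' notice.
    LIGHT_YEAR_M = 9.461e15
    print(minimum_lead_time(4 * LIGHT_YEAR_M) / (3600 * 24 * 365), "years of lead time")
    ```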

    And that’s the cool thing about thinking through how a simulated universe would work: our universe is weird as fuck, and a lot of that weirdness looks like the kind of weirdness that would be introduced by someone trying to run their simulation more cheaply.