Is our universe a simulation?

Slashdot features an article today about a rebound of the argument about the likelihood that we live in a computer simulation. It made me react because I remember having used the very opposite argument to show that our universe being ruled by a god is less likely than our universe not being ruled by one (in my definition of a god, this is very close to the "we live in a computer simulation" idea).

To keep it short, Bostrom assumes that at least one of these propositions is true:

  1. the human species is very likely to go extinct before reaching a “posthuman” stage;
  2. any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
  3. we are almost certainly living in a computer simulation.

He then reaches the conclusion that we are more likely to live in a subworld simulation of the evolutionary history of a species which probably emerged from a very similar environment than we are to live in the real thing.

I remember pondering the probability of our world being bootstrapped by a superpower vs. not being bootstrapped. Basically, for an intelligent species to emerge from nothing, you have to wait a long time, and the probability is low: let's say N after a given time. But then, if the universe is not restricted to a single environment, you multiply your chances by the number of environments E produced by the universe (if our visible universe is a good image of the real universe, then this number is quite huge).

Then you have to take into account the probability that such an intelligent species can transition to a species able to run simulations of the universe large enough for the simulation to be perceived as the real thing by its inhabitants. You also have to factor in the probability that the energy required to run such a simulation is available, and that the simulation is interesting enough to be run in a way compatible with what we can observe.

To answer this, we should seriously think about what could be interesting enough for us to spend a large amount of computing power running a simulation of our world: predicting the future? Having infinite TV programs of primitives doing primitive things (remember that we would be in a post-human state by then)? Being a placeholder for post-human brains to play in?

I don’t know the processing power currently spent on video games vs. universe simulation vs. small-scale life simulation vs. spying on people, but my guess is that, if we’re in a post-human state, then people will crave fun, because everything that could be understood will have been understood, and predicting the future in a post-human era would require simulating a post-human society. So either we are a simulation for a TV network, or we are in a video game. But I don’t think a post-human civilization would want to spend energy simulating many human worlds, unless it’s for the fun of playing gods in a society of the past.

Now, factor all that in: E.N is the number of human-level intelligent societies in a big universe after a given time. Multiply that by the probability P that such a society can transition to a post-human society with enough computing power and energy to run simulations. Then multiply by the number S of simulations that such a post-human society would run with the characteristics of a subpart of the universe and without any visible intervention from almighty actors. The question is thus: is E.N.P.S > E.N, i.e., is P.S > 1? I’m not so sure.
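The comparison above can be sketched numerically. All the values below are purely illustrative placeholders (not estimates of anything); the point is only that the E.N factor cancels out, leaving P.S vs. 1:

```python
# Back-of-envelope version of the argument above.
# Every number here is an illustrative assumption, not an estimate.

E = 1e22   # number of environments produced by the universe
N = 1e-12  # probability an intelligent species emerges in one environment
P = 1e-3   # probability such a species reaches a simulation-capable post-human stage
S = 100.0  # number of simulations each post-human society runs

real = E * N              # expected number of "real" human-level societies
simulated = real * P * S  # expected number of simulated ones

# The E * N factor cancels, so only P * S vs. 1 matters:
assert (simulated > real) == (P * S > 1)
print(P * S)  # with these placeholder values, P.S = 0.1 < 1, so "real" wins
```

With any other choice of E and N the conclusion is identical, which is why the question reduces to whether P.S exceeds 1.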

Actually, my personal conclusion is: since we have not observed god-like people making fun of us, I would vote that we’re living in a non-simulated world. Because seriously, if you could play god in a simulation of the Earth in the 20th century, wouldn’t you? Or would you rather just watch primitives live their boring lives?

Talk at Paris Machine Learning Meetup

I’ll give a talk at the fantastic Meetup organized by Igor Carron and Franck Bardol (link to the meetup here) about NCISC and the demo we are setting up on Wikipedia data.

It’s tomorrow, Wednesday the 12th in Paris.

I’ll present NCISC, with some results on a few standard datasets (Reuters, 20 Newsgroups, Ohsumed) as well as a comparison with Deep-Learning-inspired methods. I’ll also present the pipeline we’ve set up for analyzing Wikipedia from the text, links and category perspectives.

There’s a demo (with relatively old and buggy data) here: http://demowiki.exensa.net/

The talk will also be live-streamed with a Google+ Hangout, and the slides will be available here.