https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix?CMP=oth_b-aplnews_d-1
“Recognizing we live in a simulation is game-changing, like Copernicus realizing Earth was not the center of the universe”
http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot?utm_campaign=theverge&utm_content=chorus&utm_medium=social&utm_source=twitter
http://motherboard.vice.com/read/we-dont-live-in-a-simulation?utm_source=mbtwitter
“In a nutshell, what could this simulation be made of? If the reality we see were a simulation, we should assume that the simulator is made of altogether different stuff that, by definition, we could not even conceive (it should be made of something completely different from everything we meet in our world). While the notion that there is a base reality and additional levels of reality is both appealing and enthralling, we have evidence of only one level of reality. The world we live in is just made of objects.”
I’m a little hesitant to publish this.. because this guy doesn’t seem to really know what he’s talking about.. or is just really lacking in imagination. To me it’s just more interesting that people are trying to refute Elon’s statement. There is of course no way to prove or disprove Elon Musk’s statement. I can claim there are unicorns living outside the Universe, but since we can’t get outside of our Universe there is no way to prove/disprove it. The author of this article doesn’t seem to understand the nature of simulation, or the potential power of simulation. My understanding of his argument is basically, “The apple that looks so convincing to Musk on a VR computer screen would be utterly disappointing for a butterfly looking for a home or for a robin looking for a worm.” But.. that doesn’t really make any sense. The worm and/or robin would be simulated as well.. as would the inside of the apple.
Here is an example of what Elon is saying:
(This is not a real house.. just a CGI rendering in a new game engine)
I’m currently reading a chapter in Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies” about giving AI values. While on the surface this may seem straightforward, it is anything but. Aside from the obvious questions like “whose values?”.. where do you even start when it comes to programming them? One of Bostrom’s favorite illustrations of the dangers of AI is that if you instruct an AI to “make people happy” it will very quickly recognize that the source of happiness is a chemical process in your brain, and you’ll end up with wires sticking out of your skull and a silly grin on your face. What is happiness anyway? Love? Joy? If poets and authors can’t fully grasp these things, how can a programmer possibly hope to build an AI that can “maximize” them? This is why, more often than not, AI is seen as a threat. How could it possibly be expected to understand and accept humanity? Bostrom poses some interesting possible solutions. One jumped out at me the other day when I read about Elon Musk’s belief that we most likely live in a simulation. One of the possible ways of instilling values in AI is through simulation. Basically you make millions, if not billions, of versions of the AI and, through a selection process.. or evolutionary process.. pick the AI that exhibit the traits you want to keep.. and toss the rest.
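Structurally, that selection idea looks a lot like a plain genetic algorithm. Here’s a minimal sketch in Python, assuming we had some hypothetical alignment_score() oracle that could grade how “desirable” an agent’s traits are (a huge assumption.. and exactly the part Bostrom says is hard):

```python
import random

POPULATION_SIZE = 1000   # "millions, if not billions" in practice
GENERATIONS = 100
KEEP_FRACTION = 0.1      # keep the top 10%, toss the rest

def random_agent():
    """Stand-in for an AI with randomly initialized values/parameters."""
    return [random.uniform(-1, 1) for _ in range(16)]

def alignment_score(agent):
    """Hypothetical oracle that grades how well an agent's behavior matches
    the traits we want to keep. Defining this is the actual hard problem."""
    return -sum(x * x for x in agent)  # placeholder objective

def mutate(agent):
    """Copy an agent with small random tweaks to its values."""
    return [x + random.gauss(0, 0.05) for x in agent]

population = [random_agent() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Rank every simulated agent by how desirable its traits are...
    population.sort(key=alignment_score, reverse=True)
    survivors = population[: int(POPULATION_SIZE * KEEP_FRACTION)]
    # ...keep the best, toss the rest, and refill with mutated copies.
    population = [mutate(random.choice(survivors)) for _ in range(POPULATION_SIZE)]

best = max(population, key=alignment_score)
```

Of course, all the difficulty is hiding inside alignment_score().. that one function is the “whose values?” problem all over again. But the simulate-select-discard loop itself is no more exotic than this.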
The obvious question to Elon Musk about the simulation we are living in would be “Why?”. Who is running the simulation and what is the purpose? Well, if you view people as simulated bits of software.. perhaps those with desired traits are “harvested” while the others are tossed. Strangely or not so strangely, this falls pretty close to a Christian perspective: that there are beings outside of our reality/simulation, and when our software/hardware ends here.. it continues on, either in a “better” reality or.. worse.
These thoughts apparently aren’t just mine, as they seem to be echoed in this book: Your Digital Afterlives: Computational Theories of Life after Death (Palgrave Frontiers in Philosophy of Religion)
http://motherboard.vice.com/read/elon-musk-simulated-universe-hypothesis
“The strongest argument for us being in a simulation, probably being in a simulation is the following: 40 years ago, we had pong, two rectangles and a dot,” Musk said. “That is what games were. Now 40 years later we have photorealistic 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, augmented reality, if you assume any rate of improvement at all, the games will become indistinguishable from reality.”
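Musk’s argument is really just compounding improvement. Here’s a toy back-of-the-envelope sketch of it, with completely made-up numbers for “fidelity” and for the bar where a simulation becomes indistinguishable from reality.. the point is only that any steady rate of improvement eventually closes the gap:

```python
# Toy extrapolation of Musk's argument: at ANY steady rate of improvement,
# simulated fidelity eventually crosses whatever bar counts as "reality".
# All numbers here are illustrative, not measurements.

pong_fidelity = 1.0        # arbitrary units: two rectangles and a dot
reality_fidelity = 1e12    # assumed bar for "indistinguishable from reality"

for annual_improvement in (0.10, 0.25, 0.50):   # 10%, 25%, 50% per year
    fidelity, years = pong_fidelity, 0
    while fidelity < reality_fidelity:
        fidelity *= 1 + annual_improvement
        years += 1
    print(f"{annual_improvement:.0%} per year -> ~{years} years from Pong to 'reality'")
```

Even at a sluggish 10% a year the gap closes eventually.. which is really all “if you assume any rate of improvement at all” is claiming.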
I would start with energy. With obscene amounts of energy a lot of other problems get solved relatively easily. It takes a lot of energy to create fresh water from saltwater… but with excess energy, why not? So, AI is tasked with creating safer nuclear or more efficient solar (or something we can’t imagine). It builds an army of robots to manage the construction. Now we have gobs of electricity flowing everywhere. Maybe it makes better batteries to store and move energy around without transmission loss. Sure, why not?
Now, we use all the extra electricity that isn’t being used to run cat video servers to pump tons of fresh water out of the oceans. We could turn deserts into farms.. or just massive food forests. More armies of robots could be used to tend and gather the crops. Now we’ve got more food and fresh water than we know what to do with.. and free electricity. Things are looking good. But people are still dying of disease etc. So, we are going to need some medical nanobots that are capable of maintaining human bodies. This isn’t that far fetched. Unfortunately, people are still dying in accidents.. which are now more tragic because death is becoming less inevitable since we’ve wiped out disease and starvation. So, there are a few options here. We could grow surrogate bodies in labs and then upload our consciousness to them. Or, we could upgrade human bodies a bit.. add some metal/plastic.. some emergency nano-repairbots. If things get really rough (asteroids?) we may need to look at putting backup copies of everything in the “cloud”.
I’m leaving something out here though. Every time we see people coming to the rescue, something always gets in the way.. other people. If suddenly infrastructure became unnecessary.. a lot of governments would very quickly lose power. If I have all the food/water/power I need (imagine everyone “off the grid”).. why would I need a powerful central government watching out for me? Transportation? Nah, I’ve got my electric self-repairing off-road vehicle for that (heck, it might even fly). That just leaves.. policing. Will people still steal when they have everything they need? Yeah, probably. Will countries still find a reason to fight? Yeah, it’s hard to believe religious extremism and territorial disputes are just going to vanish overnight after going on for thousands of years. Is there a superintelligence solution to this? It’s not hard to imagine a solution.. but not a good one. The scenarios here range from “can my superintelligence beat yours” to a police state where a strong AI controls every aspect of human life.. making sure everyone follows every single law. There may be some complex middle ground where lawbreakers are arrested and then a jury of “real” people shows up via Skype (or something) to deliberate. The point, though, is that when it comes to people being bad.. there is no clear technological solution. We should definitely try to get to the point where that is the only real problem we are dealing with, though.
In general, bad things happen when things lose their value. Humanity in particular. When a culture can devalue a certain segment of society to the point that their non-existence is more valuable than their existence… really, really bad things happen (holocausts). Or on a simpler level, if my desire for what you have is more valuable to me than your happiness.. then I am going to be inclined to take what you have. So, where does value come from? Is value intrinsic or is it derived from some external source?
Babies, for instance: a human baby by itself has extremely low value. As horrible as this sounds, it will cease to exist if left on its own. However, to its parents (most of the time) a baby can be the most valuable thing in the Universe. They would do anything for this completely helpless creature. I suppose you could make the argument that this is the product of an evolutionary process. Parents that place this high value on their offspring will care for their children and therefore propagate. If a child has a low value, it’s unlikely to continue on.
The question I am getting at is.. can this be applied to God? Supposing we “create” an AI god.. will it come to view us as children? We provide it no real value.. like a baby to its parents.. but will it value us anyway? I suppose it could value us like we value our ancestors.. perhaps a nice zoo or museum? Who knows, maybe we are already in a nice enclosure. One of those nice enclosures that convinces the viewer that the creature inside has no idea it is actually trapped. It does seem “convenient” that we can’t go the speed of light and escape our galaxy.
Here’s another way to think about it: if we created “virtual” people in a virtual Universe.. could you come to see them as your children? Maybe it would help to think of them as talking ants in an ant farm. After getting over the shock of talking to your ants, you learn their names and their different personalities.. you come to love and care for them. Why not? We already do this with a lot of animals.. animals that don’t even approach the intellect of toddlers. What if your ants were brilliant?
Our dog has roughly zero to 1% value in any practical sense. Its practical value comes from its ability to clean up after our kids.. but it’s really just moving the mess outside. Yet, to my children it’s extremely valuable. So, moving on. If one dog is immensely valuable to them.. then two dogs would be twice as valuable. How about 10 or 200 dogs? Considerably less valuable. They couldn’t even name 200 dogs, let alone care for them. So, the amount of something clearly plays a factor in our sense of value. I would like to think that if I had 200 human babies they would all be equally valuable. What about 6 billion? The quantity of something definitely plays a role in value (there’s a rough sketch of this idea at the end of this section). I think this is where we start losing common ground with God. When flying over a vast expanse of urban sprawl it’s nearly impossible to look down and place value on the teeming sea of souls. But each person is most likely of infinite worth to someone. That’s why after a tragedy, a large loss of life, in order to feel anything we need to see the loss to someone else.. the loved ones left behind. You think about losing someone you love and suddenly the real weight of it hits you.
Back to my virtual people.. supposing I had created dozens of AI souls within a virtual world. Maybe I virtually lived amongst them for years. Sure, I could never quite get them to believe they weren’t really “real”.
Maybe I was working on a way to make them real. What if I found a way to back up their digital soul and move them out of the “fake” world and into my “real” world? Maybe I could clone a human body and then upload their digital soul to it? But then, someone came along and wiped out my harddrive.. or corrupted the “disc”. Would my loss be any less real than losing friends on the other side of the globe? A “being” snuffed out is still gone. Right? Stephen Hawking is physically almost non-existent. Yet his mind, for all intents and purposes, lives on. He’s as close to a living computer program as it gets. In fact, the computer he uses to speak may have taken over years ago for all we know. Is he still valuable? Would he be deeply missed if he were gone? What if his body was completely gone but his computery voice and brilliant mind lived on?
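As promised above, here’s a rough way to picture that quantity-versus-value intuition. This is a minimal sketch, assuming (purely for illustration, not as any real model of worth) that the total value we can perceive grows roughly logarithmically with count, so each additional dog or person registers as less than the one before:

```python
import math

def perceived_total_value(count, value_of_one=100.0):
    """Hypothetical model: total perceived value grows logarithmically,
    so each additional individual adds less than the one before."""
    return value_of_one * math.log2(1 + count)

for n in (1, 2, 10, 200):
    total = perceived_total_value(n)
    print(f"{n:>4} -> total {total:8.1f}, per individual {total / n:6.1f}")
```

Under a model like this, the 200th dog barely registers.. which matches the intuition above, even if real value obviously isn’t a math function.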