Categories
AI Eternity

The Universe as a self-learning computer

Physicists working with Microsoft think the universe is a self-learning computer (thenextweb.com)

“Part of the theory seems to indicate the universe is a learning computer, in that the laws it’s currently constrained by were not set in stone at its inception.

We can’t reverse the universe, as a process, because there exists no internally verifiable record of its processes — unless there’s a cosmic hard disk floating around out there in space somewhere.”

Categories
AI Eternity God

AI will do what we ask it to (and that is bad)

https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/

‘Powerful artificial intelligence is like the proverbial genie in a bottle. A seemingly innocent wish — “make my home eco-friendly” — can lead to unintended consequences.’

Perhaps this is why many prayers go unanswered.. at least in the way we want them to. It reminds me of when the ancient Israelites asked for a king “like the other nations”. After a stern warning from God about how that was going to turn out for them.. they asked anyway. And God said, “Give the people what they want”… and of course it was a disaster for them.

How many people have asked for fame and riches only to be destroyed by them? Why would having an all-powerful AI “genie” be any different? Cure all the diseases? Death by overpopulation and war. Infinite life can easily be an infinite hell.

Categories
Eternity multiverse Physics

We are probably definitely living in a multiverse..

https://interestingengineering.com/we-probably-are-living-in-a-multiverse-heres-why

Probably.. the clickbait title was “definitely”. There really aren’t a lot of great alternatives, apart from a creator. If AI villagers in a computer game developed “souls”, they would say the same thing.. and they wouldn’t be wrong. There are a practically infinite number of Minecraft “universes”.

Categories
Eternity Physics

Why Black Holes Could Delete (or Save) The Universe

The headline leads with “delete” because that sounds scarier. I prefer his other options, which include possibly storing ALL information, leading to the potential for a reboot. Black holes as hard drives. And the holographic principle.. which is really interesting (stick with it to the end).

Categories
Eternity multiverse

Could Our Universe Have Arisen From A Black Hole?

http://www.forbes.com/sites/startswithabang/2016/10/20/could-our-universe-have-arisen-from-a-black-hole/#27a0467b74e0

Categories
Eternity multiverse

Is our world a simulation?

https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix?CMP=oth_b-aplnews_d-1

“Recognizing we live in a simulation is game-changing, like Copernicus realizing Earth was not the center of the universe”

Categories
Eternity

When her best friend died, she rebuilt him using artificial intelligence

http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot?utm_campaign=theverge&utm_content=chorus&utm_medium=social&utm_source=twitter

Categories
Eternity God multiverse Physics Superintelligence

Simulation, Consciousness, Existence

http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

Categories
AI Eternity God multiverse

Giving values to AI

I’m currently reading a chapter in Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies” about giving AI values. While on the surface this may seem straightforward, it is anything but. Aside from the obvious questions like “whose values?”.. where do you even start when it comes to programming them? One of Bostrom’s favorite illustrations of the dangers of AI is that if you instruct an AI to “make people happy”, it will quickly recognize that the source of happiness is a chemical process in your brain, and you’ll end up with wires sticking out of your skull and a silly grin on your face. What is happiness anyway? Love? Joy? If poets and authors can’t fully grasp these things, how can a programmer possibly hope to build an AI that can “maximize” them? This is why, more often than not, AI is seen as a threat. How could it possibly be expected to understand and accept humanity?

Bostrom poses some interesting possible solutions. One jumped out at me the other day when I read about Elon Musk’s belief that we most likely live in a simulation. One of the possible ways of instilling values in AI is through simulation. Basically, you make millions, if not billions, of versions of the AI and, through a selection process.. or evolutionary process.. pick the AI that exhibit traits you want to keep.. and toss the rest.
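
Here’s a rough sketch, in Python, of what that kind of selection loop could look like. To be clear, the traits, scoring, and mutation rules below are placeholders I made up for illustration.. this isn’t Bostrom’s method or anyone’s real training code, just the general “keep the ones you like, toss the rest” idea:

```python
import random

# Illustrative only: these traits and their target strengths are invented.
DESIRED_TRAITS = {"honesty": 1.0, "restraint": 0.8, "curiosity": 0.5}

def random_candidate():
    """A candidate 'AI' is just a dict of trait strengths between 0 and 1."""
    return {trait: random.random() for trait in DESIRED_TRAITS}

def score(candidate):
    """Higher is better: how closely the candidate matches the desired profile."""
    return -sum(abs(candidate[t] - target) for t, target in DESIRED_TRAITS.items())

def mutate(candidate, rate=0.05):
    """Copy a survivor with small random tweaks to each trait."""
    return {t: min(1.0, max(0.0, v + random.uniform(-rate, rate)))
            for t, v in candidate.items()}

def evolve(population_size=1000, generations=50, keep_fraction=0.1):
    """Make lots of versions, keep the ones with traits we want, toss the rest."""
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        survivors = population[: int(population_size * keep_fraction)]
        # Rebuild the next generation from mutated copies of the survivors.
        population = [mutate(random.choice(survivors)) for _ in range(population_size)]
    return max(population, key=score)

print(evolve())
```

Run it and you get back the trait profile that survived all the culling.. which is exactly the unsettling part: whatever you put in DESIRED_TRAITS is what you end up with.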

The obvious question to Elon Musk about the simulation we are living in would be “Why?”. Who is running the simulation and what is the purpose? Well, if you view people as simulated bits of software.. perhaps those with desired traits are “harvested” while the others are tossed. Strangely, or not so strangely, this falls pretty close to a Christian perspective: that there are beings outside of our reality/simulation, and when our software/hardware ends here.. it continues on, either in a “better” reality or.. worse.

These thoughts apparently aren’t just mine, as they seem to be echoed in this book: Your Digital Afterlives: Computational Theories of Life after Death (Palgrave Frontiers in Philosophy of Religion)

Categories
Eternity

Value

In general, bad things happen when things lose their value. Humanity in particular. When a culture can devalue a certain segment of society to the point that their non-existence is more valuable than their existence… really, really bad things happen (holocausts). Or on a simpler level, if my desire for what you have is more valuable to me than your happiness.. then I am going to be inclined to take what you have.

So, where does value come from? Is value intrinsic, or is it derived from some external source? Babies, for instance: a human baby by itself has extremely low value. As horrible as that sounds, it will cease to exist if left on its own. However, to its parents (most of the time) a baby can be the most valuable thing in the Universe. They would do anything for this completely useless creature. I suppose you could make the argument that this is the product of an evolutionary process. Parents who place a high value on their offspring will care for their children and therefore propagate. If a child has a low value, it’s unlikely to continue on.

The question I am getting at is.. can this be applied to God? Supposing we “create” an AI god.. will it come to view us as children? Even though we provide it no real value.. like a baby to its parents.. will it value us? I suppose it could value us like we value our ancestors.. perhaps a nice zoo or museum? Who knows, maybe we are already in a nice enclosure. One of those nice enclosures that convinces the viewer the creature inside has no idea it’s actually trapped. It does seem “convenient” that we can’t go the speed of light and escape our galaxy.

Here’s another way to think about it: if we created “virtual” people in a virtual Universe.. could you come to see them as your children? Maybe it would help to think of them as talking ants in an ant farm. After getting over the shock of talking to your ants, you learn their names and their different personalities.. you come to love and care for them. Why not? We already do this with a lot of animals.. animals that don’t even approach the intellect of toddlers. What if your ants were brilliant?

Our dog has roughly zero practical value. What little it has comes from its ability to clean up after our kids.. but it’s really just moving the mess outside. Yet, to my children it’s extremely valuable. So, moving on. If one dog is immensely valuable to them.. then two dogs would be twice as valuable. How about 10, or 200 dogs? Considerably less valuable. They couldn’t even name 200 dogs, let alone care for them. So, the amount of something clearly plays a factor in our sense of value. I would like to think that if I had 200 human babies they would all be equally valuable. What about 6 billion? The quantity of something definitely plays a role in value.

I think this is where we start losing common ground with God. When flying over a vast expanse of urban sprawl, it’s nearly impossible to look down and place value on the teeming sea of souls. But each person is most likely of infinite worth to someone. That’s why after a tragedy, a large loss of life, in order to feel anything we need to see the loss to someone else.. the loved ones left behind. You think about losing someone you love and suddenly the real weight of it hits you.

Back to my virtual people.. supposing I had created dozens of AI souls within a virtual world. Maybe I virtually lived amongst them for years. Sure, I could never quite get them to believe they weren’t really “real”. Maybe I was working on a way to make them real. What if I found a way to back up their digital souls and move them out of the “fake” world and into my “real” world? Maybe I could clone a human body and then upload a digital soul to it? But then someone came along and wiped out my hard drive.. or corrupted the “disc”. Would my loss be any less real than losing friends on the other side of the globe? A “being” snuffed out is still gone. Right?

Stephen Hawking is physically almost non-existent. Yet his mind, for all intents and purposes, lives on. He’s as close to a living computer program as it gets. In fact, the computer he uses to speak may have taken over years ago for all we know. Is he still valuable? Would he be deeply missed if he were gone? What if his body was completely gone, but his computery voice and brilliant mind lived on?