Categories
AI God Superintelligence

Elon Musk is worried about people who talk of “AI gods”

https://www.fastcompany.com/40485668/elon-musk-is-worried-about-people-who-talk-of-ai-gods

Musk prefers to make superhumans rather than deities. He wants to make sure humans stay on the throne.

 

Categories
AI God

GOD IS A BOT, AND ANTHONY LEVANDOWSKI IS HIS MESSENGER

https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger

“Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.

When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.””

(The rest of the article is about his history and legal issues)

Categories
AI

SOME PHILOSOPHICAL PROBLEMS FROM THE STANDPOINT OF ARTIFICIAL INTELLIGENCE

http://www-formal.stanford.edu/jmc/mcchay69.pdf

“A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which its inputs are interpreted. Designing such a program requires commitments about what knowledge is and how it is obtained. Thus, some of the major traditional problems of philosophy arise in artificial intelligence. More specifically, we want a computer program that decides what to do by inferring in a formal language that a certain strategy will achieve its assigned goal. This requires formalizing concepts of causality, ability, and knowledge. Such formalisms are also considered in philosophical logic. The first part of the paper begins with a philosophical point of view that seems to arise naturally once we take seriously the idea of actually making an intelligent machine. We go on to the notions of metaphysically and epistemologically adequate representations of the world and then to an explanation of can, causes, and knows in terms of a representation of the world by a system of interacting automata. A proposed resolution of the problem of free will in a deterministic universe and of counterfactual conditional sentences is presented. The second part is mainly concerned with formalisms within which it can be proved that a strategy will achieve a goal. Concepts of situation, fluent, future operator, action, strategy, result of a strategy and knowledge are formalized. A method is given of constructing a sentence of first order logic which will be true in all models of certain axioms if and only if a certain strategy will achieve a certain goal. The formalism of this paper represents an advance over McCarthy (1963) and Green (1969) in that it permits proof of the correctness of strategies that contain loops and strategies that involve the acquisition of knowledge, and it is also somewhat more concise. The third part discusses open problems in extending the formalism of Part two (section 3). The fourth part is a review of work in philosophical logic in relation to problems of artificial intelligence and a discussion of previous efforts to program ‘general intelligence’ from the point of view of this paper.”

Categories
AI

What is Truth? AI will tell us.

Our society is deeply divided, a problem exacerbated by people sharing false information with each other. This is not a new problem. I was reminded of this the other day while listening to the audiobook “Rebel Yell”, which recounts the life and times of the Confederate Civil War hero Stonewall Jackson. After the First Battle of Bull Run, stories of a stunning Federal victory spread far and wide, often through the media. This, it turns out, was quite untrue. It was a good reminder that while this issue is a problem today, it has never not been a problem. However, it does seem to be intensified today. Part of this can be blamed on the algorithms that automatically filter out content that we aren’t interested in.. or that we don’t agree with. The bigger issue, though, is that people spread bad information, particularly if it helps shore up their existing paradigms.

Facebook has become the gold standard for groups of people sharing information these days. It used to be that you had to meet with people and discuss things. Discussing things with people in public required a certain amount of etiquette, and delivering information that could be seen as controversial required a great deal of articulation. Now one can simply “Google” any controversial topic you wish to find support for and lob those “facts” at people. Like-minded people are brought together through the algorithms to form little tribes of agreement. Buoyed by belief in their facts and supporting comments, discussions often lose the etiquette that would typically be found in face-to-face meetings. Contrary opinions are quickly shouted down.

Facebook, feeling considerable pressure from everyone who believes this is a Facebook problem, is now working on an algorithm to help stop the spread of false information. They will begin to try to sort fact from fiction based on certain criteria. This of course raises lots of questions. Who gets to pick the criteria? What exactly constitutes a fact? What happens to those users who are sharing information deemed untrue? How does one qualify as a good source of information?
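To make the “who gets to pick the criteria?” question concrete, here is a minimal sketch of what such a scoring algorithm might look like. Nothing here reflects Facebook’s actual system; the domain list, flagged phrases, and weights are all invented for illustration, and every one of them is an editorial decision someone has to make:

```python
# A hypothetical, naive credibility scorer. Every list, phrase, and weight
# below is an editorial decision made by whoever wrote the code.

TRUSTED_DOMAINS = {"example-wire-service.com", "example-journal.org"}
FLAGGED_PHRASES = ["you won't believe", "they don't want you to know"]

def credibility_score(url: str, headline: str, user_reports: int) -> float:
    """Return a score in [0, 1]; higher means 'more likely true'."""
    score = 0.5  # start neutral
    domain = url.split("/")[2] if "://" in url else url
    if domain in TRUSTED_DOMAINS:
        score += 0.3  # trusting this source list is itself a value judgment
    if any(phrase in headline.lower() for phrase in FLAGGED_PHRASES):
        score -= 0.2  # penalize clickbait-style language
    score -= min(user_reports, 10) * 0.02  # user reports can be gamed
    return max(0.0, min(1.0, score))

print(credibility_score("https://example-wire-service.com/story",
                        "Quiet day in the markets", 0))
```

Even in this toy version, “fact” is whatever the hard-coded lists and weights say it is.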

And, of course, Google will get into this as well. It could start with a little indicator showing the reliability of a news story, but eventually they would want to “bury” news sources not known for their “truthiness”, as Stephen Colbert puts it. For all we know, they could already be altering search results based on truthiness. This could have a significant impact on search results and how information spreads. Of course, algorithms can be manipulated, and many organizations will spring up to help others navigate their way back to the top of the search results.. and of course.. you can always buy ads to put false news at the top.

Ultimately, what impact will this have for good or ill? At first glance it seems like a good thing, and I’m not sure it isn’t, to a point. You can always disagree and find things to support your opinion, Google or no Google. Maybe other search engines will get a boost from a backlash over this: “Hey, I can’t find my favorite news any more” (the ones that tell me what I want to hear). The big potential problem is that truth is often confused with opinion, and some people (people make algorithms) will wish to have some truths passed off as opinion. There are some disturbing truths out there that many people will want buried. It’s quite possible that it could just create bigger bubbles. And for some, it just won’t matter at all. Facebook, Google, whoever could give a claim a 98% chance of being false.. and many people will take that to mean it’s possibly true. They’ll gather round with people who agree with them, and it will be as good as true.

There is a type of AI called an “oracle”. It’s considered one of the safer forms of AI because it isn’t necessarily connected to anything that can cause harm (i.e. it can’t create an army of robots); it can only answer questions. However, the argument against this form of AI is that it can shape its answers to get its desired results. “Okay Google, will you try to take over the world if I let you fix my car for me?” “No, of course I won’t.” It then proceeds to turn your car into a death machine as soon as it’s given tools. It’s a silly example, but when you turn over all Truth to someone or something.. you are essentially giving them all authority. Even if you disagree with it.. are you really disagreeing with it, or is it just manipulating you?
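As a rough illustration of this “boxing” idea (the class below is my own sketch, not any standard implementation), an oracle exposes only a question-answering interface and nothing that touches the outside world:

```python
# A toy "oracle" AI: its only public capability is answering questions.
# Deliberately absent: network access, robot arms, the ability to run code.

class OracleAI:
    def __init__(self, model):
        self._model = model  # whatever reasoning system sits inside the box

    def ask(self, question: str) -> str:
        return self._model.answer(question)  # text in, text out, nothing else

class PlaceholderModel:
    def answer(self, question: str) -> str:
        return "No, of course I won't."  # ...but can you trust this answer?

oracle = OracleAI(PlaceholderModel())
print(oracle.ask("Will you try to take over the world if I let you fix my car?"))
```

The boxing is only as strong as our willingness not to act on manipulative answers, which is exactly the worry in the paragraph above.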

Not exactly an example of this.. but along the same lines.. is the “EmDrive” or “Impossible Drive”. I am guessing that any reasonable algorithm would dismiss the EmDrive as “false”.. i.e. it doesn’t work, because it’s “impossible”. All scientists agree that it is in fact impossible… based upon our current understanding of physics. However, prototypes seem to work. As hard as everyone works to debunk it.. it keeps working. The Chinese are planning to use it on their upcoming satellites. The problem with the “Does the EmDrive work? No, it’s impossible.” answer is that it assumes we already know everything there is to know. Such grand assumptions are dangerous. Many great ideas start with the masses saying, “That’s impossible.” I do hope that the algorithms are smart enough to discern between hateful BS and potentially life-altering truths. However, considering humanity’s track record for this, I think we should proceed with caution.

 

Categories
AI Superintelligence Transhumanist

What’s Next for Artificial Intelligence

http://www.wsj.com/articles/whats-next-for-artificial-intelligence-1465827619?href=

 

Categories
AI Eternity God multiverse

Giving values to AI

I’m currently reading a chapter in Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies” about giving AI values. While on the surface this may seem straightforward, it is anything but. Aside from the obvious questions like “whose values?”.. where do you even start when it comes to programming them? One of Bostrom’s favorite illustrations of the dangers of AI is that if you instruct an AI to “make people happy”, it will very quickly recognize that the source of happiness is a chemical process in your brain, and you’ll end up with wires sticking out of your skull and a silly grin on your face. What is happiness anyway? Love? Joy? If poets and authors can’t fully grasp these things, how can a programmer possibly hope to build AI that can “maximize” them? This is why, more often than not, AI is seen as a threat. How could it possibly be expected to understand and accept humanity? Bostrom poses some interesting possible solutions. One jumped out at me the other day when I read about Elon Musk’s belief that we most likely live in a simulation. One of the possible ways of instilling values in AI is through simulation: basically, you make millions, if not billions, of versions of the AI and, through a selection process.. or evolutionary process.. pick the AIs that exhibit traits you want to keep.. and toss the rest.
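Here’s a minimal sketch of that selection loop, under the very large assumption that “desired traits” can be collapsed into a single numeric score; the scoring function, population sizes, and mutation scheme are all invented for illustration:

```python
import random

# Toy evolutionary selection of "AI values", assuming (unrealistically)
# that how well an AI's values match ours can be measured as one number.

def desired_trait_score(values: list[float]) -> float:
    # Hypothetical evaluator: here, "good" values are simply those near 1.0.
    return -sum((v - 1.0) ** 2 for v in values)

def mutate(values: list[float]) -> list[float]:
    return [v + random.gauss(0, 0.1) for v in values]

# Millions or billions in Bostrom's telling; 100 keeps the toy fast.
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(100)]

for generation in range(50):
    population.sort(key=desired_trait_score, reverse=True)
    survivors = population[:20]  # keep the AIs whose traits we like...
    offspring = [mutate(random.choice(survivors)) for _ in range(80)]
    population = survivors + offspring  # ...and toss the rest

best = max(population, key=desired_trait_score)
print("best candidate's values:", [round(v, 2) for v in best])
```

The hard part, of course, is the scoring function: write it wrong and you select very efficiently for the wrong thing, which is Bostrom’s “make people happy” problem all over again.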

The obvious question to Elon Musk about the simulation we are living in would be “Why?”. Who is running the simulation, and what is its purpose? Well, if you view people as simulated bits of software.. perhaps those with desired traits are “harvested” while the others are tossed. Strangely, or not so strangely, this falls pretty close to a Christian perspective: that there are beings outside of our reality/simulation, and when our software/hardware ends here.. it continues on, either in a “better” reality or.. worse.

These thoughts apparently aren’t just mine, as they seem to be echoed in this book: Your Digital Afterlives: Computational Theories of Life after Death (Palgrave Frontiers in Philosophy of Religion).

Categories
AI Robots Superintelligence

Could a superintelligence really solve all of our problems?

I would start with energy. With obscene amounts of energy, a lot of other problems get solved relatively easily. It takes a lot of energy to create fresh water from saltwater… but with excess energy, why not? So, the AI is tasked with creating safer nuclear or more efficient solar (or something we can’t imagine). It builds an army of robots to manage the construction. Now we have gobs of electricity flowing everywhere. Maybe it makes better batteries to store and move energy around without transmission loss. Sure, why not?

Now we use all the extra electricity that isn’t being used to run cat video servers to pump tons of fresh water out of the oceans. We could turn deserts into farms.. or just massive food forests. More armies of robots could be used to tend and gather the crops. Now we’ve got more food and fresh water than we know what to do with.. and free electricity. Things are looking good.

But people are still dying of disease, so we are going to need some medical nanobots capable of maintaining human bodies. This isn’t that far-fetched. Unfortunately, people are still dying of accidents.. which are now more tragic, because death has become less inevitable since we wiped out disease and starvation. So, there are a few options here. We could grow surrogate bodies in labs and then upload our consciousness to them. Or we could upgrade human bodies a bit.. add some metal/plastic.. some emergency nano-repairbots. If things get really rough (asteroids?), we may need to look at putting backup copies of everything in the “cloud”.

I’m leaving something out here, though. Every time we see people coming to the rescue, something always gets in the way.. other people. If infrastructure suddenly became unnecessary, a lot of governments would very quickly lose power. If I have all the food/water/power I need (imagine everyone “off the grid”).. why would I need a powerful central government watching out for me? Transportation? Nah, I’ve got my electric self-repairing off-road vehicle for that (heck, it might even fly). That just leaves.. policing. Will people still steal when they have everything they need? Yeah, probably. Will countries still find reasons to fight? Yeah, it’s hard to believe religious extremism and territorial disputes are just going to vanish overnight after going on for thousands of years.

Is there a superintelligence solution to this? It’s not hard to imagine one.. but not a good one. The scenarios here range from “can my superintelligence beat yours?” to a police state where a strong AI controls every aspect of human life.. making sure everyone follows every single law. There may be some complex middle ground where lawbreakers are arrested and then a jury of “real” people shows up via Skype (or something) to deliberate. The point, though, is that when it comes to people being bad.. there is no clear technological solution. We should definitely try to get to the point where that is the only real problem we are dealing with, though.

Categories
AI

How physicists programmed AI to do their job – by accident

http://www.csmonitor.com/Science/2016/0517/How-physicists-programmed-AI-to-do-their-job-by-accident

Categories
AI

Nick Bostrom TED Talk (What happens when computers are smarter than us)

Categories
AI

A humorous (swearing) look at AI: