
What is Truth? AI will tell us.

Our society is deeply divided. It’s a problem that is exacerbated by people sharing false information with each other. This is not a new problem. I was reminded of this the other day while listening to the audiobook “Rebel Yell”, which recounts the life and times of the Confederate Civil War hero Stonewall Jackson. After the First Battle of Bull Run, stories of a stunning Federal victory spread far and wide, often via the media. This, it turns out, was quite untrue. It was a good reminder that while this issue is a problem today, it has never not been a problem. However, it does seem to be intensified today. Part of this can be blamed on the algorithms that tend to automatically filter out content that we aren’t interested in… or that we don’t agree with. The bigger issue, though, is that people spread bad information, particularly if it helps shore up their existing paradigms.

Facebook has become the gold standard for groups of people sharing information these days. It used to be that you had to meet with people and discuss things. Discussing things with people in public required a certain amount of etiquette, and delivering information that could be seen as controversial required a great deal of articulation. Now, one can simply “Google” any controversial topic you wish to find support for and lob those “facts” at people. Like-minded people are brought together through the algorithms to form little tribes of agreement. Buoyed by belief in their facts and by supporting comments, discussions often lose the etiquette that would typically be found in face-to-face meetings. Contrary opinions are quickly shouted down.

Facebook, feeling considerable pressure from everyone who believes this is a Facebook problem, is now working on an algorithm to help stop the spread of false information. It will begin to try to sort fact from fiction based on certain criteria. This, of course, raises lots of questions. Who gets to pick the criteria? What exactly constitutes a fact? What happens to those users who are sharing information deemed untrue? How can one be considered a good source of information?
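To make the “who gets to pick the criteria?” question concrete, here is a minimal, entirely hypothetical sketch in Python of what such a scoring algorithm might look like. This does not reflect Facebook’s or Google’s actual systems; the source list, weights, and threshold are all invented for illustration, and that’s exactly the point: someone has to choose them.

```python
# A hypothetical "truth score" sketch -- not any real platform's system.
# Every constant below is an editorial choice made by whoever writes the code.

TRUSTED_SOURCES = {"example-wire.com": 0.9, "example-blog.net": 0.4}  # who decides these?
FLAG_WEIGHT = 0.05       # how much should each user report count?
BURY_THRESHOLD = 0.5     # below this score, a story gets downranked

def truth_score(source: str, user_flags: int,
                fact_checks_passed: int, fact_checks_total: int) -> float:
    """Combine source reputation, crowd flags, and fact-check results
    into one score in [0, 1]. Each term and weight is a value judgment."""
    reputation = TRUSTED_SOURCES.get(source, 0.5)  # unknown sources start neutral
    check_rate = (fact_checks_passed / fact_checks_total) if fact_checks_total else 0.5
    score = 0.6 * reputation + 0.4 * check_rate - FLAG_WEIGHT * user_flags
    return max(0.0, min(1.0, score))

def should_bury(score: float) -> bool:
    return score < BURY_THRESHOLD

if __name__ == "__main__":
    s = truth_score("example-blog.net", user_flags=3,
                    fact_checks_passed=1, fact_checks_total=2)
    print(f"score={s:.2f}, buried={should_bury(s)}")
```

Every constant in that sketch is an editorial decision dressed up as math. Change the weights or the trusted-source list and the same story flips from “true enough” to buried.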

And, of course, Google will get into this as well. It could start with a little indicator showing the reliability of a news story, but eventually they would want to “bury” news sources not known for their “truthiness”, as Stephen Colbert puts it. For all we know, they could already be altering search results based on truthiness. This could have a significant impact on search results and how information is spread. Of course, algorithms can be manipulated, and many organizations will spring up to help others navigate their way back to the top of the search results… and, of course, you can always buy ads to put false news at the top.

Ultimately, what impact will this have, for good or ill? At first glance it seems like a good thing. I’m not sure it isn’t, to a point. You can always disagree and find things to support your opinion, Google or no Google. Maybe other search engines will get a boost from a backlash over this: “Hey, I can’t find my favorite news any more” (the kind that tells me what I want to hear). The big potential problem is that truth is often confused with opinion. And some people (people make algorithms) will wish to have some truths passed off as opinion. There are some disturbing truths out there that many people will want buried. It’s quite possible that this could just create bigger bubbles. And for some, it just won’t matter at all. Facebook, Google, or whoever could give a claim a 98% chance of being false… and many people will take that to mean it’s possibly true. They’ll gather round with people who agree with them, and it will be as good as true.

There is a type of AI called an “oracle”. It’s considered one of the safer forms of AI because it’s not necessarily connected to anything that can cause harm (i.e., it can’t create an army of robots). It can only answer questions. However, the argument against this form of AI is that it can tailor its answers to get its desired results. “Okay Google, will you try to take over the world if I let you fix my car for me?” “No, of course I won’t.” It then proceeds to turn your car into a death machine as soon as it’s given tools. It’s a silly example, but when you turn over all Truth to someone or something… you are essentially giving them all authority. Even if you disagree with it… are you disagreeing with it, or is it just manipulating you?

Not exactly an example of this, but along the same lines, is the “EmDrive” or “Impossible Drive”. I am guessing that any reasonable algorithm would dismiss the EmDrive as “false”, i.e., it doesn’t work, because it’s “impossible”. All scientists agree that it is in fact impossible… based upon our current understanding of physics. However, prototypes seem to work. As hard as everyone works to debunk it, it keeps working. The Chinese are planning to use it on their upcoming satellites. The problem with the “Does the EmDrive work? No, it’s impossible.” answer is that it assumes we know everything there is to know. These grand assumptions are dangerous. Many great ideas start with the masses saying, “That’s impossible.” I do hope that the algorithms are smart enough to discern between hateful BS and potentially life-altering truths. However, considering humanity’s track record for this, I think we should proceed with caution.