Where do I stand on the state of AI
One of the questions I have been getting most recently is what I think about AI and where it is going. I think there are a few misconceptions about what AI actually is and isn't, and I think the term is used very loosely. Before we get started, I am by no means an AI developer or AI scientist. I have messed around with a few AI frameworks but haven't built anything that would actually be good. So anyway, let's get started.
“Bring Yourself Back Online”
Thanks to shows like Westworld and Black Mirror, and even video games like Mass Effect and System Shock, I get asked on quite a number of occasions, "could AI be like that?" Most of the time I say no. I have yet to see something that really stands out and actually puts me in the shoes of what an actual intelligence would be. Let's start with this: most of these AIs operate within set bounds of what they are doing. In Westworld, the AI is confined to a park. In Mass Effect, the concept of EDI isn't 100% there for an AI, and the same goes for Halo. There are bounds to them.
“What Happens if I fail your test?”
One of the many things I notice companies throwing around is "we are building AI." Well, no, not really. Most of these AI systems, like Cortana, Watson, and Siri, couldn't make it past a simple Turing test. Maybe a text-based Turing test, but in an actual conversation it would break down. Now why do I think these AIs would fail? Simply put, we as humans don't understand emotions or even how to program emotions. If we build something, we tell it to operate within a certain set of parameters, and to answer within a certain set of parameters. For example, if I ask Siri "Give me the weather pattern for the next hour," she replies with the entire day's cycle. Or if I tell Cortana "what movies are coming out this weekend," she will reply with "Here is a list of the upcoming movies." Instead, an AI might answer with "do you want your usual spot?" or "Infinity War just came out, and there is a center seat at your favorite theater for brunch."
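To make that "fixed set of parameters" idea concrete, here's a toy sketch in Python. The keywords and replies are made up for illustration, and this is not how any real assistant works internally; the point is just that a system like this can only ever return one of its canned answers, no matter how specific your question is.

```python
# Toy sketch: a "virtual intelligence" that maps queries to canned replies.
# Keywords and responses are hypothetical, for illustration only.
CANNED_RESPONSES = {
    "weather": "Here is the forecast for the entire day.",
    "movies": "Here is a list of the upcoming movies.",
}

def vi_reply(query: str) -> str:
    """Respond within a fixed set of parameters -- no context, no learning."""
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in query.lower():
            return response
    return "Sorry, I didn't understand that."

# Ask for one hour of weather, still get the whole-day answer --
# the bounds of the system don't flex to fit the question.
print(vi_reply("Give me the weather pattern for the next hour"))
```

An actual intelligence would notice "the next hour," your calendar, and your habits; this thing can only ever pick from its table.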
All current AI that has been built isn't really an intelligence; it's more along the lines of a virtual intelligence (VI). It's virtual: it's built to respond within certain boundaries, whereas an actual intelligence builds itself. I have noticed that many companies nowadays say they use "AI" to build and analyze tools. I would say that they are using advanced analytics tools that can learn within the bounds of analytics, but not within the bounds of becoming something more.
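Here's what I mean by "learning within the bounds of analytics," as a toy sketch (ordinary least-squares fitting, not any specific company's product). The model does "learn" from data, but all it can ever learn is two numbers for a straight line; it can never decide to become something other than a line.

```python
# Toy sketch: analytics that "learn," but only within fixed bounds.
# The model can tune two parameters (slope, intercept) of a straight
# line -- it can never step outside that fixed functional form.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Learning" here is just estimating parameters from data.
slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(slope, intercept)  # fits the line y = 2x
```

That's useful, and it's genuinely learned from the data, but it's a world away from a system that understands what the numbers mean.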
“My logic is undeniable”
Now this is one question I get constantly: will robots overthrow us? Could we have something like in Terminator, Eagle Eye, or Ex Machina? Honestly, I don't see something like that happening for at least a century, not until we understand what human emotion actually is and are able to fully program it into machines, or until we have built some kind of portable quantum computer. I think we have been conditioned to be fearful of robots, or of the creation of robots, for fear we will create the next Skynet or the next David or Eva. But I think we have a long way to go before we completely build an AI that will understand and learn. I think we could possibly build a very capable machine that could breach security systems and defend against them, but it will be a battle of VIs, not AIs.
Another question I get asked is which movie gets it closest to realistic. This is very much my opinion, but the movies I always come back to frame it as good AI vs. bad AI. The idea of good AI would be something along the lines of Her: a computer that fully analyzes and understands emotion and ways of speaking, and is able to learn and adapt more and more. As for the ending, that's very much up in the air with an AI "ascending." I don't know what will happen when an AI gains a certain amount of knowledge. Will it ascend, or will it go crazy and become rampant like AIs do in Halo?
Another movie I look back at is 2001: A Space Odyssey. Why is HAL so legit, considering how long ago it was made? It never strays from the bounds of its programming, but it can be taken differently. HAL leaves a lot of vague answers and ideas when he is asked about emotion and feelings, and if he does feel, he doesn't let the user know. It's left up to interpretation.
Finally, one thing that definitely bothers me about movies is the single genius who figures out the "AI question" and programs it, kind of like what happens in Chappie or Ex Machina, where they spend a few months building it. I think it's going to come down to a team of people and years and years of coding and research. And it won't just be developers: I think it will be psychologists and sociologists working together, among other types of scientists, to help achieve a true AI.
As I said in the beginning, I am not an AI developer, but this is what I think of the current state of AI. I know companies are working to achieve that level of AI where it is self-learning, but currently everything we have is a VI. I think we will eventually get there, but not until we have the technology, like a portable quantum chip. Until then we will just be left to speculate.