Musings of a Chief Analytics Officer: All Things AI and The Philosophy of AI
It’s hard not to notice that in almost everything, from our mundane day-to-day activities to esoteric thinking, we have developed a habit of bringing up AI one way or another. Almost everyone in the industry believes it to be a revolution, yet opinions are divided about when we will get to “All Things AI” and what that will ultimately mean for us. Some believe it will be our greatest invention; others are apprehensive that it may lead to our doom.
A few weeks back, stories of Facebook’s AI creating its own language caused quite an uproar. While this appeared alarming, AI practitioners were quick to point out that the behavior is fairly normal. How so?
While developing negotiating chatbot agents, researchers at the Facebook Artificial Intelligence Research (FAIR) lab noticed that the bots had spontaneously developed their own communication language, one built from English words but not the English we humans use to communicate. In a report explaining their research, they noted that this development grew out of the agents’ goal of improving their negotiation strategies by maximizing the efficiency of their communication. Although the bots started out speaking English, they had no reason to stick to it, because doing so contributed nothing to their end goal of becoming more efficient negotiators. The agents had every incentive to move away from the language, in the same way humans develop shorthand notations and acronyms to discuss complex ideas more quickly. In that sense, the behavior the bots exhibited is a very human adaptation: it enhanced performance and minimized effort.
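As a toy illustration of why this drift is unsurprising, consider the sketch below (Python, with made-up utterances and a made-up scoring function; this is not FAIR’s actual setup). If the objective rewards only the value of the deal minus the cost of the words spent, and gives no credit for staying in readable English, an optimizer will naturally prefer the terse shorthand.

```python
# Hypothetical reward: deal value minus a small cost per token spoken.
# Note that nothing in this objective rewards grammatical English.
candidate_utterances = [
    "I would like two balls and you can have the hat.",  # fluent English, 11 tokens
    "two balls for me, the hat for you",                  # clipped English, 8 tokens
    "balls balls me hat you",                             # bot-style shorthand, 5 tokens
]

def negotiation_reward(utterance, deal_value=2.0, cost_per_token=0.1):
    """Score an utterance purely by negotiation 'efficiency'."""
    return deal_value - cost_per_token * len(utterance.split())

# Maximizing this reward picks whatever is cheapest to say,
# however strange it looks to a human reader.
print(max(candidate_utterances, key=negotiation_reward))  # -> the shorthand
```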
What was absolutely rational for the AI agents was irrational for humans! That brings us to the important point: the philosophy of AI.
Is it possible to create an AI system that can solve all the problems humans solve using their intelligence? To answer this question, we first need to define “intelligence” clearly. In AI terminology, intelligence is described in terms of intelligent “agents”. An “agent” acts in an environment, seeking to meet or exceed a performance measure. A “performance measure” defines how the agent is rewarded for meeting or exceeding the defined success criteria. Thus, if an agent acts so as to maximize the expected value of a performance measure, based on its past experience and knowledge, then it is intelligent.
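To make the definition concrete, here is a minimal sketch in Python of an agent that chooses the action with the highest expected performance, based on what it has observed so far. The class name, the actions, and the toy environment are all hypothetical, invented for illustration; real agents estimate expected performance in far more sophisticated ways.

```python
import random


class Agent:
    """A minimal 'intelligent agent': it picks the action that maximizes the
    expected value of a performance measure, based on past experience."""

    def __init__(self, actions):
        self.actions = actions
        # Past experience: action -> list of rewards observed so far.
        self.history = {action: [] for action in actions}

    def expected_performance(self, action):
        """Estimate the performance measure for an action from experience."""
        outcomes = self.history[action]
        # Optimistic default for untried actions, so every action gets explored.
        return sum(outcomes) / len(outcomes) if outcomes else 1.0

    def act(self):
        """Choose the action with the highest expected performance."""
        return max(self.actions, key=self.expected_performance)

    def learn(self, action, reward):
        """Record the reward the environment actually handed back."""
        self.history[action].append(reward)


def environment(action):
    """Toy environment: noisy rewards, with 'negotiate' best on average."""
    payoffs = {"wait": 0.1, "chat": 0.4, "negotiate": 0.7}
    return payoffs[action] + random.uniform(-0.1, 0.1)


agent = Agent(["wait", "chat", "negotiate"])
for _ in range(100):
    action = agent.act()
    agent.learn(action, environment(action))

print("Preferred action after learning:", agent.act())  # typically "negotiate"
```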
Turing’s “polite convention” says that if a machine behaves as intelligently as a human being, then it is as intelligent as a human being. Similarly, the Dartmouth proposal states that every aspect of learning, or any other feature of intelligence, can be so precisely described that a machine can be made to simulate it.
There are numerous viewpoints on what constitutes “Broad AI” and “Narrow AI”. However, there is general agreement that developing a working AI system that meets the criteria laid out by Turing and Dartmouth is impossible, because there are practical limits to a machine’s ability to truly reflect the special qualities of the human mind (empathy, emotions, compassion, adaptiveness, etc.) that are necessary for thinking, and those qualities simply cannot be duplicated by a machine.
There are also other wish-list items that range from funny to downright hilarious: a machine should be able to be resourceful, show initiative, have a sense of humor, tell right from wrong, make mistakes, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behavior as a human, and so on. This pretty much sums up the year-end “appraisal discussion” scenario as well, pun intended.
We have already seen breakthroughs in AI (Alexa, Siri, self-driving cars, and many more); however, the real barriers still to be crossed are:
Can a machine have emotions? In this framing, emotions are a mechanism an intelligent agent uses to maximize the utility of its actions. We will know the barrier has fallen when, in spoken or written conversation, you can’t tell whether you are dealing with a human or a machine, and we will know we are on the way when our machines start conversations with us rather than just responding to us.
Can a machine be self-aware? Self-awareness would show up as a machine taking actions independently, without any human input. This would eventually include setting its own goals, which would in essence mean having an independent will.
Can a machine be creative? Creativity would show up as a machine questioning its own existence and wondering whether there aren’t better ways to get things done. With enough storage capacity and processing power, a computer today can churn out an astronomical number of different ideas, which was not possible earlier. Take the example of the online retailer eBay, which is rolling out “conversational commerce”. The intent is to provide shoppers with a personalized experience that compels them to buy. This is a transformative step toward making eBay’s vast inventory discoverable, generating insights into supply and demand and pricing trends, and creating a shopping experience that is tailored to each eBay user’s interests, passions, and shopping history. Personalization is not the only tool eBay has been working on to increase conversions. Last year, eBay introduced a Facebook Messenger bot that can browse the billions of items in its inventory for you and recommend, in a conversational way, items that match your preferences.
Given the way chatbots are entering every aspect of business and our daily lives, I am sure we will soon be coining another buzzword: “conversational everything”!
There are numerous other examples across industry sectors, ranging from cognitive-led process automation to knowledge-based reasoning and response mechanisms.
These kinds of dramatic shifts in thinking are called the “Singularity”, a term originally borrowed from mathematics that describes a point whose exact properties we are incapable of deciphering. The self-learning and adaptive capabilities of AI make it stand out from earlier systems, which were primarily designed around heuristics and rules for making decisions or taking action. What this means is that an AI system, by design, improves itself and performs better over time; and it seems pretty obvious that once we have the right performance measures, we will have a super-intelligent AI that keeps creating a better version of itself.
If this articulation scares you, you’re in good company. Some of the most widely regarded scientists, thinkers, and inventors, like Stephen Hawking and Elon Musk, have already expressed concerns that super-intelligent AI could escape our control and move against us. The AI community, on the other hand, is thrilled at the great opportunities such a singularity holds for us. They believe that a super-intelligent AI, kept under the right governance, could actually make our planet a better place to live in.
You think I am kidding? No, not really. In a recent TED Talk, Tom Gruber, co-creator of Siri, stretched our thinking to imagining a “humanistic AI” that augments and collaborates with us, instead of competing with or replacing us. Tom’s compelling arguments and equally vivid vision outline a reality where future AI systems will help us achieve superhuman performance in perception, creativity, and cognitive functions: every time a machine gets smarter, we get smarter.