program rudimentary? You bet. Was this machine learning? Almost.
After running again and again, the game could guess exactly what animal you had in mind after only a few questions. It was impressive, but it was just following programmed logic. It was not learning. Guess the Animal could update its rules‐based database and appear to be getting smarter as it went along, but it did not change how it made decisions.
The Machine that Programs Itself
Machine learning systems look for patterns and try to make sense of them. It all starts with the question: What problem are you trying to solve?
Let's say you want the machine to recognize a picture of a cat. Feed it all the pictures of cats you can get your hands on and tell it, “These are cats.” The machine scans through all of them, looking for patterns. It sees that cats have fur, pointy ears, tails, and so on, and waits for you to ask a question.
“How many paws does a cat have?”
“On average, 3.24.”
That's a good, solid answer from a regular database. It looks at all the photos, adds up the paws, and divides by the number of pictures.
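The database arithmetic above can be sketched in a few lines. The photo counts below are invented for illustration, chosen so the average works out to 3.24:

```python
# A "regular database" answer: average the paws visible in each photo,
# with no notion that some paws may be hidden from the camera.
# The photo counts are invented so the average works out to 3.24.
visible_paws = [2] * 6 + [3] * 7 + [4] * 12  # 25 cat photos

average = sum(visible_paws) / len(visible_paws)
print(f"How many paws does a cat have? On average, {average:.2f}")
```

The arithmetic is correct; the answer is still wrong, because the system has no idea that a photo can hide a paw.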
But a machine learning system is designed to learn. When you tell the machine that most cats have four paws, it can “realize” that it cannot see all of the paws. So when you ask,
“How many ears does a cat have?”
it answers,
“No more than two.”
The machine has learned something from its experience with paws and can apply that learning to counting ears.
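One minimal way to sketch that shift, with invented ear counts: having “realized” that body parts can be hidden, the system stops averaging raw counts and instead reports the largest count it has ever observed as an honest upper bound.

```python
# The learned behavior: instead of averaging what is visible, report the
# largest count ever observed as the upper bound ("no more than two").
# The ear counts are invented for illustration.
visible_ears = [2, 1, 2, 2, 0, 2, 1, 2]  # ears visible per cat photo

naive_answer = sum(visible_ears) / len(visible_ears)  # the database's answer
learned_answer = max(visible_ears)                    # "no more than two"

print(f"Naive average: {naive_answer:.2f}")
print(f"Learned answer: no more than {learned_answer}")
```

This is only an illustration of the idea; a real system would learn something far subtler than a call to `max`.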
The magic of machine learning is building systems that build themselves. We teach the machine to learn how to learn. We build systems that can write their own algorithms, their own architecture. Rather than learn more information, they are able to change their minds about the data they acquire. They alter the way they perceive. They learn.
The code is unreadable to humans. The machine writes its own code. You can't fix it; you can only try to correct its behavior.
When things come out wrong, it's troublesome that we cannot backtrack and find out where a machine learning system went off the rails. That makes us decidedly uncomfortable. It is also likely to be illegal, especially in Europe.
“The EU General Data Protection Regulation (GDPR) is the most important change in data privacy regulation in 20 years,” says the homepage of the EU GDPR Portal.11 Article 5, Principles Relating to Personal Data Processing, starts right out with:
Personal Data must be:
* processed lawfully, fairly, and in a manner transparent to the data subject
* collected for specified, explicit purposes and only those purposes
* limited to the minimum amount of personal data necessary for a given situation
* accurate and where necessary, up to date
* kept in a form that permits identification of the data subject for only as long as is necessary, with the only exceptions being statistical or scientific research purposes pursuant to article 83a
* Parliament adds that the data must be processed in a manner allowing the data subject to exercise his/her rights and protects the integrity of the data
* Council adds that the data must be processed in a manner that ensures the security of the data processed under the responsibility and liability of the data controller
Imagine sitting in a bolted‐to‐the‐floor chair in a small room at a heavily scarred table with a single, bright spotlight overhead and a detective leaning in asking, “So how did your system screw this up so badly and how are you going to fix it? Show me the decision‐making process!”
This is a murky area at the moment, and one that regulators are still actively reviewing. Machine learning systems will have to come with tools that allow a decision to be explored and explained.
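What might such a tool look like? One simple form is a model whose decision can be decomposed feature by feature. This sketch uses a hypothetical linear scoring model with made-up feature names and weights; it illustrates the idea of an explainable decision, not any particular product or regulation-compliant system.

```python
# A hypothetical linear scoring model whose decision can be decomposed
# feature by feature, so an auditor can see exactly how the score arose.
# Feature names, weights, and applicant values are made up for illustration.
weights = {"age": -0.2, "income": 0.5, "account_history": 0.8}
applicant = {"age": 1.5, "income": 0.9, "account_history": 0.2}

# Each feature's contribution is its weight times the applicant's value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, value in contributions.items():
    print(f"{feature:>15}: {value:+.2f}")
print(f"{'total score':>15}: {score:+.2f}")
```

A line-by-line accounting like this is exactly what the detective in the interrogation room would demand, and exactly what a self-written, unreadable model cannot provide.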
ARE WE THERE YET?
Most of this sounds a little over‐the‐horizon and science‐fiction‐ish, and it is. But it's only just over the horizon. (Quick – check the publication date at the front of this book!) The capabilities have been in the lab for a while now. Examples are in the field. AI and machine learning are being used in advertising, marketing, and customer service, and they don't seem to be slowing down.
But some projections suggest that this is all coming at an alarming rate.12
According to research firm Gartner, AI bots will power 85% of all customer service interactions by the year 2020. Given that Facebook and other messaging platforms have already seen significant adoption of customer service bots on their chat apps, this shouldn't necessarily come as a huge surprise. Since this use of AI can help reduce wait times for many types of interactions, this trend sounds like a win for businesses and customers alike.
The White House says it's time to get ready. In a report called “Preparing for the Future of Artificial Intelligence” (October 2016),13 the Executive Office of the President National Science and Technology Council Committee on Technology said:
The current wave of progress and enthusiasm for AI began around 2010, driven by three factors that built upon each other: the availability of big data from sources including e‐commerce, businesses, social media, science, and government; which provided raw material for dramatically improved Machine Learning approaches and algorithms; which in turn relied on the capabilities of more powerful computers. During this period, the pace of improvement surprised AI experts. For example, on a popular image recognition challenge14 that has a 5 percent human error rate according to one error measure, the best AI result improved from a 26 percent error rate in 2011 to 3.5 percent in 2015.
Simultaneously, industry has been increasing its investment in AI. In 2016, Google Chief Executive Officer (CEO) Sundar Pichai said, “Machine Learning [a subfield of AI] is a core, transformative way by which we're rethinking how we're doing everything. We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we're in early days, but you will see us – in a systematic way – apply Machine Learning in all these areas.” This view of AI broadly impacting how software is created and delivered was widely shared by CEOs in the technology industry, including Ginni Rometty of IBM, who has said that her organization is betting the company on AI.
The commercial growth in AI is surprising to those of little faith and not at all surprising to true believers. IDC Research “predicts that spending on AI software for marketing and related function businesses will grow at an exceptionally fast cumulative average growth rate (CAGR) of 54 percent worldwide, from around $360 million in 2016 to over $2 billion in 2020, due to the attractiveness of this technology to both sell‐side suppliers and buy‐side end‐user customers.”15
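IDC's numbers are easy to check: $360 million compounding at 54 percent a year for the four years from 2016 to 2020 lands right around $2 billion.

```python
# Back-of-the-envelope check of IDC's forecast: $360 million growing at a
# 54 percent compound annual growth rate over the four years 2016-2020.
spending = 360.0  # millions of dollars, 2016
for _ in range(4):
    spending *= 1.54  # one year of 54 percent growth

print(f"Projected 2020 spending: about ${spending:,.0f} million")
```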
Best to be prepared for the “ketchup effect,” as Mattias Östmar called it: “First nothing, then nothing, then a drip and then all of a sudden – splash!”
You might call it hype, crystal‐balling, or wishful thinking, but the best minds of our time are taking it very seriously. The White House's primary recommendation from the above report is to “examine whether and how [private and public institutions] can responsibly leverage AI and Machine Learning in ways that will benefit society.”
Can you responsibly leverage AI and machine learning in ways that will benefit society? What happens if you don't? What could possibly go wrong?
AI‐POCALYPSE
Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric