time to ask ourselves where, if at all, we can use humans in the cycle of algorithms that we are creating?
As indicated earlier, in some industries (e.g. finance, health care) automation is rapidly becoming the dominant force. But, looking ahead, it will not only be in those industries that humans find themselves outperformed by algorithms. Telling in this respect is the 2018 Deloitte Global Human Capital Trends survey of business and HR leaders. This survey found that 72% of leaders identified AI, robots, and automation as quickly becoming the most important investment areas.
When innovating becomes leading
If body and mind can be replaced, then man himself can be replaced. It sounds like science fiction, but all the signs seem to be there. So, if this is really happening, the next question to answer is whether we will submit to the machine and the technology that comes with it.
In the volatile and uncertain business environment of today, this idea may not sound too crazy. Hasn’t it been suggested that the kind of leader needed to survive such circumstances is one who has superior data management and utilization skills? One who is able to produce specific cost-saving recommendations, and enables organizational efficiency and productivity? And, most importantly, is able to deliver all of this at lightning speed! Yes, from this point of view, ladies and gentlemen, we could argue that the demand for a new leader has arrived and it is not the human kind. In fact, as a society we have landed in a new industrial revolution – and this one is led by algorithms. Human leadership may not even survive the impact of AI. If so, will this change of leadership happen smoothly and without opposition?
Given all the benefits that our new automated leader brings us, resistance may not only be futile, but even non-existent. It should be, if we humans react rationally. As rational beings we should strive to maximize our own interests. And, as things stand, all the benefits that come along with increasing automation can only make our lives more efficient. So, our rationality says a big yes to this new leadership situation.
But it is not only our rationality that is at play. Emotions are likely to play a role as well. All these benefits also create a comfortable situation that humans will easily adjust to and may even become addicted to. And, once we become addicted, we will comply, because it makes us happy. As a matter of fact, research shows that machines can trigger the reward centers in our brain (one of the reasons why humans have become so addicted to continuously checking their smartphones). The reward center releases the neurotransmitter dopamine, which creates a feeling of happiness. But, as with any addiction, humans will run the risk of seeking these rewards more often. They want to maintain this feeling of happiness, so they will increasingly feel a need for more automation. Since our automated leader seems able to give us what we want, and as such make us addicted, human compliance is likely to follow. OK, it is clear humans will surrender. Autonomous algorithms are here to stay and – could it really be true? – will lead us.
But, before you close this book and accept the idea of an algorithm telling you tomorrow what to do, might I introduce you to another reality? A reality that brings a more complex view of leadership and the potential role that algorithms will play. Allow me to start with a first request. Ask yourself: does an optimizing leader really constitute leadership? Is a leader simply a strong and smart person? Is leadership something that can be achieved by combining body and mind into one role? If so, then the smart machine of today is truly the winner. But I do beg to differ. For the sake of the argument, let us take a quick look at how exactly algorithms learn and whether this fits the leadership process as we know it in today’s (human) society.
Do limits exist for self-learning machines?
To understand how algorithms learn, it is necessary to introduce the English mathematician Alan Turing. Portrayed by the actor Benedict Cumberbatch in the movie The Imitation Game, Alan Turing is best known for deciphering the Enigma code used by the Germans during the Second World War. To achieve this, he developed an electro-mechanical device called the Bombe. The fact that the Bombe achieved something no human was capable of led Turing to think about the intelligence of machines.
This led to his 1950 article, ‘Computing Machinery and Intelligence,’ in which he introduced the now-famous Turing test, still considered today the crucial test of whether a machine is truly intelligent. In the test, a human interrogator interacts with another human and a machine. The interrogator cannot see either party and can rely only on how each unseen party behaves. If the interrogator is unable to distinguish the behavior of the human from that of the machine, we can call the machine intelligent. It is these behavioral ideas of Turing that still significantly influence the development of learning algorithms today.
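The protocol of the test can be made concrete with a toy sketch. Everything here is illustrative and not from Turing's paper: the judge, the two responder functions, and the single-question transcript are all hypothetical stand-ins, but the structure mirrors the imitation game described above: the judge sees only behavior (text replies), never the parties themselves.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Minimal sketch of the imitation game: the judge sees only text
    replies from two unseen parties and must guess which is the machine."""
    # Randomly assign the human and the machine to labels A and B.
    parties = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        parties = {"A": machine_reply, "B": human_reply}
    # The judge receives only observable behavior: the replies.
    transcript = {label: [fn(q) for q in questions]
                  for label, fn in parties.items()}
    guess = judge(transcript)  # judge names the label it believes is the machine
    truth = "A" if parties["A"] is machine_reply else "B"
    return guess == truth      # True means the machine was unmasked

# Toy parties: a "machine" that merely echoes, and a judge that spots echoes.
human = lambda q: "I feel " + q[::-1]
machine = lambda q: q
judge = lambda t: "A" if t["A"][0] == "hello" else "B"
print(imitation_game(judge, human, machine, ["hello"]))  # True: the echo gives it away
```

The point of the sketch is that nothing inside the parties is ever inspected; only their outputs are compared, which is exactly the behaviorist stance discussed next.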
The fact that observable behaviors form the input to learning is no surprise, as behaviorism dominated psychology in Turing’s time. This stream within psychology refrained from looking inside the human mind. The mind was considered a black box (interestingly enough, the same is said of AI nowadays), as it was not directly observable. For that reason, scientists back then argued that the mind should not be studied. Only behaviors could be considered true indicators of what humans felt and thought.
To illustrate the dominance of this way of thinking, consider the following joke: Two behaviorists walk into a bar. One says to the other: “You’re fine. How am I?” In a similar vein, today we assume that algorithms can learn by analyzing data in ways that identify observable patterns. Those patterns teach algorithms the rules of the game. Based on these rules they make inferences and construct models that guide predictions. Thus, in a way, we could say that algorithms decide and advise on strategies based on the patterns observed in data. These patterns inform the algorithm what the common behavior is (the rule of the context of the data) and the algorithm subsequently adjusts to it.
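This kind of learning can be sketched in a few lines. The data and function below are invented for illustration, but they show the mechanism the text describes: the algorithm counts which behavior is most common in each context and adopts that as the rule, without ever asking why people behave that way.

```python
from collections import Counter, defaultdict

# Hypothetical log of observed (context, behavior) pairs: illustrative data only.
observations = [
    ("greeting", "wave"), ("greeting", "wave"), ("greeting", "nod"),
    ("farewell", "wave"), ("farewell", "hug"), ("farewell", "hug"),
]

def learn_rules(data):
    """Derive the 'rule of the context': for each context, the behavior
    observed most often. This is the whole model; no meaning is inferred."""
    by_context = defaultdict(Counter)
    for context, behavior in data:
        by_context[context][behavior] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in by_context.items()}

model = learn_rules(observations)
print(model["greeting"])  # wave: the common pattern, not the feeling behind it
print(model["farewell"])  # hug
```

Real learning systems are of course far more sophisticated, but the limitation carries over: the model captures what is typically done, not what it means to the people doing it.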
Algorithms thus act in line with the observable patterns in the data they are fed. These observable patterns (which reflect the behaviors Turing referred to), however, do not lead algorithms to learn what lies behind them. In other words, they do not allow algorithms to understand the feelings and the deeper thinking, reflection and pondering that hide beneath the observable behaviors. This means that algorithms can perfectly imitate (hence the title of the movie) and pretend to be human, but can they really be human in the sense of functioning in relationships the way leaders do? Can algorithms, which supposedly display human (learned) behaviors, really survive and function in human social relationships?
Consider the following example. Google Duplex recently demonstrated AI having a flawless phone conversation while making a dinner reservation.35 The restaurant owner did not have a clue he was talking to an AI. But imagine what would happen if unexpected events occurred during such a conversation. (Note that the mere fact that you can imagine such a scenario already makes you different from the algorithm, which would never consider it.) What if the restaurant owner suddenly had a change of heart and told the AI that he does not want to work that evening, despite the fact that the restaurant is listed online as open that same evening? Will the AI be able to take perspective and give a reasonable (human) response?
In all honesty, this may be less likely. It is one thing for an algorithm to know the behaviors humans usually show and, based on those observations, to develop a behavioral repertoire for most situations. It is quite another to understand the meaning behind human behaviors and respond in an equally meaningful way. And here lies the potential limitation of the algorithm as a leader. At this moment, an algorithm cannot understand the meaning of behavior in a given context. AI learns and operates in a context-free way, whereas humans can take the situation into account when behaviors are shown – and, importantly, we expect this skill from leaders. As Melanie Mitchell noted in her book Artificial Intelligence: A Guide for Thinking Humans: “Even today’s most capable AI systems have crucial limitations. They are good only at narrowly defined tasks and utterly clueless about the world beyond.”
As a side note, this logic of meaning and taking perspective is something that unfortunately seems to be forgotten by those