David De Cremer

Leadership by Algorithm


terms of its functionality for us as human beings, rather than on maximizing the abilities of the technology itself. One speaker responded loudly with the comment that AI should definitely tackle humanity’s problems (e.g. climate change, population size, food scarcity and so forth), but that its development should not be slowed down by anticipatory thoughts on how it would impact humanity itself. As you can imagine, the debate suddenly became much more heated. Two camps formed relatively quickly. One camp advocated a race to maximize AI abilities as fast as possible (thus discounting the long-term consequences for humanity), whereas the other camp advocated the necessity of social responsibility in deciding how the technology is developed and deployed.

      Who is right? In my view, both perspectives make sense. On the one hand, we do want to have the best technology and maximize its effectiveness. On the other hand, we also want to ensure that the technology being developed will serve humanity in its existence, rather than potentially undermining it.

      So, how to solve this dilemma?

      In this book, I want to delve deeper into this question and see how it may impact the way we run our teams, institutes and organizations, and what choices we will have to make. It is my belief that in order to address the question of how to proceed in the development and application of algorithms in our daily activities, we need to agree on the purpose of the technology development itself. What purpose does AI serve for humanity, and how will that purpose shape its development? This kind of exercise is necessary to weigh two possible outcomes that I have been thinking about for years.

      First, we do not want to run the risk that the rapid development of AI technologies creates a future where our human identity is slowly removed and a humane society becomes something of the past. Like Connor Mason’s time machine that altered human history, mindless development of AI technology, with little awareness of its consequences for humanity, may run the same risks.

      Second, we push the limits of technological advancement with the aim for AI to augment our abilities and thus to serve the development of a more (and not less) humane society. From that point of view, the development of AI should not be seen as a way to solve the mess we create today, but rather as a means of creating opportunities that will improve the human condition. As the executive I met as a young scholar proclaimed, technology is too often developed merely to deal with the problems that we create; AI technology developed with the sole aim of maximizing efficiency and minimizing errors will reduce the human presence rather than augment our abilities.

      Putting these two possible outcomes together made me realize that the purpose served by investing so much in AI technology advancement should not be to make our society less humane and more efficient in eliminating mistakes and failures. This would result in humankind having to remove itself from its place in the world to be replaced by another type of intelligence not burdened by human flaws. If this were to happen, our organizations and society would ultimately be run by technology. What will our place in society be then?

      In this book, I will address these questions by unravelling the complex relationship that exists between our human desire to constantly evolve on the one hand, and our drive for fairness and co-operation on the other. Humans have an innate motivation to go where no man has gone before. The risk associated with this motivation is that at some point we may lose control of the technology we are building and, as a consequence, submit to it.

      Will this ever be a reality? Humans as subordinates of the almighty machine? Some signs indicate that it may well happen. Take the example of the South Korean Lee Sedol, who was the world champion at the ancient Chinese board game Go. This board game is highly complex and was long considered beyond the reach of machines. All that changed in 2016, when the computer program AlphaGo beat Lee Sedol four matches to one. The loss against AI made him doubt his own (human) qualities so much that he decided to retire in 2019. So, if even the world champion admits defeat, why would we not expect that one day machines will develop to the point where they run our organizations?

      To tackle this question, I will start from the premise that the leadership we need in a humane society is not likely to emerge through more sophisticated technology. Rather, enlightened leadership will emerge as we become more sophisticated about human nature and use our own unique abilities to design better technology that is employed in wise (and not merely smart) ways.

      Let me take you on a journey, where we will look at what exactly is happening today with AI in our organizations; what we can expect as we move into a new era where algorithms are developed for each task; what kind of influence this shift will have on how we run our organizations in the future; and how we should best approach such a radical transformation.

      The time machine is waiting, but this time with the aim to inform us and make us smarter about the ways in which we can design technology to improve humanity.

      Chapter 1: Entering a New Era

      In 1985, Mark Knopfler and his band Dire Straits released a song about a boy who got the action, got the motion and did the walk of life. This boy became the hero in many a young kid’s fantasy. In the 21st century, we have another kind of hero, something that is not human. Now, we admire the use of algorithms in all walks of life.

      However, it is also important to note that AI is not some new phenomenon that has only arrived in the last few years. In fact, the notion of AI was used for the first time in 1956. At that time, the eight-week long Dartmouth Summer Research project on AI at Dartmouth College in New Hampshire was organized. The project included names like Marvin Minsky, John McCarthy and Nathaniel Rochester, who would later become known as the founding fathers of AI.

      So, early in the second half of the last century, the belief in the super power of AI was already very much present. Consider, for example, the quote from Herbert A. Simon, Nobel laureate in economics, who wrote in 1965: “machines will be capable, within 20 years, of doing any work a man can do.” However, researchers failed to deliver on these lofty promises. From the 1970s onwards, AI projects were heavily criticized for being too expensive and for relying on formalized, top-down approaches that failed to replicate human intelligence. As a result, AI research was partly frozen, with no real progress being made. Until now!

      AI witnessed a comeback in the last decade, primarily because the world woke up to the realization that deep learning by machines is possible to the level where they can actually perform many tasks better than humans. Where did this wake-up call come from? From a deceptively simple board game called Go.

      In 2016, AlphaGo, a program developed by Google DeepMind, beat the human world champion in the Chinese board game, Go. This was a surprise to many, as Go – because of its complexity – was considered the territory of human, not AI, victors. In a decade where our human desire to connect globally, execute tasks faster, and accumulate massive amounts of data was omnipresent, such deep learning capabilities were, of course, quickly embraced.

      As a result, we are now witnessing an almost obsessive focus on AI and the benefits it can bring to our society, organizations and people. This obsessive focus, combined with an exponential increase in AI applications, has resulted in a certain fear that human intelligence may well be on the verge of being challenged in all facets of our lives. Or, to be more precise, a fear has emerged in society that we, as humans, may have entered an era where we will be replaced by machines (for real, this time!).

      However, before we address the challenge (some may even call it a threat) to our authentic sense of the human self and intelligence, we need to make clear what we are talking about when we talk about AI. Although the purpose of this book is not to present a technical manual to work with AI, or to teach you how to become a coder, I do feel that we first need to familiarize ourselves with a brief definition of AI.

      In its simplest form, AI can be seen as a system that employs techniques to make the external data – available everywhere in our organizations and society – more transparent as a whole. Making data more transparent allows it to be interpreted more accurately. This allows us to learn from these interpretations and subsequently act upon them to achieve our goals in more optimal ways.
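      To make this loop concrete – data taken in, learned from, and then acted upon – here is a minimal sketch. It is a toy illustration of my own making, not the workings of any real AI system: a tiny "model" learns a single threshold from labeled observations and then uses what it learned to act on new ones.

```python
# Toy illustration of the data -> learn -> act loop (not a real AI system).

def learn_threshold(data):
    """Learn: find the midpoint between two labeled groups of numbers."""
    low = [x for x, label in data if label == 0]
    high = [x for x, label in data if label == 1]
    return (max(low) + min(high)) / 2

def act(threshold, x):
    """Act: classify a new observation using what was learned."""
    return 1 if x >= threshold else 0

# External data: (value, label) pairs observed in the world.
observations = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
t = learn_threshold(observations)  # (2.0 + 8.0) / 2 = 5.0
print(act(t, 3.0))  # 0
print(act(t, 7.0))  # 1
```

      Real machine learning systems replace this hand-written rule with statistical optimization over far richer data, but the shape of the loop – learn a pattern from data, then act on what was learned – is the same.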

      The best-known technique, and the one that drives our learning from data, is called machine learning. It is machine learning that creates algorithms that are