David De Cremer

Leadership by Algorithm



Arts and Social Sciences, Singapore University of Technology and Design

      “Like the long-discussed dispute over efficiency and equality, the rapid development of AI gives rise to the question of whether the pursuit of maximizing efficiency at the cost of a less humane society is acceptable. Leadership by Algorithm echoes the importance of using AI in wise ways to improve the human condition—a purpose that technology was originally developed to serve. Thought-provoking!”

      Frederick Shen, CIO, Aeternam Stella Investment Group

      For Hannah – with the aspiration for her to live an authentic life in a nearly automated future!

      Prologue

      I’m seated at a round table, where I am being introduced to several conference attendees. Ours is not the only table in the room: many other round tables fill the ballroom, each seating people in smart suits and dresses. After being introduced to my neighbors, I sit down and look around for a moment to familiarize myself with the surroundings.

      It is 7pm on a Thursday evening. I am a young scholar, having received my PhD only a few years ago, and I find myself in the midst of a fancy business event. When I was invited by a colleague, I was unsure whether to go, not knowing how it could be relevant to my research. Did I have anything in common with these executives? It took some persuasion, but eventually my colleague convinced me, and here I was. So, to make the best of it, I started talking to my neighbor.

      He was a young, ambitious person who seemed to have it all figured out. He had recently been promoted to an executive position and had a clear idea of what success was and how to achieve it. Clearly someone who knew what he was doing. I was intrigued by his acute drive to talk about his successes and his conviction that you have to push limits until you get what you want.

      After listening for a while, I managed to ask him a question. My question, which must have sounded quite naïve to those sitting at my table, was how he could be so convinced that a business world in which everyone kept pushing the limits could survive. Wouldn’t such behavior, shown by all, create problems and perhaps damage or even destroy the system that had been built?

      As I expected, he was surprised, and for a second it almost looked like he didn’t know what to say. However, he quickly overcame his surprise and simply responded that such a situation would never happen. If there was any risk that our behavior would lead to threats to our organizations or society, he was convinced that science and technology would solve it. In his view, technology allowed us to push beyond our human limits and helped to overcome any challenges that we may encounter.

      Somewhat taken aback by his answer, I followed up with another question, asking him whether such a belief in the almost superhuman power of technology would not make him too dependent on that same technology. Wouldn’t that make him surplus to requirements in the long term? He looked at me in disbelief and said, with a grin on his face, that I should not worry about that, because it would never be an issue. He then turned his attention to the neighbor on his other side, which made clear to me that our conversation was finished.

      This conversation made a deep impression on me, both as a young scholar and as a person. The story stayed with me for many years, but eventually I forgot about it. Until a few years ago! When I started working on questions addressing the existential drive of humans to develop technology, the story came back to me. And, this time, two thoughts kept flashing through my head.

      First, why was it that my companion at the dinner didn’t seem to be aware that his own behavior was creating problems that could only be solved if technology made sufficient progress? Second, where did he find that sense of confidence that technology would solve it all, allowing him to remain in charge and keep doing what he was doing?

      Both questions are important to ask, but I was particularly intrigued by the thought that someone could be so confident in technological innovation. It made me curious about the kind of future that awaits us when technology has the potential to affect our lives in such a significant way. What kind of technology would that be, and how would it affect us?

      Well, as you are probably all aware, today we are living in an era where exactly this kind of technological innovation is knocking loudly on all of our doors. It is a strong and confident knock from a technology ready to take its place in human society. What am I talking about? Clearly, I am talking about artificial intelligence (AI).

      Today, AI is beyond cool! Every advancement made in the field is hailed by many as a great triumph. And with each triumph, its impact becomes more visible and more widely recognized as significant. Indeed, AI brings the message that our world will change fundamentally.

      In a sense, the rapid development of AI and its many applications gives us a peek into a future where our society will function in a completely different way. With the arrival of AI, we can already glimpse a future that forces all of us to act now. AI is the kind of technological innovation that is so disruptive that if you do not start changing your ways of working today, there may not even be a future for you tomorrow.

      While this may come across as somewhat threatening, it is a future that we have to take seriously. If Moore’s law – the observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years – continues to hold, then in the next decade we should be ready to witness dramatic changes in how we live and work together. All of this buzz has made me – just as when I met that very ambitious executive – curious about a technology-driven future. For me, AI is acting as a time machine, helping us to see what could be, at a moment in time when we still have to build it. And this is an interesting thought.

      Why?

      Well, if we consider AI as a kind of time machine, giving us a peek into the future, we should use it to our benefit. Use it in a way that can help us to be conscious and careful about how we design, develop and apply AI. Because once the future sets in, the past may be remembered, but it will be gone.

      Today, we still live in a time where we can have an impact on technology. Why am I saying this? Let me respond to this question by referring to a series on Netflix that I very much enjoyed watching. The series is called Timeless and describes the adventures of a team that wants to stop a mysterious organization, called Rittenhouse, from changing history by making use of a time machine.

      In the first episode, the relevance to our discussion in this book is obvious right away. There, one of the main characters, Lucy Preston, a history professor, is introduced to Connor Mason, the inventor of a time machine. Mason explains that certain individuals have taken control of a time machine, the Mothership, and gone back in time. With a certain weight in his voice, he makes clear that “history will change”. Everyone in the room is aware of the magnitude of his words and realizes the consequences that this will have on the world, society and maybe even their own lives.

      Lucy Preston responds emotionally by asking why he would be so stupid as to invent something so dangerous. Why invent technology that could hurt the human race in such significant ways (i.e. changing its own history)? The answer from Mason is as clear as it is simple: he didn’t count on this happening. And, isn’t this how it usually goes with significant technological innovations? Blinded by the endless opportunities, we don’t want to waste any time and only look at what technology may be capable of. The consequences of an unchecked technology revolution for humanity are usually not addressed.

      Can we expect the same thing with AI? Are we fully aware of the implications for humanity if society becomes smart and automated? Are we focusing too much on developing a human-like intelligence that can surpass real human intelligence in both specific and general ways? And, are we doing so without fully considering the development and application dangers of AI?

      As with every significant change, there are pros and cons. Not too long ago, I attended a debate where the prospects of a smart society were discussed. Initially, the focus was entirely on the cost reductions and efficiencies that AI applications would bring. So far, everyone was happy.

      At one point in the debate, however, someone in the audience asked whether we shouldn’t evaluate AI