learning computer.
We are very quick to understand categories and are able to grasp the relationship between words, objects, and actions almost immediately. You don’t believe me? Do you still think it’s only possible to effectively learn something by repetition and practice? Then allow me to give you a counterexample: How long did it take you to understand a newly coined word like “selfie”? A single experience of seeing four posing teenagers snapping a photo of themselves on a smartphone should have been enough. How quickly were you able to understand the invented word “Brexit”? You probably figured it out fairly quickly. We often understand the world at first glance, but there’s more. Once you’ve understood something, not only can you reproduce it, you can also make something new from it. If Brexit describes the exit of Great Britain from the European Union (EU), what would “Swexit,” “Spaxit,” or “Itaxit” indicate? Or from the opposite direction, what would “Bremain” or a “Breturn” mean? It’s a piece of cake for you to grasp all of the new words because you already understand the fundamental categories of thought. You are able to take these and immediately generate a new piece of knowledge, even if you’ve never heard of “Spaxit” before in your life!
So much for the topic of frequent repetition and “deep learning.” Merely memorizing a bunch of facts is no great art. Understanding them, on the other hand, is. In the future, computers might be able to “learn” about objects and pictures more quickly, but they will never be able to understand them. In order to learn, computers use very basic algorithms to analyze an enormous amount of data. Humans do the opposite. We save much less data but are therefore able to process exceedingly more. Knowing something does not mean having a lot of information. It rather means being able to grasp something with the information in hand. Deep learning is all well and good, but “deep understanding” is better. Computers do not understand what it is that they are recognizing. One interesting indication of this followed from an experiment conducted in 2015. Researchers studied artificial neural networks that had trained themselves to recognize objects (such as screwdrivers, school buses, or guitars). The network was analyzed to find out what, in fact, it had recognized. For example, what would a picture of a robin have to look like in order for the computer program to be able to respond with 100 percent certainty that it was indeed a “robin”? If anyone had expected that a perfect prototype image of a robin would pop out, a sort of “best of” from all the robin images in the world, they would have been disappointed. The resulting image was a total chaos of pixels.12 No human would be able to identify even a very rudimentary robin in such a pixelated mess. But the computer could, because it recognized the robin only as a graphic representation of pixels and did not understand that it was a living creature. If one taught a computer that Brexit refers to the exit of Great Britain from the EU, the computer would never be able to independently draw the conclusion that Swexit means the Swedes waving goodbye.
Our ability to learn extremely quickly, or rather, to understand things, is only possible because we do not “learn” facts and information separately, in a sterile and detached way, but instead create correlations between categories that embed things and thereby lead us to understand them. Computers do exactly the opposite. They are very good at saving data quickly, but they are just as dumb as they were thirty years ago. Only now, they are dumb a little faster. This is because they never take time to reflect on all of the data they have gathered. They don’t treat themselves to a break. Computers always work at full blast until they break down (or have their power switched off). But if you never take a break, you can never put the information that you possess to any use, and thus you cannot acquire any knowledge. In order to generate concepts, it is essential to have a stimulus-free space (such as during sleep). We are able to recognize something at first glance because we don’t allow ourselves to be flooded with facts and data but, instead, make ourselves take a break. This may initially seem inefficient, perhaps even like a weakness, but it is actually highly effective. In fact, this is the only way that we are able to comprehend the world instead of merely memorizing it.
Learning power reloaded
WE SHOULD THUS not treat the brain as though it were an information machine, since the most valuable learning processes of the future don’t call for us to have flawless memories (the next chapter touches on why this isn’t even possible), but rather to adjust rapidly to new situations. If we start competing with computers, trying to use learning tricks to memorize more facts, telephone numbers, and shopping lists, we are certainly going to lose. Maybe we should let algorithms take over these kinds of basic tasks for us.
Trying to develop the latest learning techniques in order to remember more information isn’t what’s important. It’s much more valuable to improve our ability to think conceptually and to understand. The brain is not a data storage device. It’s a knowledge organizer whose major talent can only be actualized once we stop treating it like an imbecile—the way that I did with you at the beginning of the chapter. Sorry about that.
You’ve now learned the most important ingredients for improving conceptual thinking. Stress is only helpful to learning when it is positive, short term, and surprising. Long-term stress should be minimized by reinterpreting it. Students who are aware of what stress is, for example, have been shown to exhibit better coping techniques in response to stress and are thus less prone to tense up while learning.13
When are we best able to learn? When we are excited about it, of course. Facts are not that important. It’s the feelings that stick with us, and it’s best if those feelings are positive. Positive feelings should thus be conveyed at school, at university, or in work environments by teachers, lecturers, and team leaders to promote the best learning. This is much more crucial than the factual content that is taught. My best teacher (the chemistry teacher whom I mentioned in the introduction) didn’t keep a stockpile of modern PowerPoint presentations on hand, but he was very enthusiastic about his discipline. And when someone is so passionate about the citric acid cycle, there has to be something to it. That’s why I decided to study biochemistry: not because I found the factual content so captivating (that came later), but because I was entranced by his excitement for the topic. It is only when something impacts us emotionally that we never forget it—even if it is a form of positive stress.
Learning is all well and good—but understanding is better. And in order to understand, we need a context. Even small children are able to comprehend the world at an incredible pace if given examples and concrete applications to figure out the “why” of things. This happens, not by dumping data and facts on their heads, but by allowing them to construct meaningful correlations for themselves. If you want students to learn new vocabulary, you could give them a list of words. Or you could encourage the children to come up with a personal story that incorporates the new words. Children will quickly gain the individual context that they need to remember the words. I have long forgotten every word I’ve ever been given on a vocabulary list. But there are other words that I have only heard once, such as when I lived in California, that I immediately used and adopted into my vocabulary. At the same time, we should avoid giving in to the temptation of trying to compete with computer software and artificial intelligence. When it comes to speed, accuracy, and efficiency, we are definitely going to lose every time. It is much more valuable to remember our human weakness, er, I mean strength. Namely, that when we take regular breaks, we are able, sometimes even at first glance, to happily absorb even useless knowledge. Yes, of course it is important that we make good use of computer science and modern technological media in school, as we are going to need these skills to function in the future world. But we shouldn’t attempt to think like an algorithm. Subjects such as history, natural science, languages, and philosophy, together with a good, well-rounded general education, are what empower us to establish ideas and conceptual correlations. If you are digging through Shakespeare’s masterpieces and come across the line, “To be, or not to be? That is the question,” you could choose to learn it by heart, copy and paste it as a cool meme on Facebook, or save it onto your flash drive.
The latter takes up 42 bytes but means nothing. Alternatively, you could go to the theatre and enjoy watching Hamlet—and the phrase will suddenly take on meaning.
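As an aside, the 42-byte figure is easy to verify: in a plain-text encoding such as ASCII or UTF-8, each of these characters occupies exactly one byte, and the count comes to 42 when the line’s final punctuation mark is included. A quick sketch in Python:

```python
# The quoted line as plain text, including its closing comma.
quote = "To be, or not to be? That is the question,"

# In ASCII/UTF-8, every character in this line is one byte,
# so the byte length equals the character count.
size_in_bytes = len(quote.encode("utf-8"))
print(size_in_bytes)  # → 42
```

The point stands either way: whether stored as 42 bytes on a flash drive or as a meme, the string carries no meaning by itself.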
In this chapter, we have been able to unpack how to build up much more effective categories of thought. By taking regular small breaks, for instance, one is able to produce a thought