includes a bar that goes by the name of Cognitive Dissidents. I noticed this a few months ago when I was reading the book in bed, and it seemed apposite because I wanted to write a blog article about cognitive bias and cognitive dissonance (on which the bar's name is a play on words) as a form of cognitive bias. What is more, the fact that the bar's name had struck me so forcibly was, in fact, exactly that: a form of cognitive bias. In this case, it is an example of the frequency illusion, or the Baader-Meinhof effect, where a name or word that has recently come to your attention suddenly seems to be everywhere you turn. Like such words, cognitive biases are everywhere: there are more of them than we might expect, and they are more dangerous than we might initially realise.

      Wikipedia provides many examples of cognitive bias, and they may seem irrelevant to our quest. However, as we consider risk (which, as discussed in Chapter 1, is what we are trying to manage when we build trust relationships to other entities), we need a better understanding of our own cognitive biases and those of the people around us. We like to believe that we and they make decisions and recommendations rationally, but the study of cognitive bias provides ample evidence that:

       We generally do not.

       Even if we do, we should not expect those to whom we present them to consider them entirely rationally.

      There are opportunities for abuse here: techniques beloved of advertisers and the media for manipulating our thinking to their ends, which we could equally use to our advantage to try to manipulate others. One example is the framing effect. If you do not want your management to fund a new anti-virus product because you have other ideas for the currently earmarked funding, you might say:

       “Our current product is 80% effective!”

      Whereas if you do want them to fund it, you might say:

       “Our current product is 20% ineffective!”

      Five further examples of cognitive bias serve to show how risk calculations may be manipulated, either by presentation or just by changing the thought processes of those making the calculations:

       Irrational Escalation, or the Fallacy of Sunk Costs This is the tendency for people to keep throwing money or resources at a project, vendor, or product when it is clear that it is no longer worth it, with the rationale that to stop spending money (or resources) now would waste what has already been spent, despite the fact that those resources have already been consumed and cannot be recovered. This often comes over as misplaced pride, or as people not wishing to let go of a pet project because they have become attached to it, but it is really dangerous for security. If something clearly is not effective, it should be thrown out, rather than good money being thrown after bad.

       Normalcy Bias This is the refusal to address a risk because the event associated with it has never happened before. It is an interesting one when considering security and risk, for the simple reason that so many products and vendors are predicated on exactly that: protecting organisations from events that have so far not occurred. The appropriate response is to perform a thorough risk analysis and then put measures in place to deal with those risks that are truly high priority, not those that may not happen or that do not seem likely at first glance.

       Observer-Expectancy Effect This is when people who are looking for a particular result find it because they have (consciously or unconsciously) misused the data. It is common in situations such as those where there is a belief that a particular attack or threat is likely, and the data available (log files, for instance) are used in a way that confirms this expectation rather than analysed and presented in ways that are more neutral. Clearly, such manipulation of data will alter risk calculations, as it will alter the probability assigned to particular events, skewing the results.

       Misconceptions of Regression Regression to the mean suggests that if a particular sample is well above (or below) the mean, the next sample is likely to fall closer to it. This can lead to the misconception that punishing a bad outcome is effective, because it is followed by a (supposedly causal) improvement, while rewarding a good outcome is counterproductive, because it is followed by a (supposedly causal) deterioration. The failure to understand this effect tends to lead people to “overestimate the effectiveness of punishment and to underestimate the effectiveness of reward”; the short simulation following this list shows how easily the illusion arises. This feels like a particularly relevant piece of knowledge to take into a series of games like the Prisoner's Dilemma, as one is most likely to be rewarded for punishing others, yet most likely to be punished for rewarding them.39 More generally, in a system where trust is important and needs to be encouraged, trying to avoid this bias may be a core goal in the design of the system.

       Biases in the Evaluation of Conjunctive and Disjunctive Events People are bad at realising that the conjunction of many individually probable events may be improbable overall and, equivalently, that the disjunction of many individually improbable events (any one of several unlikely failures occurring, say) may be quite probable. This is important to us when we consider chains of trust, which we will examine in Chapter 3, “Trust Operations and Alternatives”: even when every link in a chain enjoys a high probability of trustworthiness, the chance that the chain is broken somewhere may in reality be fairly high, as the probability sketch following this list illustrates.
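
      To see how easily the regression illusion arises, here is a minimal sketch in Python (the mean of 100, spread of 15, and 100,000 trials are arbitrary illustrative choices, not figures from this chapter). Performance is modelled as pure noise around a fixed mean; below-average results are “punished” and above-average results are “rewarded”, and neither intervention has any effect at all.

import random

random.seed(42)

MEAN, SPREAD = 100.0, 15.0   # performance is pure noise around a fixed mean
TRIALS = 100_000             # arbitrary number of before/after pairs

improved_after_punishment = 0
worsened_after_reward = 0
punishments = rewards = 0

for _ in range(TRIALS):
    before = random.gauss(MEAN, SPREAD)
    after = random.gauss(MEAN, SPREAD)   # the "intervention" changes nothing
    if before < MEAN:                    # bad result, so we "punish"
        punishments += 1
        if after > before:
            improved_after_punishment += 1
    else:                                # good result, so we "reward"
        rewards += 1
        if after < before:
            worsened_after_reward += 1

print(f"Improvement followed punishment {improved_after_punishment / punishments:.0%} of the time")
print(f"Deterioration followed reward   {worsened_after_reward / rewards:.0%} of the time")

      Running this shows improvement following roughly three-quarters of punishments and deterioration following roughly three-quarters of rewards, even though the interventions do nothing: exactly the illusion described above.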

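      As an illustration of how quickly conjunctive probabilities fall away, the following minimal sketch (again in Python, assuming independent links and an illustrative per-link trustworthiness of 95%; neither assumption comes from the text) computes the chance that a chain of trust holds end to end for chains of different lengths.

# Chance that a chain of trust holds end to end, assuming each link is
# trustworthy with the same illustrative probability and that links fail
# independently of one another.
def chain_holds(per_link: float, links: int) -> float:
    """Probability that every link in the chain behaves as expected."""
    return per_link ** links

for links in (1, 5, 10, 20):
    p = chain_holds(0.95, links)
    print(f"{links:2d} links at 95% each: "
          f"chain intact {p:.1%}, broken somewhere {1 - p:.1%}")

      Even at 95% per link, a ten-link chain holds only around 60% of the time, and a twenty-link chain is more likely than not to be broken somewhere.
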
      How relevant is this to us? For any trust relationships where humans are involved as the trustors, it is immediately clear that we need to be careful. There are multiple ways in which the trustor may misunderstand what is really going on or simply be fooled by their cognitive biases. We have talked several times in this chapter about humans' continued problems with making rational choices in, for instance, game-theoretical situations. The same goes for economic or purchasing decisions and a wide variety of other spheres. An understanding of, or at least an awareness of, cognitive biases can go a long way towards helping humans to make more rational decisions. Sadly, while many of us involved with computing, IT security, and related fields would like to think of ourselves as fully rational and immune to cognitive biases, the truth is that we are as prone to them as all other humans, as noted in our examples of normalcy bias and the observer-expectancy effect. We need to remember that when we consider the systems we are designing to be trustors and trustees, our unconscious biases are bound to come into play: a typical example is that we tend to assume that a system we design ourselves will be more secure than a system that someone else designs.