In particular, Pearl’s graph theory terminology corresponds almost exactly to the belief system terminology introduced in the previous sections. This shows that, although cognitive maps are considered belief systems rather than graphs, it is possible to think of them as DAGs. Table 8 gives an overview. Figure 10 visualizes these elements. The upper part is a cognitive map, the lower part is a DAG.
Figure 9. Structure of a directed acyclical graph.
Table 8: Compatibility of DAGs and Cognitive Maps
Cycles and Self-Loops
In spite of these similarities, one feature of DAGs does not necessarily correspond to cognitive maps in particular, or to the nature of reasoning processes more generally: the absence of cycles and self-loops. Specifically, humans may reconsider certain beliefs before reaching a decision, and such reconsiderations may be represented as cycles or self-loops. Nevertheless, recall that all reasoning processes represented by cognitive maps end in decisions. They are therefore directed toward decisions, even when they contain cycles or self-loops. Cycles and self-loops in cognitive maps thus represent reconsiderations within reasoning processes that end in decisions; they do not change the decisions themselves. On this basis, it is possible to formalize cognitive maps into DAGs.29
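Before a cognitive map can be treated as a DAG, the presence of cycles and self-loops must be detected. The following Python sketch illustrates one standard way to do this with a depth-first search; the graph structure and belief labels are hypothetical illustrations, not taken from any actual cognitive map.

```python
def has_cycle(graph):
    """Return True if the directed graph contains a cycle or self-loop.

    graph: dict mapping each vertex to a list of successor vertices.
    """
    visiting, done = set(), set()

    def dfs(v):
        visiting.add(v)                  # vertex is on the current DFS path
        for w in graph.get(v, []):
            if w in visiting:            # back edge: a cycle (or self-loop)
                return True
            if w not in done and dfs(w):
                return True
        visiting.discard(v)
        done.add(v)                      # vertex fully explored, no cycle via it
        return False

    return any(dfs(v) for v in graph if v not in done)

# A reasoning process with a reconsideration loop between beliefs b1 and b2,
# and an acyclic version of the same process (labels are made up):
looping = {"b1": ["b2"], "b2": ["b1", "decision"], "decision": []}
acyclic = {"b1": ["b2"], "b2": ["decision"], "decision": []}
print(has_cycle(looping))  # True
print(has_cycle(acyclic))  # False
```

A map that tests positive would then have its reconsideration loops collapsed before formalization, since, as noted above, the loops do not change the decision the process ends in.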
Figure 10. Compatibility of directed acyclical graphs and cognitive maps. (2 graphs)
Counterfactuals
Following Pearl, formalizing cognitive maps into DAGs allows the researcher to intervene on the actors’ belief systems and explore when they would not have made certain decisions. In Chapter 6, I use this approach to study worlds in which the individuals I interviewed for this research would not have decided to take up arms.
Studies exploring whether people would have behaved differently had the reality been different are called counterfactual studies. In political science, counterfactuals30 have been defined as “subjunctive conditionals in which the antecedent is known or supposed for purposes of argument to be wrong” (Brian Skyrms, quoted in Tetlock and Belkin 1996: 4).31 They are considered to offer a convenient tool to explore whether “things could have turned out differently” (7).
There is a general consensus among researchers from various fields that counterfactual analysis is “unavoidable” to explain phenomena that cannot be studied by controlled experiments that randomize the initial conditions (Tetlock and Belkin 1996: 6). There is, however, no consensus about how to engage in counterfactual analysis.32 Formalizing cognitive maps into DAGs provides a new approach to study counterfactuals.33 Specifically, it allows the researcher to intervene on the actors’ belief systems and test when they would have made different decisions had they held different beliefs. This bridges the gap between actors and structures by intervening on beliefs about the world, rather than on the world itself.
Modeling Change in the External World
External Interventions
To model change in the world, Pearl introduces external interventions. To illustrate this, Pearl draws on a simple DAG, shown below. This DAG represents relationships between the seasons of the year (A), the falling of rain (B), the sprinkler being turned on (C), the pavement being wet (D), and the pavement being slippery (E) (15). Specifically, the DAG shows a directed order from A to E in which the season influences the falling of rain (A → B) and the turning on of the sprinkler (A → C); the falling of rain and turning on of the sprinkler in turn influence the pavement being wet (B → D and C → D); and the pavement being wet in turn influences the pavement being slippery (D → E).
Figure 11. Example of a directed acyclical graph. Pearl 2000: 15.
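The directed order from A to E can be made concrete by encoding Pearl's example as an edge list and computing a topological ordering, i.e., an ordering in which every vertex appears before the vertices it influences. The encoding below is a sketch; the letter labels follow the text, and the function names are my own.

```python
from collections import defaultdict, deque

# A=season, B=rain, C=sprinkler, D=pavement wet, E=pavement slippery
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]

def topological_order(edges):
    """Return a topological ordering of the vertices (Kahn's algorithm)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    vertices = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        vertices |= {u, v}
    queue = deque(sorted(v for v in vertices if indeg[v] == 0))
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:          # removing u unlocks its successors
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

print(topological_order(edges))  # ['A', 'B', 'C', 'D', 'E']
```

That such an ordering exists at all is exactly what distinguishes a DAG from a directed graph with cycles.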
The directed order from A to E may be described as dependency (e.g., Spirtes 1995). It differs from other orders that do not address directed relationships. For example, consider flipping a coin multiple times: the result of one toss does not depend on the result of the previous toss.
Specifically, there are two types of dependency conditions: (1) conditional dependence between vertices connected by an edge, and (2) conditional independence between vertices that are not connected by an edge. For example, given three variables A, B, and C, one can say that A and B are conditionally independent given C if, once C is known, knowledge of A remains unchanged by knowing B. Formally, this can be expressed as a conditional probability statement: P(A|B, C) = P(A|C). On the other hand, one can say that A is conditionally dependent on B if knowing B influences knowledge of A. Formally, this can be expressed as a conditional probability statement: P(A|B) = P(A, B)/P(B).
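Both conditions can be checked numerically from a joint probability table. The Python sketch below constructs a hypothetical joint distribution over three binary variables in which A and B are conditionally independent given C by construction, and then verifies that P(A|B, C) = P(A|C) holds; all numbers are invented for illustration.

```python
from itertools import product

# Hypothetical distribution built so that P(a, b, c) = P(c) P(a|c) P(b|c),
# i.e., A and B are conditionally independent given C. Values are made up.
P_c = {0: 0.5, 1: 0.5}
P_a_c = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # P_a_c[c][a] = P(a|c)
P_b_c = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.1, 1: 0.9}}  # P_b_c[c][b] = P(b|c)

joint = {(a, b, c): P_c[c] * P_a_c[c][a] * P_b_c[c][b]
         for a, b, c in product([0, 1], repeat=3)}

def marginal(joint, keep):
    """Sum the joint over the positions of (a, b, c) not listed in `keep`."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

P_bc = marginal(joint, [1, 2])   # P(b, c)
P_ac = marginal(joint, [0, 2])   # P(a, c)
P_cm = marginal(joint, [2])      # P(c)

# P(A=1 | B=1, C=0) via P(X|Y) = P(X, Y) / P(Y) ...
lhs = joint[(1, 1, 0)] / P_bc[(1, 0)]
# ... equals P(A=1 | C=0): B adds nothing once C is known.
rhs = P_ac[(1, 0)] / P_cm[(0,)]
print(round(lhs, 6), round(rhs, 6))  # both 0.2
```

By contrast, comparing P(A|B) with P(A) in the same table would reveal dependence, because B carries information about C, which in turn carries information about A.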
The DAG above then illustrates the following condition of independence. Knowing that the pavement is wet (D) makes knowing that the pavement is slippery (E) independent of knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C). In short, knowledge of D establishes independence between E and A, B, C. On the other hand, knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C) does not make knowing the pavement is slippery (E) independent of knowing the pavement is wet (D). In short, E is conditionally dependent on D. This is the case because knowing the pavement is slippery (E) is directly dependent on knowing that the pavement is wet (D), but only indirectly dependent on knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C). In Pearl’s (2000: 21) vocabulary, the pavement’s being wet (D) “mediates” between the pavement’s being slippery (E) and whether it rains (B), the sprinkler is turned on (C), and the season (A).
Figure 12. Example of an intervention. Pearl 2000: 23.
Given these observations, Pearl models an external intervention in which the vertex representing knowledge about whether the sprinkler is on is defined as “SPRINKLER = ON.” This is visualized by Figure 12.
This figure shows that intervening on C so that it is known that the sprinkler is on makes it possible to consider the effect of “SPRINKLER = ON” without considering A → C. In the figure, this is shown by the deletion of the arrow between A and C. Formally, this can be expressed by a change in the probability distributions representing this DAG. The probability distribution of this DAG before the intervention (Figure 11) can be represented as
P(A, B, C, D, E) = P(A) P(B|A) P(C|A) P(D|B, C) P(E|D).
The probability distribution of this DAG after the intervention (Figure 12) lacks P(C|A) due to knowledge of C and can be represented as
P_{C=On}(A, B, D, E) = P(A) P(B|A) P(D|B, C=On) P(E|D).
The removal of A → C [P(C|A)] from the probability function is possible because knowing C (that the sprinkler is on) makes it unnecessary to consider whether A (the season) had an influence on C, as indicated by A → C. In Pearl’s words, “Once we physically turn the sprinkler on and keep it on, a new mechanism (in which the season has no say) determines the state of the sprinkler” (23).
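The two factorizations, before and after the intervention, can be computed directly. The Python sketch below fills the sprinkler DAG with hypothetical conditional probability tables (the numbers are invented, not Pearl's) and compares the probability that the pavement is slippery before and after forcing the sprinkler on.

```python
from itertools import product

# Hypothetical CPTs for A (season), B (rain), C (sprinkler), D (wet),
# E (slippery); all variables binary, all numbers assumed for illustration.
pA = [0.5, 0.5]
pB = [[0.8, 0.2], [0.25, 0.75]]   # pB[a][b] = P(B=b | A=a)
pC = [[0.2, 0.8], [0.9, 0.1]]     # pC[a][c] = P(C=c | A=a)
pD = {(0, 0): [0.95, 0.05], (0, 1): [0.1, 0.9],
      (1, 0): [0.1, 0.9], (1, 1): [0.01, 0.99]}  # pD[(b, c)][d]
pE = [[0.9, 0.1], [0.05, 0.95]]   # pE[d][e] = P(E=e | D=d)

def joint(a, b, c, d, e):
    """Pre-intervention factorization: P(A) P(B|A) P(C|A) P(D|B,C) P(E|D)."""
    return pA[a] * pB[a][b] * pC[a][c] * pD[(b, c)][d] * pE[d][e]

def joint_do_c_on(a, b, d, e):
    """Post-intervention factorization: P(C|A) is removed, C is fixed to on."""
    return pA[a] * pB[a][b] * pD[(b, 1)][d] * pE[d][e]

# P(E = slippery) before vs. after the intervention do(C = On):
p_slip = sum(joint(a, b, c, d, 1)
             for a, b, c, d in product([0, 1], repeat=4))
p_slip_do = sum(joint_do_c_on(a, b, d, 1)
                for a, b, d in product([0, 1], repeat=3))
print(round(p_slip, 4), round(p_slip_do, 4))  # 0.7349 0.9013 with these numbers
```

With these assumed numbers, keeping the sprinkler on raises the probability of a slippery pavement, while the season's influence on the sprinkler drops out of the calculation, exactly as the deleted arrow A → C indicates.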
Drawing on such interventions,