Scientific discovery refers to the development of new theories. Discovery is the first step toward realizing the aim of science. The problem of scientific discovery for contemporary pragmatist philosophers of science is to describe and proceduralize the development of universally quantified statements that can be proposed for empirical testing and, when the test outcomes are nonfalsifying, serve as laws for use in explanations and test designs.
Much has already been said in the above discussions of philosophy of scientific language in Chapter 3 about the pragmatic basis for the definition of theory language, about the semantic basis for the individuation of theories, and about state descriptions. Those discussions will be assumed in the following comments about the mechanized development of new theories.
4.11 Discovery Systems
The discovery system produces a transition from an input-language state description containing currently available information to an output-language state description containing the generated and tested new theories.
In the “Introduction” to his Models of Discovery, Herbert Simon, 1978 Nobel laureate and one of the founders of artificial intelligence, writes that dense mists of romanticism and downright know-nothingness have always surrounded the subject of scientific discovery and creativity. Therefore the most significant development addressing the problem of scientific discovery has been the relatively recent emergence of mechanized discovery systems in computational philosophy of science.
The ultimate aim of the computational philosopher of science is to facilitate the advancement of contemporary sciences by participating in and contributing to the successful basic-research work of the scientist. The contemporary pragmatist philosophy of science thus carries forward John Dewey’s emphasis on participation. But few academic philosophers have the requisite computer skills, much less a working knowledge of any empirical science, to participate in basic research.
Every useful discovery system to date has contained procedures both for constructional theory creation and for critical theory evaluation, the latter providing quality control and restricting the size of the system’s otherwise unmanageably large output. Theory creation introduces new language into the current state description to produce a new state description, while falsification eliminates language from the current state description to produce a new one. Both theory development and theory testing thus enable a discovery system to offer a specific and productive diachronic procedure for linguistic change in empirical science.
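To make the division of labor between the two procedures concrete, the following is a minimal sketch and not a reconstruction of any actual discovery system. The candidate theories (simple power laws), the hypothetical functions generate_candidates and not_falsified, and the error tolerance are all illustrative assumptions: the constructional routine adds candidate statements to the state description, and the critical routine eliminates those that fail the empirical test.

```python
# Minimal generate-and-test sketch (illustrative only, not any actual system).
# Candidate "theories" are hypothetical power laws y = c * x^k fitted to data.

def generate_candidates(xs, ys, exponents=(-2, -1, 1, 2, 3)):
    """Constructional step: propose power-law theories fitted to the data."""
    candidates = []
    for k in exponents:
        c = sum(y / x**k for x, y in zip(xs, ys)) / len(xs)  # average coefficient
        candidates.append((c, k))
    return candidates

def not_falsified(theory, xs, ys, tol=0.05):
    """Critical step: retain a theory only if every prediction is within tol."""
    c, k = theory
    return all(abs(c * x**k - y) <= tol * abs(y) for x, y in zip(xs, ys))

# Toy test-design data in which y is approximately proportional to x squared.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 7.9, 18.2, 31.8]
survivors = [t for t in generate_candidates(xs, ys) if not_falsified(t, xs, ys)]
print(survivors)  # only the candidate with exponent 2 should survive the test
```

In an actual system the constructional routine is far richer, but the separation of theory creation from critical evaluation is the same.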
The discovery systems do not merely implement an inductivist strategy of searching for repetitions of individual instances, notwithstanding that statistical inference is employed in some system designs. The system designs are mechanized procedural strategies that search for patterns in the input information. Thus they implement Hanson’s thesis in Patterns of Discovery that in a growing research discipline inquiry seeks the discovery of new patterns in data. They also implement Feyerabend’s “plea for hedonism” in Criticism and the Growth of Knowledge to produce a proliferation of theories. But while many are made, few are chosen due to the empirical testing routines in the systems.
4.12 Types of Theory Development
In his Introduction to Metascience (1976) Hickey distinguishes three types of theory development, which he calls extension, elaboration and revision.
Theory extension is the use of a currently tested and nonfalsified explanation to address a new scientific problem. The extension could be as simple as adding hypothetical statements to make a general explanation more specific for the problem at hand.
A more complex strategy for theory extension is analogy. In his Computational Philosophy of Science (1988) Thagard describes a strategy for mechanized theory development that consists in patterning a proposed solution to a new problem by analogy with an existing explanation for a different subject. Using this strategy his discovery system PI (an acronym for “Process of Induction”) reconstructed the development of the theory of sound waves by analogy with the description of water waves. The system was his Ph.D. dissertation.
In Mental Leaps: Analogy in Creative Thought (1995), written with Keith Holyoak, Thagard further explains that analogy is a kind of nondeductive logic, which he calls “analogic”. It involves firstly the “source analogue”, the known domain that the investigator already understands in terms of familiar patterns, and secondly the “target analogue”, the unfamiliar domain that the investigator is trying to understand. Analogic is the strategy whereby the investigator comes to understand the target domain by seeing it in terms of the source domain. Analogic involves a “mental leap”, because the two analogues may initially seem unrelated; and it is called a “leap”, because analogic is not conclusive in the way that deductive logic is.
It may be noted that if the output state description generated by an analogy system such as PI is radically different from anything previously seen by the scientific profession containing the target analogue, then the members of that profession may experience the communication constraint to the high degree usually associated with a theory revision. The communication constraint is discussed below (Section 4.26).
Theory elaboration is the correction of a currently falsified theory to create a new theory by adding new factors or variables that correct the falsified universally quantified statements and the erroneous predictions of the old theory. The new theory has the same test design as the old one. The correction is not merely an ad hoc exclusion of individual exceptional cases, but rather a change in the universally quantified statements. This process is often misrepresented as “saving” a falsified theory, but in fact it creates a new one.
For example, the introduction of a variable for the volume quantity and the development of a constant coefficient for the particular gas could elaborate Gay-Lussac’s law for gases into the combined gas law that unites Gay-Lussac’s law, Boyle’s law and Charles’ law. Similarly, Friedman’s macroeconomic quantity theory might be elaborated into a Keynesian liquidity-preference function by the introduction of an interest rate, in order to account for the cyclicality manifest in an annual time series describing the calculated velocity parameter and to display the liquidity-trap phenomenon.
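Written out as a hedged sketch, with k standing for the constant appropriate to the quantity of the particular gas, the gas-law elaboration looks as follows:

```latex
% Gay-Lussac's pressure law, with the volume held fixed:
\frac{P}{T} = k_V \qquad (V\ \text{constant})

% Elaboration: introduce the volume V and a coefficient k for the
% quantity of the particular gas, yielding the combined gas law:
P V = k T

% Boyle's law and Charles' law then follow as special cases:
P V = \text{constant}\ (T\ \text{fixed}), \qquad
\frac{V}{T} = \text{constant}\ (P\ \text{fixed})
```

The falsified universally quantified statement (pressure proportional to temperature regardless of volume) is thereby replaced by a corrected statement containing the added variable, in line with the definition of elaboration above.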
Pat Langley’s BACON discovery system implements theory elaboration. It is named after the English philosopher Francis Bacon (1561-1626), who thought that scientific discovery can be routinized. BACON is a set of successive and increasingly sophisticated discovery systems that make quantitative laws and theories from input measurements. Langley designed and implemented BACON in 1979 as his Ph.D. dissertation, written in the Carnegie-Mellon department of psychology under the direction of Simon. A description of the system is given in Scientific Discovery: Computational Explorations of the Creative Processes (1987) by Langley, Simon, Bradshaw and Zytkow.
BACON uses Simon’s heuristic-search design strategy, which may be construed as a sequential application of theory elaboration. Given sets of observation measurements for two or more variables, BACON searches for functional relations among the variables. BACON has simulated the discovery of several historically significant empirical laws including Boyle’s law of gases, Kepler’s third planetary law, Galileo’s law of motion of objects on inclined planes, and Ohm’s law of electrical current.
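The following is a minimal sketch of a BACON-style heuristic search, not Langley’s implementation; the representation of terms as exponent vectors, the tolerance, and the small Kepler data set are illustrative assumptions. The sketch repeatedly defines ratio terms for pairs of terms that increase together and product terms for pairs that vary oppositely, stopping when some defined term is nearly constant across the observations.

```python
# A BACON-style heuristic search (illustrative sketch, not Langley's code).
# A term is a tuple of exponents over the observed variables, e.g. (1, 0) = D.
from itertools import combinations
from math import prod

def nearly_constant(vals, tol=0.02):
    mean = sum(vals) / len(vals)
    return all(abs(v - mean) <= tol * abs(mean) for v in vals)

def direction(xs, ys):
    """+1 if ys increase with xs, -1 if they vary oppositely, 0 otherwise."""
    ordered = [y for _, y in sorted(zip(xs, ys))]
    if all(a < b for a, b in zip(ordered, ordered[1:])):
        return 1
    if all(a > b for a, b in zip(ordered, ordered[1:])):
        return -1
    return 0

def evaluate(term, data):
    """Evaluate an exponent tuple at every observation in the data table."""
    cols = list(data.values())
    return [prod(col[i] ** e for col, e in zip(cols, term))
            for i in range(len(cols[0]))]

def bacon(data, max_rounds=6, tol=0.02):
    """data: dict of variable name -> list of measurements (same length)."""
    names = list(data)
    # Start with one term per observed variable: D is (1, 0), P is (0, 1).
    terms = [tuple(int(i == j) for j in range(len(names))) for i in range(len(names))]
    for _ in range(max_rounds):
        for t in terms:                      # critical step: look for an invariant
            if nearly_constant(evaluate(t, data), tol):
                return " * ".join(f"{n}^{e}" for n, e in zip(names, t) if e)
        new = []                             # constructional step: define new terms
        for a, b in combinations(terms, 2):
            d = direction(evaluate(a, data), evaluate(b, data))
            if d == 0:
                continue
            # increase together: try the ratio; vary oppositely: try the product
            t = tuple(eb - ea if d > 0 else eb + ea for ea, eb in zip(a, b))
            if any(t) and t not in terms and t not in new:
                new.append(t)
        terms += new
    return None

# Kepler's third law from orbital data for Venus, Earth and Mars
# (distance D in astronomical units, period P in years):
data = {"D": [0.723, 1.000, 1.524], "P": [0.615, 1.000, 1.881]}
print(bacon(data))   # prints a term equivalent to D^3 / P^2 (or its reciprocal)
```

Run on the distance and period data for three planets, the search terminates at a constant term equivalent to D^3/P^2, which is Kepler’s third law; the successive BACON versions described by Langley add further heuristics beyond this simple pair of rules.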
Theory revision is a reorganization of currently existing information to create a new theory. In his Origins of Modern Science, 1300-1800 Herbert Butterfield wrote that in both celestial and terrestrial physics the historic scientific revolution was brought about not by new observations or by additional evidence, but by transpositions that took place inside the minds of the scientists (p. 1). The results of theory revision may be radically different from the current theory, so revision might be undertaken only after repeated attempts at both theory extension and theory elaboration have failed to correct a previously falsified theory. The source for the input state description for mechanized theory revision consists of the descriptive vocabulary from the currently untested theories addressing the problem at hand. The descriptive vocabulary from previously falsified theories may also be included as input to make a cumulative state description, because the vocabularies in rejected theories can be productively cannibalized for their scrap value. The new theory is most likely to be called revolutionary if the revision