was ‘the effect of its past and the cause of its future’. If an intellect were powerful enough to calculate every effect of every cause, then ‘nothing would be uncertain and the future just like the past would be present before its eyes’. By mathematically showing that there was no need in the astronomical world even for Newton’s Nudge God to intervene to keep the solar system stable, Laplace took away that skyhook. ‘I had no need of that hypothesis,’ he told Napoleon.
The certainty of Laplace’s determinism eventually crumbled in the twentieth century under assault from two directions – quantum mechanics and chaos theory. At the subatomic level, the world turned out to be very far from Newtonian, with uncertainty built into the very fabric of matter. Even at the astronomical scale, Henri Poincaré discovered that some arrangements of heavenly bodies resulted in perpetual instability. And the meteorologist Edward Lorenz realised that exquisite sensitivity to initial conditions made weather systems inherently unpredictable; he famously asked, in the title of a 1972 lecture: ‘Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?’
But here’s the thing. These assaults on determinism came from below, not above; from within, not without. If anything they made the world a still more Lucretian place. The impossibility of forecasting the position of an electron, or the weather a year ahead, made the world proof against the confidence of prognosticators and experts and planners.
The puddle that fits its pothole
Briefly in the late twentieth century, some astronomers bought into a new skyhook called the ‘anthropic principle’. In various forms, this argued that the conditions of the universe, and the particular values of certain parameters, seemed ideally suited to the emergence of life. In other words, if things had been just a little bit different, then stable suns, watery worlds and polymerised carbon would not be possible, so life could never get started. This stroke of cosmic luck implied that we lived in some kind of privileged universe uncannily suitable for us, and this was somehow spooky and cool.
Certainly, there do seem to be some remarkably fortuitous features of our own universe without which life would be impossible. If the cosmological constant were any larger, the pressure of antigravity would be greater and the universe would have blown itself to smithereens long before galaxies, stars and planets could have evolved. Electrical and nuclear forces are just the right strength for carbon to be one of the most common elements, and carbon is vital to life because of its capacity to form multiple bonds. Molecular bonds are just the right strength to be stable but breakable at the sort of temperatures found at the typical distance of a planet from a star: any weaker and the universe would be too hot for chemistry, any stronger and it would be too cold.
True, but to anybody outside a small clique of cosmologists who had spent too long with their telescopes, the idea of the anthropic principle was either banal or barmy, depending on how seriously you take it. It so obviously confuses cause and effect. Life adapted to the laws of physics, not vice versa. In a world where water is liquid, carbon can polymerise and solar systems last for billions of years, then life emerged as a carbon-based system with water-soluble proteins in fluid-filled cells. In a different world, a different kind of life might emerge, if it could. As David Waltham puts it in his book Lucky Planet, ‘It is all but inevitable that we occupy a favoured location, one of the rare neighbourhoods where by-laws allow the emergence of intelligent life.’ No anthropic principle needed.
Waltham himself goes on to make the argument that the earth may be rare or even unique because of the string of ridiculous coincidences required to produce a planet with a stable temperature and liquid water on it for four billion years. The moon was a particular stroke of luck, having been formed by an interplanetary collision and having then withdrawn slowly into space as a result of the earth’s tides (it is now ten times as far away as when it first formed). Had the moon been a tiny bit bigger or smaller, and the earth’s day a tiny bit longer or shorter after the collision, then we would have had an unstable axis and a tendency to periodic life-destroying climate catastrophes that would have precluded the emergence of intelligent life. God might claim credit for this lunar coincidence, but Gaia – James Lovelock’s theory that life itself controls the climate – cannot. So we may be extraordinarily lucky and vanishingly rare. But that does not make us special: we would not be here if it had not worked out so far.
Leave the last word on the anthropic principle to Douglas Adams: ‘Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in – an interesting hole I find myself in – fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, may have been made to have me in it!”’
Thinking for ourselves
It is no accident that political and economic enlightenment came in the wake of Newton and his followers. As David Bodanis argues in Passionate Minds, his joint biography of Voltaire and his mistress Émilie du Châtelet, people were inspired by Newton’s example to question traditions around them that had apparently been accepted since time immemorial. ‘Authority no longer had to come from what you were told by a priest or a royal official, and the whole establishment of the established church or the state behind them. It could come, dangerously, from small, portable books – and even from ideas you came to yourself.’
Gradually, by reading Lucretius and by experiment and thought, the Enlightenment embraced the idea that you could explain astronomy, biology and society without recourse to intelligent design. Nicolaus Copernicus, Galileo Galilei, Baruch Spinoza and Isaac Newton made their tentative steps away from top-down thinking and into the bottom-up world. Then, with gathering excitement, Locke and Montesquieu, Voltaire and Diderot, Hume and Smith, Franklin and Jefferson, Darwin and Wallace, would commit similar heresies against design. Natural explanations displaced supernatural ones. The emergent world emerged.
O miserable minds of men! O hearts that cannot see!
Beset by such great dangers and in such obscurity
You spend your lot of life! Don’t you know it’s plain
That all your nature yelps for is a body free from pain,
And, to enjoy pleasure, a mind removed from fear and care?
Lucretius, De Rerum Natura, Book 2, lines 1–5
Soon a far more subversive thought evolved from the followers of Lucretius and Newton. What if morality itself was not handed down from the Judeo-Christian God as a prescription, nor even the imitation of a Platonic ideal, but was a spontaneous thing produced by social interaction among people seeking ways to get along? In 1689, John Locke argued for religious tolerance – though not for atheists or Catholics – and brought a storm of protest down upon his head from those who saw government enforcement of religious orthodoxy as the only thing that prevented society from descending into chaos. But the idea of spontaneous morality did not die out, and some time later David Hume and then Adam Smith began to dust it off and show it to the world: morality as a spontaneous phenomenon. Hume realised that it was good for society if people were nice to each other, so he thought that rational calculation, rather than moral instruction, lay behind social cohesion. Smith went one step further, and suggested that morality emerged unbidden and unplanned from a peculiar feature of human nature: sympathy.
Quite how a shy, awkward, unmarried professor from Kirkcaldy who lived with his mother and ended his life as a customs inspector came to have such piercing insights into human nature is one of history’s great mysteries. But Adam Smith was lucky in his friends. Being taught by the brilliant Irish lecturer Francis Hutcheson, talking regularly with David Hume, and reading Denis Diderot’s new Encyclopédie, with its relentless interest in bottom–up explanations, gave him plenty with which to get started. At Balliol College, Oxford, he found the lecturers ‘had altogether given up even the pretence