
a physics degree from a top college without once being asked a question about the scientific method in a physics course. (Though it may come up if you dabble a bit more in the philosophy of science or similar courses.)

      Turns out, there’s no single scientific method that all scientists follow. Scientists don’t look at a list and think, “Well, I’ve observed my phenomenon for the day. Time to formulate my hypothesis.” Instead, science is a dynamic activity that involves a continuous, active analysis of the world. It’s an interplay between the world we observe and the world we conceptualize. Science is a translation between observations, experimental evidence, and the hypotheses and theoretical frameworks that are built to explain and expand on those observations.

      Still, the basic ideas of the scientific method do tend to hold. They aren’t so much hard-and-fast rules as guiding principles that can be combined in different ways, depending on what’s being studied.

      One of the best situations for a scientist to be in is to observe a pattern or trend in phenomena, and then use it to make a precise prediction about some other phenomenon that hasn’t yet been observed. The prediction becomes the basis for a new experiment or observation; if the result matches the prediction, that’s an excellent reason to think the line of reasoning behind it was on the right track.

      The ideas of the scientific method are often traced back to Sir Francis Bacon’s 1620 book, Novum Organum, and to Galileo Galilei’s works in the 1630s. Broadly speaking, the main idea is that reductionism and inductive reasoning can be used to arrive at fundamental truths about the causes of natural events, which can then be compared with experience and experiment. This was quite a revolutionary idea because, at that time, your best bet for convincing anybody of your ideas would have been to argue that they matched Aristotle’s theories, written 2,000 years earlier!

      In the Baconian model, the scientist breaks natural phenomena down into component parts that are then compared to other components based on common themes. These reduced categories are then analyzed using principles of inductive reasoning. Inductive reasoning is a logical system of analysis in which you start with specific true statements and, by finding commonalities among those observed truths, work toward generalized laws that apply to all situations.

      The need for experimental falsifiability

      Traditionally, the idea has been that an experiment can either confirm or refute a theory. An experimental result that supports the theory counts as positive evidence, while a result that contradicts the theory counts as negative evidence.

      In the 20th century, a notion arose that the key to a theory — the thing that makes it scientific — is whether it can in some way be shown to be false. This principle of falsifiability can be controversial when applied to string theory, which explores energy levels that can’t at present (and possibly can’t ever) be directly probed experimentally. Some claim that because string theory currently fails the test of falsifiability, it’s somehow not “real science.” (Check out Chapter 18 for more on this idea.)

      The focus on falsifiability traces back to philosopher Karl Popper’s 1934 book, The Logic of Scientific Discovery. He was opposed to the reductionist and inductive methods that Francis Bacon had popularized three centuries earlier. In a time characterized by the rise of modern physics, it appeared that the old rules no longer applied.

      Popper reasoned that the principles of physics arose not merely from piling up little chunks of information, but from creating theories that were tested and repeatedly failed to be proved false. Observation alone could not have led to these insights, he claimed; the theories had to be put in positions where they could have been proven false. In its most extreme form, this emphasis on falsifiability says that scientific theories don’t tell you anything definite about the world, but are only the best guesses about the future based on past experience.

      For example, if you predict that the sun will rise every morning, you can test your prediction by looking out the window every morning for 50 days. If the sun is there every day, you have not proved that the sun will be there on the 51st day. After you actually observe it on the 51st day, you’ll know that your prediction worked out again, but you haven’t proved anything about the 52nd day, the 53rd day, and so on.

      No matter how good a scientific prediction is, if you can run a test that shows it’s false, you have to throw out the idea (or at least modify your theory to explain the new data). This is what led the 19th-century biologist Thomas Henry Huxley to define the great tragedy of science as “the slaying of a beautiful hypothesis by an ugly fact.”

      To Popper, falsifiability was far from tragic; it was instead the brilliance of science. The defining feature of a scientific theory, the thing that separates it from mere speculation, is that it makes a falsifiable claim.

      It’s also worth noting that mathematics as a whole doesn’t really follow this idea of falsifiability. Instead, mathematics starts from more or less reasonable assumptions (axioms) and, through a series of deductions, arrives at theorems that establish precise relations between mathematical objects. Although math itself is not falsifiable, it’s undeniably useful for understanding nature, and we’d be hard-pressed to do without it when building a house or flying a plane.

      Popper’s claim is sometimes controversial, especially when it’s being used by one scientist (or philosopher) to discredit an entire field of science. Many still believe that reductionism and inductive reasoning can lead to the creation of meaningful theoretical frameworks that represent reality as it is, even if there’s no claim that’s expressly falsifiable. The central element of this belief is the idea of confirmation — direct positive evidence for a theory, rather than just a lack of negative evidence against it.

      String theorist Leonard Susskind and physicist Lee Smolin amicably clashed over exactly this point online in 2004 (you can view the debate at www.edge.org/3rd_culture/smolin_susskind04/smolin_susskind.html). To support the idea of confirmation, Susskind lists several theories that were once denounced as unfalsifiable: behaviorism in psychology, as well as quark models and inflationary theory in physics. Scientists initially believed that key aspects of these theories couldn’t be tested, though methods were later developed that made testing possible.

      There’s a difference between being unable to falsify a theory in practice and being unable to falsify it in principle.

      It may seem as if the debate over confirmation and falsifiability is academic. That’s probably true, but some critics frame the debate over string theory as a battle over the very meaning of physics. Many string theory critics believe that the theory is inherently unfalsifiable, while some string theorists believe a mechanism to test (and potentially falsify) the predictions of string theory may someday be found.

      The foundation of theory is mathematics

      Galileo famously wrote that the universe is a book, and the language in which it is written is mathematics. Since his time, we have developed more and more powerful mathematical models that represent the underlying physical