Waiting For The Revolution In Physics

As I indicated in The String Theory Calamity, I am extremely skeptical that string theory will eventually turn out to have been the road to unification of gravitation and quantum theory (as well as deeply cynical about the sociology of the field).  The reasons for that skepticism are in part the “technical” reasons concerning the development of the theory, which are discussed very knowledgeably in two books, “The Trouble With Physics”, by Lee Smolin, and “Not Even Wrong”, by Peter Woit — two books that I recommend to anyone interested in understanding how high-energy theory came to this pass.

However, I have another set of reasons for being skeptical, which have not really been set forth anywhere else, so far as I am aware.  These are not so much technical, as historical/philosophical, and have to do with patterns in how crises in science tend to be resolved by the Kuhnian revolutions that they unleash.

Kuhn’s Crises and Revolutions

Thomas Kuhn’s “Structure Of Scientific Revolutions” set out a historical pattern in the development of modern science that is rather different from the impression, sometimes imparted by scientific textbooks, that scientific progress comprises the steady, cumulative advance of ideas and understanding, driven by methodical investigation. Rather, Kuhn’s now-famous thesis is that the steady, cumulative advance of “normal” science by “puzzle-solving” is often observed to be punctuated by “crises”, wherein the existing set of ideas in a field appears to lose explanatory power. These crises are typically resolved by a “scientific revolution”1, in which major, disruptive adjustments occur to the intellectual framework of the science in crisis. After the revolution, it is frequently the case that the post- and pre-crisis theories are “incommensurate”, and that there exists a translation problem in relating conceptions from the later state of the theory to analogous conceptions from the earlier theory.

People accustomed to thinking of scientific advance in terms of positive cumulation have sometimes found Kuhn’s thesis disquieting, as if, by focusing attention on the sociologically-contingent aspects of scientific advance it somehow calls into question the objective nature of scientific knowledge.  Kuhn himself has been read as both a subjectivist and as a realist on the issue of scientific truth, and in his later writings, he has seemed (to me) to vacillate on the issue, or at least to regard it as not terribly important compared to the sociological aspects of science that interested him.  However, to me, as a practicing scientist and a hard-line “realist” as far as scientific truth is concerned, Kuhn’s thesis on scientific crises and revolutions says something both profound and true about science, both personally experienced and as history.

The Crisis In High-Energy Physics

These sorts of reflections are required of those concerned about the state of high-energy particle theory nowadays, because there can be no sort of doubt that the field is in crisis. It is known (in fact, it is a commonplace) that the two main edifices of modern physics, quantum theory and general relativity, are not on speaking terms with each other. Both are “true” within their typical regimes of validity (short length scales and times, and weak gravitational fields, for quantum theory; astronomical length scales and times, and strong gravity fields, for GR). Nonetheless it is easy to make statements that are perfectly sensible in the context of either theory that are arrant nonsense in the context of the other, so wrong as to be “not even wrong”.  And when the theories are forced into close contact, by consideration of physical circumstances of intense gravity and short distances, there result absurdities such as the Big Bang and black-hole singularities2, failures of GR that quantum theory is powerless to address, despite its clear duty to do so.   We thus have two “correct” theories that don’t even speak the same language, and that haven’t learned to communicate meaningfully with each other in 85 years and counting. Yep, that would be a crisis alright.

The crisis is aggravated by the fact that we really know of no experimental or observational fact that contradicts either theory. If some experimental evidence were at hand that flatly contradicted some prediction of either (or both) theories, that would be promising. It would constitute a wedge that might be used to force open a gap, allowing us to peer through our haze of ideas and perceive what we’ve been missing. Unfortunately, we have a standard model of particle physics which, at the cost of about 20 free parameters, can explain pretty much all the particle experiments we know how to perform. And GR, with one free parameter (maybe two, if we take the cosmological constant seriously) has pretty much ascended from a mere theory to the status of an organizing principle.

The crisis can be seen to be driven entirely by inner contradiction, rather than by contradiction with data. This is interesting and significant, as a review of some previous scientific crises can help to illustrate.

Crises In Physics, And Their Resolutions

For the sake of organizing the discussion, it is helpful to make some finer distinctions about crises and revolutions in the physical sciences than Kuhn required for his general thesis about science. It appears that crises in science fall into one of two broad categories:

  • “Internal”, Theory-Driven Crises: these occur when, as with the Quantum/GR unification crisis, the problem manifests itself as an internal inconsistency in the theory, wherein a physical regime is recognized such that the theory simply breaks down or becomes incoherent;
  • “External”, Data-Driven Crises: these occur when new observations and experimental results flatly contradict existing theories, and the need to explain these new phenomena calls new theories into existence.

At the same time, it is possible to make a somewhat more subtle, but nonetheless clear distinction between two broad categories of “revolutionary” strategy that successfully resolve scientific crises:

  • The Conceptual Readjustment Strategy: this is the “conservative” strategy.  No new concepts are imported into the theory.  Instead, existing conceptions have their meaning altered and their mutual relations adjusted, and are attached to observational reality in new and surprising ways.
  • The Ontological Expansion Strategy: this is the “radical” strategy.  Here scientists are driven to introduce new structure into theory that has no counterpart in previous theory, in effect expanding the “concept count” of the theory.

It is no accident that I have chosen to schematize revolutions in the physical sciences in this way.  I want to show, by a (rapid, highly simplified) review of a few historical cases, that there is a clear correspondence between the kind of crisis that afflicts a science and the kind of strategy that is likely to cure the crisis and restore that science to its “normal” state of puzzle-solving.

“Internal” Crises (I): Special Relativity

Insofar as the taxonomy of crises is concerned, it appears to me that the overwhelming majority of crises of mature, developed sciences are of the “Internal” variety.  Examples abound.  Famously, the advent of special relativity was spurred by the intolerable inconsistencies between classical dynamics — with its well-recognized Galilean invariance — and the Maxwell theory of electromagnetism — whose internal symmetry, belatedly recognized after the revolution to be Poincaré invariance, is inconsistent with Galilean invariance.  This incompatibility manifested itself as a decades-long research program on the properties of the “luminiferous ether”, designed to bear the Galilean relativity wrongly ascribed to Maxwell theory (although this is a modern perspective, and it is doubtful that many researchers thought of the ether in these terms).3

Einstein’s solution strategy was “conservative”. The special theory of relativity is kind of like seeing classical, Newtonian dynamics through a funhouse mirror. In both theories there are conceptions of time and space, of momentum and energy. But they mean different things, are related differently to each other, and are in a very different relation to observation — in fact, a careful redefinition of what an observer actually does with these quantities was a crucial step in the formation of the new theory. In effect, Einstein re-purposed existing conceptions of classical dynamics to make them conform to the symmetries of Maxwellian dynamics. Note that “new” mathematical structures, such as Minkowski space-time and the flat metric were introduced later, for convenience, and in actual fact had latent counterparts in the old Newtonian dynamical system (these were of limited interest, but were implicit in the theory nonetheless).
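The clash of symmetries can be stated compactly. In standard textbook form (nothing specific to the historical papers), a boost with velocity v along the x-axis looks like this in the two schemes:

```latex
% Galilean boost (the symmetry of Newtonian dynamics):
x' = x - vt, \qquad t' = t
% Lorentz boost (the symmetry of Maxwell's equations):
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

The wave equation implicit in Maxwell’s theory, with its fixed propagation speed c, keeps its form under the second transformation but not the first, while classical dynamics had been built around the first. Einstein’s “conservative” move was to rebuild dynamics around the second, re-reading the familiar symbols x and t rather than adding new ones.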

“Internal” Crises (II): General Relativity

Another famous example from Einstein’s opus is of course General Relativity. Here the internal contradiction was the difficulty in assimilating Newtonian gravitation as a causal field force, analogous to Maxwell forces, within the new context of special relativity. There was no observational need for GR (indeed, it took many decades to accumulate a substantial body of observations that validated the theory to the satisfaction of most skeptics). It came about simply because it was not possible to give a coherent special-relativistic discussion of gravitation.

Again, the resolution strategy was a conservative one. The existing passive flat metric of special relativity was replaced by a non-trivial curved metric, which was now assigned a dynamical role in cooperation with the matter and radiation that rode on it. Gravity as a separate force was banished, and instead gravitation took up residence in the very fabric of space-time. In this sense, the resolution was an “Ontological Reduction”, since there were fewer moving parts in the newer theory than in the old one!

“Internal” Crises (III): Quantum Electrodynamics

A final example, possibly less well-known, I found in Silvan Schweber’s “QED, And The Men Who Made It”. This is the story of “Second Quantization”. Starting in the late 1920s, the new field of quantum mechanics, whose success at explaining properties of matter was immediate and spectacular, was applied with mixed success to the study of radiation. The situation was deeply unsettling: certain calculations, performed to leading order in perturbation theory, gave results in excellent agreement with experiment (Compton scattering is a good example). On the other hand, when those calculations were extended to higher order (and supposedly higher accuracy), a catastrophe ensued: the results were infinite. Infinities started to crop up everywhere in this new field of “Quantum Electrodynamics” (QED). It was deeply weird: lower-accuracy approximations furnished excellent predictions of experimental results, while higher-accuracy approximations gave nonsense. This was obviously an “internal” crisis, driven entirely by an inconsistency in the theory.

Schweber points out that the “Old Lions” of the quantum-mechanical revolution (Bohr, Heisenberg, Pauli, Dirac) were very pessimistic that the problems with QED could be resolved through technical reforms of existing theory. Influenced by the triumph of radical reform that they had brought about with the creation of Quantum Mechanics in the 1920s (see below), they looked for another such wrenching root-and-branch restructuring of physics as a means of escaping the crisis.

In point of fact, the resolution turned out to be a “conservative” technical reform, an absolutely classic case of conceptual readjustment. Schwinger, Tomonaga, and above all, Feynman (with some translation assistance from Dyson to mediate among their different formalisms and show that they were all the same despite different outward appearances) demonstrated that no new structure was required in QED. All that was necessary was to be more precise in how one attaches the concepts of “mass” and “charge” in the theory to actual measurements of electron “mass” and “charge”. The original theory had done this in a way that turned out to be naive. When the reformers exhibited how “mass” and “charge” could be defined in a self-consistent manner in the theory, they removed the problem of divergences once and for all. The new “Renormalized” QED was capable of producing spectacularly accurate predictions of all known quantum electrodynamic phenomena, and furnished the working model upon which the more sophisticated quantum field theories required by new particle discoveries could be based.
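The flavor of the fix can be caricatured in a line or two. This is a schematic, not the actual QED computation: loop corrections to, say, the electron’s mass diverge as the high-momentum cutoff Λ is removed, and renormalization absorbs the divergence into the definition of the measured mass.

```latex
% Bare mass plus a cutoff-dependent correction:
m_{\mathrm{phys}} = m_0 + \delta m(\Lambda), \qquad
\delta m(\Lambda) \sim \alpha\, m_0 \,
\ln\!\left(\frac{\Lambda}{m_0}\right)
\;\longrightarrow\; \infty
\quad \text{as } \Lambda \to \infty
```

The point of the reform was that predictions expressed in terms of the measured quantities m_phys and e_phys (rather than the unobservable bare ones) remain finite and cutoff-independent. No new field or particle had to be added; only the attachment of “mass” and “charge” to measurement was repaired.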

“External” Crises (I): Quantum Mechanics

Passing now to “external”, or data-driven crises, the most famous among such crises is, of course, the crisis that culminated in the rise of Quantum Mechanics in the 1920s. The story is well-known and well-loved folklore among physicists. Those who would like to learn a bit more about this wrenching change and the deep (and deeply distressing) crisis that made it necessary could do worse than perusing Kuhn’s “Black-Body Radiation and the Quantum Discontinuity”, or Max Jammer’s “The Conceptual Development of Quantum Mechanics”. Briefly, the advent of precision quantitative spectroscopy brought to the attention of physicists certain properties of radiation, and of the interaction of radiation with matter, that were extremely difficult to square with classical theory (both before and after Einstein reworked the field). The properties of black-body radiation were mysterious, the existence of spectral emission lines even more so, and in addition, certain properties of bulk specific heats were simply inexplicable on classical grounds.

By the early 1900s, with the introduction of quanta by Planck and their successful use by Einstein to explain various phenomena, the process of supplementing classical physics with ad-hoc new rules had begun. This process was too incoherent to be characterized as “theorizing”, since the new rules were spackled on to the old theory in a way that did not give rise to any kind of coherent logical structure. With the work of Bohr, Sommerfeld, Kramers, and others, the process of stating such supplementary rules became at least somewhat systematized, but their meaning, and their relation to classical theory remained inscrutable. Nobody would have taken such a hodge-podge seriously, except for the fact that the hodge-podge appeared capable of producing physical predictions where the classical theory was powerless.

Finally, with the work of Schrödinger, Heisenberg, Jordan, Pauli, and the Copenhagen School in the mid-1920s (some quarter-century following the advent of the crisis), a new, internally-coherent theoretical scheme was produced that encompassed both the classical theory and the “crazy” quantum rules. The cost of this scheme was the importation of a truckload of new structure into the theory — Hilbert spaces, state-space collapse, quantum phase effects, probabilistic dynamics, “complementarity”, etc. — that had no counterpart in the classical theory. This resolution constituted without question an ontological expansion: the “concept count” of the theory increased dramatically. But the new moving parts in the theory gave it unmatched explanatory power. We’ve been using quantum theory ever since, in physical contexts remote from the concerns of its inventors, and unimagined by them, with essentially unbroken success. If any of our physical theories about how the world is run is “true”, even provisionally, then that theory is quantum theory.

“External” Crises (II): The Copernican Revolution

The advent of quantum theory is a bit of an anomaly. It is not that easy to think of other examples where a mature, modern physical science has been driven to a crisis by the arrival of some new observational data. Usually it is pre-paradigmatic (Kuhn’s term for sciences in their “pre-modern”, somewhat chaotic state) sciences that enter the paradigmatic phase when the community of scientists in question comes to agreement on which observations matter and which are secondary, and start drilling down systematically into lab phenomena, producing bursts of interesting data. Once sciences enter that phase, and reside in it for a century or more, it is not so common for new data to show up that is so inexplicable as to upset the apple cart.

However, I can think of one other famous example that fits: the Copernican revolution. Ptolemaic astronomy had reigned for 1300 years by the time Copernicus undertook to reform it. There was a perfectly good reason for this, which is often overlooked by those who dismiss Ptolemy’s Almagest as simply wrong: it worked. Within the limits of observations of planetary motions (probably no better than a degree of arc or so), and on the short timescales accessible to individual humans, the Almagest gave a perfectly workable algorithm for predicting the positions of planets, certainly adequate for the very important task of casting horoscopes. However, by the 1500s, it was clear to astronomers that the Almagest needed reform. The long time baseline since it had been written was enough to show serious drifts of planets away from predicted positions, and Ptolemy’s scheme had been supplemented with a proliferation of ad-hoc devices (additional epicycles, trepidation, and the like) to fix up the appearances. The resulting scheme struck Copernicus as a “monster”, which was his stated motive for seeking an alternative.

The alternative that he found — Sun-centered, as opposed to the Almagest‘s Earth-centered astronomy — amounted to a bold reconstruction of the theory. It was in no sense a minor technical reform. The Earth was now a planet! Think of what a courageous act of imagination that was: planets were specks of light that subtly flitted and danced in regular patterns among the fixed stars. The Earth is the vast mess that we stand on, towards which everything falls, with oceans, and mountains, and earthquakes, and dirt. To say “the Earth is a planet” must have struck many people at that time as an absurdity, a “not even wrong” statement comparable to saying “the numeral Seven has the smell of lilacs” to a modern-day person.

Well, brave, yes, but so far not-so-radical. There were circles in the old theory and circles in the new theory; they were just centered in different places. But of course, the Copernican reform only began with Copernicus. There followed new, and unprecedentedly precise observations of planetary position by Tycho Brahe, which made clear that both the Ptolemaic “monster” and the new Copernican astronomy were quantitatively wrong (although the Copernican model at least had the virtue of greater simplicity). It was these new observations that finally drove a near-desperate Kepler to consider discarding circles in favor of ellipses, and the formulation of his “area law” and period law. And ultimately it fell to Newton to turn observational astronomy into a branch of dynamics, within which forces contended with momenta, where earlier all had been geometry. Here was the ontology expansion: area laws, period relations, and later force laws, none of which were commensurate with Ptolemy’s model.
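In modern notation (textbook forms, not Kepler’s or Newton’s own), the new structure amounted to statements like these:

```latex
% Kepler's area law: the line from Sun to planet sweeps equal areas
% in equal times
\frac{dA}{dt} = \text{const}
% Kepler's period law: one constant for every planet in the system
\frac{T^{2}}{a^{3}} = \text{const}
% Newton's inverse-square force, from which both laws follow
F = \frac{G M m}{r^{2}}
```

Swept areas per unit time, periods tied to orbit sizes, and forces acting between masses: none of these quantities had any referent in the Almagest’s geometry, which is the sense in which the ontology of astronomy expanded.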

Other “External” Crises?

I’m sure there are others, but I suspect that to find them we must look to subfields, or subfields of subfields, rather than to the sort of famous, dramatic, science-shaking historical events described above. I think that the rise of gauge field theory in response to the discovery of hordes of high-energy particles in the 1950s would qualify as an external crisis resolved by an ontology expansion. The BCS model of superconductivity may constitute another such case.

My own sub-sub-field of Gamma-Ray Bursts underwent a small revolution starting in 1997, when new observations from the BeppoSAX satellite revealed the existence of optical counterparts to what had, until then, been regarded as a gamma-ray-only phenomenon. The field was instantly transformed, and within months we were discussing physical models of shocks in outflows, which the community had not bothered with terribly much prior to that time. Again, new data produces “crisis” (opportunity, really; this was a field in a funk until then), and the resolution is new structure in the models.

What Have We Learned?

This forced-march through capsule history of science would probably enrage any serious historian of science for its superficiality and blithe judgments. I don’t offer it as professional-grade history, but rather (as a perhaps greater offense to historians) to abstract a principle of scientific sociology. Astute readers will have noticed a correlation in the above selection of case histories: “Internal” crises have invariably been resolved by the “conservative” strategy of conceptual readjustment. It is only “External”, data-driven crises that have successfully yielded to the strategy of ontological expansion.

On reflection, this is not such a surprising conclusion. On philosophical grounds, we would expect, in some loosely-defined sense, that concept-counts in theories should be commensurate with the bodies of phenomena explained by those theories. If a new set of phenomena intrudes upon a well-established science, we should not be surprised that we may be forced to accept some new structure in the theory to accommodate those phenomena. This is not to say that this should be a first resort, but all else failing, it is an acceptable strategy with a proven record of success in such circumstances.

On the other hand, if a successful theory (or cluster of theories) gets wedged internally despite its explanatory power, we should be very suspicious of efforts to “fix” the theory by adding structure. If the theory works, our historical experience teaches us that it is far more likely that the consistency problem is traceable to some naive assumption which everyone believes is “true” about the meaning of the theory or of the relations between its parts. Such naïveté is not to be cured by bolting new epicycles to the side of the model. It is necessary to ponder the existing theory more deeply.

But What Have We Learned About String Theory?

Considered in this light, string theory amounts to an effort to cure an “internal”, theory-driven crisis by adopting a strategy of ontology expansion. There is all manner of new structure in string theory that has no counterpart in normal quantum field theory: the strings and branes themselves, the compactification of extra dimensions, the role of “Anthropic Principles”, all of which are novelties proper only to string theory. My thesis, expanded upon at painful length above, is that this strategy is doomed. Ontology expansion of this sort is quite simply inappropriate for the kind of crisis that we confront in high-energy physics.

A person who is more optimistic about the prospects for string-theoretic success than I am might object to this view. How could an external, sociological argument about human activity possibly have any weight in deciding the correctness of a physical theory?4 In my view this objection, while serious, has a serious answer: Even if the correct new theory should happen to require new structure to get us out of the current difficulties, there is essentially no chance that we can guess at what that structure may be, without some guidance from experimental observation. The new structure is virtually certain to comprise “crazy ideas” — witness the wave-particle complementarity brought into being at the birth of quantum mechanics, for example. The trouble is that the space of “crazy ideas” is essentially infinite-dimensional. It’s too big for there to be any realistic possibility of leaping to the right crazy idea. Without the benefit of some experimental signpost, we’re far more likely to spin our wheels developing wrong crazy ideas, which is very likely what the past two decades of string theoretic work amounts to.

On the other hand, our current stock of “crazy ideas” that compose the existing cluster of theories has at least been suggested to us by actual experimental evidence, and is very well-validated in many regimes. To throw any of them out, start with a blank slate, and write theories incapable of those very same experimental validations is simply reckless. The duty of high-energy physicists is to re-consider the foundations that they’re building on, because it is extremely likely that something down there which we “know” is right is actually wrong. At a very minimum, until a rigorous “due-diligence” audit of those foundations has been carried out, we have no business importing new crazy ideas into the edifice of physics.

Not many people are interested in doing this sort of examination of the foundations of physics at the moment. Principally it’s the folks whose training is in General Relativity, rather than in particle physics, who are more favorably disposed to such work. Smolin’s book discusses some of the possibilities.

Roger Penrose stands out in this respect: his “Road To Reality”, while hilarious for its conceit that advanced mathematics can be brought to housewives and barflies, makes a startlingly powerful case that perhaps the unresolved measurement-theory issues surrounding the “collapse of the wavefunction”, which were really swept under the rug by means of the Copenhagen interpretation of quantum mechanics, have finally come back to bite us. I would love it if this were true — Einstein would richly deserve getting the last laugh at Bohr’s expense.

But perhaps the resolution is in a different part of the foundation. I don’t feel qualified to judge. I do feel sure, however, that there are facts known to all physicists that are part of standard theory today that will, someday, appear to us in a totally different light. We’ve probably already known all the parts of the answer for over forty years now. We just haven’t yet put them together correctly.


1 In lower-case, “scientific revolution” is a generic term, as distinct from the unique, capitalized “Scientific Revolution”, the historical event that ushered in modern science and whose periodization is often held to be bracketed between the discoveries of Copernicus and those of Newton.

2 These singularities are “failures” of GR in the same sense that singularities always signal a failure of a physical theory. Clearly, a prediction that all distances go to zero and all energy densities and temperatures diverge to infinity is unphysical. This is not to say that “Big Bang Cosmology” is a failure — quite the contrary, BB Cosmology is an incredible achievement, without which almost all modern extragalactic astronomical observations would be unintelligible. It is just the Big Bang itself that is silly.

3 A legitimate objection may be lodged here: what of the Michelson-Morley experiment? Wasn’t that a bit of “data” that contradicted existing theory, and doesn’t that make the advent of special relativity at least a bit of a “data-driven” revolution?

The weak answer is that Einstein, by his own testimony, knew nothing of the Michelson-Morley result. This is weak because it is too contingent, and in any event, other important participants in the revolution such as Lorentz and Fitzgerald were conscious of the result.

The better answer is that the category of “data-driven” crises comprises events wherein lots of new, diverse, and inexplicable data suddenly forces itself on the community e.g. continuum and line spectra from matter and from astronomy, or new experimental relations between currents, magnetic objects, and static electrification. The Michelson-Morley result was a single negative result, the absence of a predicted phenomenon, which is rather different from having a slew of new phenomena. The prediction of a positive result was due to the defects underlying the theory, which were therefore the deeper cause of the crisis.

4 Although note that the “Anthropic Principle”, the current lifeboat to which string theorists cling, is an argument of precisely this sort.



One Response to “Waiting For The Revolution In Physics”

  1. dannyburton Says:

    question: why not? (saying ‘just plain nuts’ don’t count lol)

    The human term ‘number’ and the concepts of a counting system are descriptions of difference between topologically whole areas. ‘Two fish’ describes two discrete entities within a set ‘fish’. What we call number theory is the detailed analysis of how areas of difference within topologically whole entities organise efficiently within that entity.

    The differences described, however, are not the result of human numbering; human numbering is a classification of already existing areas of difference within a given set. A number of fish existed, in an awful lot of discretely different ways, before the human number system. If we insist that the different areas only existed as areas of discrete difference after they were perceived to, we are what is commonly termed ‘creationist’.

    It is accepted that the universe (by definition) is a topologically whole entity. Physics is the analysis of the areas of discrete difference, and how they interact, combine and divide within the topologically whole universe. In physics these areas of difference, and the way they ‘organise’, are treated as the results of naturally-occurring phenomena. Physics has always used mathematical tools to analyse these ‘physical’ areas of difference, and many words have been written about the miraculous coincidence that the language of mathematics is so well suited to do such analyses.

    the relationship between the ‘naturally-occurring areas of discrete difference in the topologically whole universe’ and ‘human numbering system, number theory and mathematics’ is the equivalent of the relationship between ‘the naturally-occurring force between masses’ and what we call ‘the theory of gravity’.

    relationship N->n
    equivalent to
    relationship G->g

    where the capital letter represents a natural phenomenon and the lower-case represents the human analysis of the natural phenomenon.

    The implications are that the naturally-occurring processes that we call ‘number theory’ will result in the naturally-occurring processes that we call ‘quantum mechanics’ and further to all other naturally occurring processes that we eventually call ‘physics’.

    One final line – if the universe IS a topologically whole entity, and everything within that universe is composed of various fractions of the whole: then inflation is in fact division and subdivision. The expansion is in the ‘numbers’ ie the discretely different areas within the whole.

    it is not a set of sets, which is then a set of set of sets… the set of sets is absolute by definition and any introduction of further sets merely shows subdivision of the original.

    It is eminently testable as it predicts that ‘number theory’ and ‘quantum mechanics’ will become increasingly converged (ok, all areas of physics… but I say quantum mechanics because it’s at the narrow end of the decreasing complexity).

    the prediction is: more and more ‘coincidences’ such as the riemann-zeta function will be ‘discovered’ at the LHC and other high-energy early-universe particle experiments. (In fact anywhere naturally-occurring topological wholes are being subdivided over time, when analysed mathematically, should show evidence of ‘strange’ similarities between each other, whether it’s in physics, biology or any other field).

    still with me?

