If you flip a hundred coins and fifty-five of them come up heads, you haven’t seen anything out of the ordinary. But if you flip a hundred thousand coins, and fifty-five thousand of them come up heads, you have plenty of reason to think you have ten thousand two-headed coins, give or take a few hundred. Just by looking at the sides that turned up, you wouldn’t know which coins were the two-headed ones, but you could reasonably assume that ten thousand is about the right number.
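The arithmetic behind this estimate can be sketched in a few lines. With k two-headed coins among n, the expected number of heads is k + (n − k)/2, so an observed excess of heads pins down k; the uncertainty comes from the binomial spread of the remaining fair coins.

```python
import math

def estimate_two_headed(n_coins, n_heads):
    """Infer how many coins are two-headed from an excess of heads.

    With k two-headed coins among n, the expected head count is
    k + (n - k)/2, so k = 2*heads - n. The uncertainty is the binomial
    standard deviation of the remaining fair coins, propagated to k.
    """
    k = 2 * n_heads - n_coins
    sigma_k = 2 * math.sqrt((n_coins - k) * 0.25)
    return k, sigma_k

k_small, s_small = estimate_two_headed(100, 55)          # 10, give or take ~9
k_large, s_large = estimate_two_headed(100_000, 55_000)  # 10000, give or take ~300
print(k_small, round(s_small), k_large, round(s_large))
```

With a hundred coins the estimate is consistent with zero two-headed coins; with a hundred thousand it is dozens of standard deviations away from zero, which is the whole point.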
The same kind of reasoning can be applied to sets of experiments that have many more than two possible outcomes like heads or tails. If some outcomes occur significantly more often than what you’d expect for specified conditions, it’s quite reasonable to think that those specified conditions aren’t the actual ones you’re dealing with. If some alternative conditions would in fact make the same otherwise-unlikely outcomes practically certain, those alternatives might well be the actual conditions—especially if the outcomes would only be likely under those alternative conditions and no others.
The existence of a recently-discovered type of subatomic particle whose characteristics match those of a long-hypothesized particle known as the Higgs boson[Wikipedia; Wikipedia] was demonstrated in just this way, through experiments done primarily at CERN, the European high-energy physics laboratory, by collaborations of physicists from many countries around the world and sponsored by several organizations, including the U.S. Department of Energy. The actual outcomes of recent experiments with colliding high-energy subatomic particles would be very unlikely if Higgs bosons don’t exist, but would be very likely if they do. But the existence of Higgs bosons would have a significance beyond itself, because it would imply the correctness of a particular theory of the laws governing physical processes at the most elementary known level—that of the subatomic particles like quarks, gluons, photons, electrons, and others that make up every physical object in the universe.
The last few centuries of experiments indicate that all the different kinds of physical influence, by which any one thing affects another, are apparently the result of just a few basic kinds of interaction among the subatomic particles that constitute the things.
Two of these kinds seem at first to be very different. One, electromagnetism, involves the electric and magnetic interactions among both static and moving electric charges. These electric and magnetic forces become weaker the further the charges are from one another, but they never disappear completely. A second type of interaction is quite evident when the particles involved get within an attometer[Wikipedia] or so of each other, but whenever the particles move much further apart than that, this interaction suddenly becomes practically nonexistent. When this second interaction does affect the particles, it not only influences their motion, but often changes one species of particle into another—for example, when hydrogen nuclei (protons) in the sun fuse together to produce helium nuclei (in which some of the protons’ u quarks[Wikipedia] have been changed into d quarks,[Wikipedia] making the protons into neutrons) and thus release energy (some of which, incidentally, becomes sunshine). This second interaction is simply known as the “weak nuclear interaction”, or even more simply as the “weak interaction”.
Once people began experimenting a few centuries ago with charging and magnetizing various objects and observing their behavior under various conditions, they gradually worked out the laws of electromagnetism, which were eventually expressed in a clear mathematical form by James Clerk Maxwell[Wikipedia] in the mid-19th century. One immediate implication of Maxwell’s equations was that a changing electric force field, such as what you’d get by charging or discharging a capacitor, would automatically be accompanied by a magnetic force field that could affect the orientation of nearby magnets. It was already known that changing magnetic fields were accompanied by electric fields—this is the principle behind electric generators. The similar association of magnetic force fields with changing electric fields had not been observed before, but this phenomenon was soon verified.
Maxwell’s equations had a second, surprising implication as a kind of byproduct of the first. If changes in either an electric or a magnetic field are always accompanied by another field of the other type, then if a field of either type is varied in such a way that the other accompanying field varies in a similar fashion, the other field will itself be accompanied by an additional field of the first type—which will be accompanied by yet another field of the other type, and so on and so on. Furthermore, such variations in an electromagnetic field will rapidly propagate outward from their source as waves which, as Maxwell’s equations show, travel through space at the speed of light. As shown by further experiments, these waves actually are light. When Heinrich Hertz[Wikipedia] produced waves of clearly electromagnetic origin in the late 19th century and proved they had the known properties of light, he finally confirmed a major feature of Maxwell’s theory, long after many other predictions from the equations had been verified by other experiments and practical devices.
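The wave speed Maxwell's equations predict follows from two measurable constants of the vacuum, the electric permittivity ε₀ and the magnetic permeability μ₀: the waves travel at 1/√(ε₀μ₀). A quick check with modern values of those constants shows how this number matches the measured speed of light:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, farads per meter
MU0 = 1.25663706212e-6   # vacuum permeability, henries per meter

# Maxwell's equations give the wave speed as 1/sqrt(eps0 * mu0).
wave_speed = 1.0 / math.sqrt(EPS0 * MU0)
print(f"{wave_speed:.3e} m/s")  # ≈ 2.998e8 m/s, the measured speed of light
```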
The understanding of weak interactions took a somewhat similar path from experiment to theory and back. Phenomena eventually recognized as weak-interaction processes were discovered in the late 19th century, though detailed understanding of these interactions came through 20th- and 21st-century experiments with colliding subatomic particles; in many of the experiments accelerators were used to smash the particles into each other so they’d be close enough together for weak interactions to affect them. The theory developed to account for what was learned turned out to include, as Maxwell’s theory did, predictions of new phenomena, such as weak-interaction fields mathematically similar to electromagnetic fields, and weak interactions that could change one type of subatomic particle to another without also changing the particle’s electric charge the way weak interactions had previously been known to do.
But perhaps the most interesting finding was that the simplest theory comprehensive enough to account for the then-known facts about weak interactions was automatically a theory of weak and electromagnetic interactions. In a manner somewhat like, and somewhat different from, the way Maxwell’s theory connects electric and magnetic forces, the “electroweak” theory implies a connection, through a mechanism proposed by Peter Higgs and other physicists in 1964,[Wikipedia] between electromagnetic interactions and weak nuclear interactions—specifically between photons, the energy quanta[Wikipedia] of waves in the electromagnetic field, and the corresponding quanta of weak-interaction fields, known as W and Z bosons.[Wikipedia] A byproduct of this theory is its implication that a previously unobserved kind of particle should exist—the Higgs boson.
As mentioned above, the electroweak theory was partially confirmed through the discovery of phenomena other than Higgs bosons, just as many implications of Maxwell’s electromagnetic equations were proven accurate long before Hertz’ experiments on electromagnetic waves. But the characteristics of Higgs bosons are not as well defined by electroweak theory as the properties of light are by Maxwell’s equations. The simplest version of electroweak theory implies that there is only one kind of Higgs particle, but we should still see many of the same weak-interaction phenomena if more than one kind of Higgs particle exists. And whether there is only one kind or many, electroweak theory tells us very little about how massive any Higgs particles are—if Higgs particles have one mass, certain phenomena should occur, but if they have a different mass, experiments should produce some different phenomena.
So even with the discovery that particles do exist whose behavior is at least consistent with that of Higgs bosons, and with the discovery’s implications about the accuracy of electroweak theory, there’s still plenty to learn about the nature of these particles, as the following reports on recent DoE-sponsored research indicate.
Pushpalatha C. Bhat, from Fermi National Accelerator Laboratory (Fermilab), is the author of an October 2012 report about the findings of one group’s experiment at CERN, entitled “Observations of a Higgs-like boson in CMS at the LHC”[SciTech Connect]—the “LHC” referring to CERN’s highest-energy particle accelerator and the “CMS” referring to the experimental device by which the group collects data about the results of the accelerated particles’ collisions. The report describes particular reactions by which Higgs bosons should be produced from the energy of proton collisions, and the most common ways that the Higgs bosons so produced should decay to form other particles that the CMS particle-detecting device can identify—processes whose rates of occurrence depend on how massive Higgs bosons are, if they exist; these rates partially determine just how often different sets of particles should show up in the CMS detector. The report describes in detail what the CMS experimenters looked for, what they found, and what their findings tell us about the mass and intrinsic angular momentum, or spin[Wikipedia], of the newly-discovered boson: its mass is most likely around 125.3 billion electron-volts (125.3 GeV)[Wikipedia], or more than 133 times the mass of the protons found in ordinary atoms, while its spin is different from the one-unit spin characteristic of such particles as photons (the energy quanta of light waves). The simplest theory of electroweak interactions suggests that the Higgs boson’s spin is zero units; if the new discovery is the Higgs boson, it would be the first known noncomposite particle[Wikipedia] to have no intrinsic spin whatsoever.
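The "more than 133 times" figure is simply the ratio of the reported mass to the proton's rest mass-energy:

```python
higgs_mass_gev = 125.3       # reported mass of the new boson, in GeV
proton_mass_gev = 0.938272   # proton rest mass-energy, in GeV

ratio = higgs_mass_gev / proton_mass_gev
print(round(ratio, 1))  # ≈ 133.5 proton masses
```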
CERN’s announcement of the new particle discovery by the CMS and ATLAS groups was made just before an already-scheduled workshop of “Higgs hunters”, at which the discovery was of course a major topic of interest. Michael E. Peskin of SLAC National Accelerator Laboratory gave a theoretical summary lecture[SciTech Connect] addressing whether the discovery was indeed the Higgs particle, how that could be determined by its behavior, what suspected deviations from the simplest version of electroweak-interaction theory might show up in further experiments, and what might be learned about the new particle using accelerators that can produce it in greater numbers.
The statement above that the mass of the new particle is around 125.3 GeV doesn’t just reflect a limitation of measuring instruments. Any type of particle that decays into others actually has an intrinsically indefinite mass, best described by a distribution of possible masses that has a given width (or standard deviation) around an average. The width of the distribution is greater for particles that decay more quickly, less for those that decay more slowly, and theoretically zero (meaning that the mass is definite, always the same) for particles that never decay at all. As expected for Higgs bosons, the new particle discovered at CERN decays into others—indeed, the high rate of decay-product generation was the evidence for the new particle’s existence—and as with any particle, the time it typically takes to decay corresponds to the inherent width of its mass distribution.
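That correspondence is quantitative: the width Γ of the mass distribution and the mean decay lifetime τ are related by Γ·τ = ħ, the reduced Planck constant. As a rough sketch (the roughly-4-MeV width used below is the simplest electroweak theory's predicted value for a 125-GeV Higgs boson, quoted here only for illustration):

```python
HBAR_GEV_S = 6.582e-25  # reduced Planck constant, in GeV·seconds

def lifetime_from_width(width_gev):
    """Mean lifetime implied by a mass-distribution width, via Γ·τ = ħ."""
    return HBAR_GEV_S / width_gev

# Illustrative value: roughly 4 MeV (0.004 GeV), the simplest electroweak
# theory's predicted width for a 125-GeV Higgs boson.
tau = lifetime_from_width(0.004)
print(f"{tau:.1e} s")  # on the order of 1e-22 seconds
```

The narrower the mass distribution, the longer the particle lives; a particle that never decayed at all would have a perfectly definite mass.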
Experimental devices like the CMS at CERN can distinguish the masses of different particles only if the masses differ by more than the devices’ mass-resolution limit, at least if the devices are used in the obvious way as designed. Thus it would appear impossible to determine the width of the new particle’s mass distribution if the width turns out to be smaller than this resolution limit. A way around this limitation, using experimental data in a less obvious way, is described in the SLAC publication “Bounding the Higgs Boson Width Through Interferometry”.[SciTech Connect] In the authors’ method, two types of colliding-proton processes are considered: those producing Higgs bosons that decay into a given set of particles, and those producing the same final set of particles without producing a Higgs boson first. The authors discuss how calculations that involve the probabilities of both types of process can be used to determine maximum bounds on the width of the Higgs boson’s mass distribution—perhaps even the actual width itself.
The “interferometry” in the report’s title refers to a key feature of the calculation. The different ways to produce the final particle sets, both those that involve Higgs bosons and those that don’t, each have a probability of occurring that can be calculated from standard electroweak theory. If these probabilities were independent, the probability that a given final set of particles results from any one of the processes would just be the sum of the individual processes’ probabilities. But the Higgs-mediated and non-Higgs-mediated processes are interdependent in a way commonly observed for quantum-physical systems. The mass distribution of Higgs bosons turns out to be related to the probabilities of the different particle-production processes through quantities known as probability amplitudes, which have directions in an abstract space that can either align with each other (implying a greater overall particle-production rate than independent probabilities would imply), or oppose and interfere with each other (implying a lesser rate). The Higgs boson width can thus be at least partially determined through considerations of amplitude interference, whence the report title.
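A toy calculation shows the distinction: amplitudes for the same final state add as complex numbers before being squared, so the combined rate can be larger or smaller than the sum of the separate rates. The magnitudes and phases below are invented for illustration, not physical values.

```python
import cmath

# Two hypothetical probability amplitudes for the same final state:
# one via a Higgs boson, one without (illustrative numbers only).
a_higgs = 0.3 * cmath.exp(1j * 0.0)
a_background = 0.4 * cmath.exp(1j * cmath.pi)  # opposing phase

# If the processes were independent, the probabilities would just add.
independent_rate = abs(a_higgs) ** 2 + abs(a_background) ** 2

# Quantum-mechanically, the amplitudes add first, then get squared.
interfering_rate = abs(a_higgs + a_background) ** 2

print(round(independent_rate, 6))  # 0.25: sum of separate probabilities
print(round(interfering_rate, 6))  # 0.01: destructive interference lowers the rate
```

With aligned phases the interfering rate would instead exceed the independent sum, which is why the interference pattern carries extra information.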
As mentioned above, electroweak-interaction theory has several possible variants, including some that involve more than one Higgs boson. Determining which, if any, of these theories matches reality requires that we observe more features of reality, which we can see through appropriate experiments or natural occurrences. Arguments for a particular variant of electroweak-interaction theory are given in the report “Chiral U(1) flavor models and flavored Higgs doublets: the top FB asymmetry and the W jj”.[SciTech Connect] The authors note that standard electroweak theory suggests that when top and antitop quarks[Wikipedia] are produced by Fermilab’s collisions of protons and antiprotons, both types of quark should fly out of the collision in roughly equal numbers forwards and backwards (i.e., along the directions of both the proton beam and the antiproton beam). Actually, significantly more top quarks come out aligned mostly with the proton beam and antitop quarks with the antiproton beam, contrary to the standard theory. In another deviation from standard theory, one Fermilab experiment has found more collisions than expected that produce pairs of particle jets[Wikipedia] together with a W particle. The authors propose an extension to standard electroweak theory that would explain both excesses. Their extension implies the existence of additional Higgs bosons, a new analog of the Z particle, and particles that might be the dark matter[Science Showcase] whose existence is implied by astrophysical phenomena. The report points out how the theory might be tested at the LHC and at Fermilab’s Tevatron accelerator.
The basic electroweak theory of electromagnetic and weak nuclear interactions, combined with the theory of strong nuclear interactions (which hold protons and neutrons together in atomic nuclei despite the protons’ mutual electrostatic repulsion), are together known as the “Standard Model” of particle interactions. Despite being “Standard”, this theory doesn’t describe everything about strong and electroweak interactions, and completely omits the other known basic interaction: gravity. The theory of gravity corroborated in most detail by experiment is Einstein’s general theory of relativity, which relates gravitational effects to the curvature of the space-time continuum. However, general relativity theory lacks any reference to quantum-physical properties of the kind associated with probability amplitudes. The Standard Model, on the other hand, doesn’t describe the probability amplitudes for strong and electroweak interactions as affecting or being affected by spacetime curvature. So there is much longstanding interest in the formulation and testing of extensions to the Standard Model and general relativity that incorporate experimentally-confirmed features of both.
One attempt to formulate a single correct and coherent theory of all the known basic interactions hypothesizes a particular kind of symmetry, called supersymmetry,[Wikipedia] between the two major classes of subatomic particle: bosons, which are characterized by integer units of intrinsic spin and a tendency to collect in the same physical state, and fermions, which are characterized by half-odd-integer units of spin and an inability to exist with any other fermion in the same physical state. A particular form of supersymmetry even implies a mechanism for gravity that would produce the same phenomena general relativity theory describes under conditions where general relativity has been proven accurate. Thus a supersymmetric theory of particle interactions would, if it is accurate, correctly account for every known type of interaction.
Supersymmetry between bosons and fermions would associate a particular fermion with every boson and a particular boson with every fermion, with the intrinsic spins of each pair’s boson and fermion differing by one half unit, but with other properties of the pair being the same. The subatomic particles that we already know about can’t be paired up in this way, so if our universe’s set of particles is at all supersymmetric, the symmetry must be approximate, and the counterparts to every known particle should be more massive than the known ones—which means there should be at least twice as many elementary particles as the number we already know about. A minimal modification of the Standard Model that incorporates supersymmetry indicates that the supersymmetric counterparts’ masses should be just low enough that the LHC might be able to produce them from the energy of its proton collisions. Exactly how the collisions might produce them, if they exist at all, depends partly on the masses of known particles—including the new particle recently found with the LHC. The details of a minimally supersymmetric modification of the Standard Model that includes a Higgs boson with the same mass as the newly-discovered particle are explored in the report “The Higgs Sector and Fine-Tuning in the pMSSM”[SciTech Connect] – the “pMSSM” standing for “phenomenological Minimally Supersymmetric Standard Model”.
While CERN’s LHC, or Large Hadron Collider, is obviously a suitable instrument for discovering the existence of the new particle that at least looks like a Higgs boson, it may not be ideal for precise determination of all of its properties. Like every device, it does some things well and other things not so well. Because it is a machine that collides hadrons (particles made of quarks and/or gluons[Wikipedia]), the particle interactions it produces actually take place between some particular quark or gluon within one colliding hadron and another particular quark or gluon within the other colliding hadron; the hadrons’ other quarks and gluons are mainly clutter. Within each colliding hadron, which quark or gluon is involved in the interaction is a random occurrence not determined by the experimenters. Any given hadron collision may involve two quarks of the same type, two quarks of different types, different possible combinations of some type of quark and some type of gluon, or two gluons of the same or different types. The reactions’ input particles are thus not among the variables like beam energy and intensity that experimenters can control. Figuring out what happens after the hadron collisions requires one to account for the random variations in the interacting quark or gluon types.
The situation is quite different in a lepton collider. Leptons, like individual quarks and gluons, don’t seem to have an internal structure, at least none that has been discerned in experiments to date, so smashing two beams of leptons together produces reactions that involve the same initial particles every time instead of different particles each time, which greatly simplifies the reaction analysis.
Michael Peskin, the author of the aforementioned theoretical summary lecture for the 2012 Higgs hunting workshop,[SciTech Connect] ended his summary with a discussion of what might be learned about the new particle from new or upgraded accelerators. He carries that discussion further in another report,[SciTech Connect] by describing how well certain identifying properties of the particle would be measured by the LHC and by a long-proposed lepton collider, the International Linear Collider (ILC). He provisionally concludes, in the absence of more detailed information about LHC experiments, that the LHC should be able to demonstrate whether the newly-discovered particle’s properties are close to those of the standard electroweak theory, but would not be able to determine whether it’s a Higgs boson of the standard theory or one described by one of that theory’s variants. For that purpose, Peskin concludes that further experiments at the ILC would be needed.
Aside from accelerating leptons instead of hadrons, the International Linear Collider would differ from the LHC in one significant feature. Being a linear instead of a circular collider, its beams would intersect in just one place, with the beams’ particles having one chance only to collide and produce Higgs bosons or any other kind of particle. Since the beams in circular colliders circulate, particles that don’t collide at the first beam crossing have many chances to collide. Yet circular colliders have a disadvantage of their own, since their circulating charged-particle beams have electric fields that circulate with the particles. Since these electric fields circulate, they change in such a way that changing magnetic fields accompany them, which themselves are accompanied by changing electric fields, and so on, with the result that the circulating particle beams constantly generate electromagnetic waves, which constantly radiate away large amounts of energy. Circulating-beam accelerators thus have to constantly replenish the energy of their beam particles to maintain the particles’ collision energy. While the beam particles in linear accelerators also have electric fields that change during the particles’ forward acceleration, the waves generated by charged particles accelerating forward are much less energetic than the waves generated by charged particles revolving in a circle, so beams in linear accelerators radiate away less energy than similar beams in circular accelerators.
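The scale of the circular-machine penalty can be estimated from the textbook synchrotron-radiation formula for ultra-relativistic electrons: the energy lost per turn is about 8.85×10⁻⁵ · E⁴/ρ, with E the beam energy in GeV and ρ the bending radius in meters. The numbers below are illustrative, roughly those of the LEP electron-positron collider that once occupied the LHC tunnel:

```python
def sync_loss_per_turn_gev(beam_energy_gev, bending_radius_m):
    """Energy an electron radiates per turn in a circular machine.

    Textbook approximation for ultra-relativistic electrons:
    dE [GeV] ≈ 8.85e-5 * E^4 [GeV^4] / rho [m].
    """
    return 8.85e-5 * beam_energy_gev**4 / bending_radius_m

# Illustrative, roughly LEP-like numbers (assumed here for the example):
# 104.5 GeV per beam, ~3026 m bending radius.
loss = sync_loss_per_turn_gev(104.5, 3026.0)
print(f"{loss:.1f} GeV per turn")  # a few GeV radiated away on every lap
```

The fourth-power dependence on beam energy is why circular electron machines become so costly to run at high energies, and why linear designs like the ILC are attractive for leptons.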
Whether circular or linear, accelerators designed especially to produce a particular kind of particle for study are known as “factories”; accordingly, three recent SLAC reports have the term “Higgs factory” in their titles. One of these reports,[SciTech Connect] an analysis from a workshop of the International Committee for Future Accelerators (ICFA), considers a wider range of accelerator types for Higgs boson studies, describing some advantages and disadvantages of each, but without settling on a specific recommendation “which is only possible with further input from the physics side” since “… it is expected that more data from the LHC will further clarify what kind of Higgs factory (or factories) will be needed” (page 4). The accelerators analyzed include upgrades of the proton-colliding LHC, linear positron-electron colliders like the ILC, circular positron-electron colliders, colliders of muons (a type of lepton more massive than electrons), and photon colliders.
A specific proposal for a positron-electron collider, “A Feasibility Study of an Ring Collider for Higgs Factory”,[SciTech Connect] provides an interesting example of how the given parameters of a design problem can dictate much of the shape of the solution. The proposal is to use the underground tunnel that houses the LHC for a positron-electron collider that can produce the newly-discovered particle in large quantities to precisely measure its properties. This seems feasible, since the energy required to produce the particle is about 120 billion electron-volts per electron or positron, which is only about 15% higher than the energy achieved with the positron-electron accelerator that used to be in the tunnel before the LHC was built. Considerations of the beam intensity required to produce the new particles in large numbers, the energy radiated away by the circulating beams that has to be replenished, and the scattering of electrons and positrons out of the beams when the beams cross paths all help to determine an accelerator design that would produce sufficiently intense and long-lasting lepton beams to generate enough of the new particles for more precise measurement of their properties than the present LHC would permit.
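The quoted 15% can be checked directly; the 104.5 GeV figure for the earlier machine's top per-particle energy is an assumed value used here for illustration:

```python
required_gev = 120.0   # per-particle energy needed to produce the new boson
previous_gev = 104.5   # approximate top energy of the earlier collider (assumed)

increase = (required_gev / previous_gev - 1) * 100
print(f"about {increase:.0f}% higher")  # about 15% higher
```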
An accelerator type not analyzed in the ICFA workshop’s report is discussed in “A Beam Driven Plasma-Wakefield Linear Collider: From Higgs Factory to Multi-TeV”[SciTech Connect]. Whereas most large particle accelerators today are designed to accelerate particle beams in a vacuum by means of electromagnetic waves traveling through hollow electromagnetic waveguides,[Wikipedia] a plasma-wakefield accelerator accelerates the beams using waves in a plasma. The technique is much newer and has yet to accelerate particles to the high energies of the largest vacuum-waveguide accelerators, but is of great interest because the accelerating waves have much stronger electromagnetic fields, meaning they can produce large accelerations in a much shorter distance, with less power consumption, and lower cost. The report describes an accelerator that would be constructed in stages to reach higher and higher collision energies, from the 250 billion electron-volts (250 GeV) of a Higgs boson factory through the 500-GeV initial energy and 1-TeV (one-trillion electron-volt) planned upgrade energy of the International Linear Collider to a 3-TeV energy. The chosen parameters represent an attempt to find the best plasma-wakefield design while benefiting from the last two decades’ extensive research and development on vacuum-waveguide accelerators. The highest collision energies go well beyond what would be needed for detailed measurement of the particle recently found at CERN, but would make possible the discovery of other phenomena that would occur only at those higher energies.
Confirming the existence of Higgs bosons amounts to finding one piece of a larger, more significant puzzle.
The evidence gathered throughout the history of experimental science shows us only a few basic interactions among a few kinds of subatomic particle underlying every physical phenomenon. Gravity was the first to be figured out in some detail, from experiments with falling objects on earth and observations of how the moon, planets, comets, and other objects move in outer space. Next was electromagnetism, whose laws were worked out starting from fairly simple experiments that involved statically charging and discharging various objects, magnetizing and demagnetizing pieces of metal, sending electric currents through wires and other things, and observing how all these affected each other. The weak and strong nuclear forces were not as easy to figure out, since these interactions have discernible effects only over short ranges (attometers and femtometers,[Wikipedia] respectively); experiments that manifested the laws of their behavior involved smashing subatomic particles into each other through natural or artificial means.
The known laws of the different interactions do have some mathematical similarity, but they’re also different enough that each interaction plays a different set of roles in nature. Strong interactions not only hold the protons and neutrons of atomic nuclei together, but they bind quarks together to form the protons and neutrons themselves, as well as many other particles made of quarks; without the strong interaction, atoms, and matter as we know it, would not exist. Weak interactions are involved in changing different types of particle into other types. As mentioned above, a weak interaction that changes a u quark into a d quark (while also producing other particles) is one of the key steps in the production of starlight, including the sunshine that drives the weather and powers life on earth. Electromagnetic forces shape the outer layers of atoms and molecules by the way they bind electrons to atomic nuclei; this shaping in turn directs the course of all chemical reactions among atoms and molecules. Gravity’s many effects include holding the earth’s ocean and atmosphere to its surface, keeping the earth and other planets near the sun, and driving the fusion of atomic nuclei within the cores of the sun and other stars as the cores are squeezed by the weight of the stars’ outer layers.
We have found more and more practical ways to use these interactions as our knowledge of their governing laws has increased. How might we use our new information about Higgs bosons? Could we gain a new source of power, for example, as we did with electricity? Maybe. But it’s important to note that we’ve put the different interactions to work in different ways.
The similarities and differences between gravity and electromagnetism are somewhat striking. Maxwell’s equations for electromagnetism and Einstein’s equation for gravity can be expressed in very similar forms, with a few differences reflecting details such as the mutual gravitational attraction of masses and the mutual electrical repulsion of like charges. But their technical uses have been different. From electromagnetism we’ve gained a means of transmitting both energy and information. Gravity was of course put to work early on as a power source to drive machines, but its most advanced uses include knowing precisely how gravity affects the motion of satellites and space probes so we can get them where we want them to go, and accounting for the gravitational slowing of time so that clock signals from our space-based global positioning satellites are correctly interpreted and translated into locations.
We have also put weak and strong interactions to work in some measure, using nuclear reactions to power generators, process materials, and diagnose and treat illnesses. But the further uses we make of them will depend on what we discover about their nature and what we figure out they’re good for.
Prepared by Dr. William N. Watson, Physicist
DoE Office of Scientific and Technical Information
Last updated on Monday 23 September 2013