# In the OSTI Collections: Effective Field Theories

Article Acknowledgement: Dr. William N. Watson, Physicist, DOE Office of Scientific and Technical Information

Any set of objects will be attracted to each other by gravity.  If the objects are electrically charged or magnetized, they may have additional mutual attractions, or repulsions.

Before the mid-1800s, scientists expressed the laws that govern such attractions and repulsions in terms of how far the objects were from each other, and in which directions.  But various attempts to explain exactly how an object A, here, attracts or repels object B, over there, failed to hold up when tested by experiments until Michael Faraday[Wikipedia] proposed his concept of electric and magnetic fields existing in the space between the objects.

According to the field concept, charged or magnetized objects are surrounded by an electric or magnetic field that extends throughout space and has a strength and orientation that varies from point to point.  The field is shaped by the charged or magnetized objects within it; the shape of a magnet’s field, for instance, can be seen by the standard experiment of sprinkling iron filings around the magnet and seeing how they arrange themselves.  While being shaped by the charged or magnetized objects within it, the field in turn exerts forces on the same charged or magnetized objects.  Thus rather than objects A and B exerting forces on each other directly and instantaneously across empty space, the objects affect and are affected by the field around them; moving one object disturbs the field, which propagates the disturbance at a finite speed throughout the surrounding space so that it reaches the other object, which in turn reacts to the disturbance.

The laws of electric and magnetic fields worked out by Faraday were given a more standard mathematical expression by James Clerk Maxwell[Wikipedia], whose augmentation of them to account for the displacement of charged particles within dielectric materials led to the discovery that electromagnetic-field disturbances should propagate through space as waves, at the speed of light.  (Later experiments showed that these waves are light.)  Later, Einstein’s theory of curved space and time, general relativity, turned out to explain gravity as due to a field—a field that, rather than existing in the space surrounding objects, is actually a feature of spacetime itself, namely its curvature, which varies from point to point and guides the motion of objects passing through it.  The field concept—the idea of an entity that can have varying strength and orientation at each point of space, which may even interact with things in the space—has been used to describe and understand many physical phenomena, including those that involve interactions other than electromagnetism and gravity that were discovered later.

When one uses known laws of physics, such as field laws, to predict what will happen in a physical process, one might try accounting for every single feature of the process and calculating how each feature influences the rest.  In general, such calculations would be inordinately complex.  Yet often enough, one can consider the nature and relative importance of the features and see practical ways to simplify the calculations.

If some features’ influence makes no practical difference to the outcome of the process, one can simply leave them out of the calculation.  For instance, an infinitely precise general-relativity calculation of how the planets move in our solar system would account for, among other things, the stresses and momentum of the plasma that the sun is made of, since both solar features affect the curvature of spacetime around the sun that determines the planets’ possible orbits.  But while some stars other than the sun have enough internal stress or momentum to significantly affect the curvature of spacetime around them, such effects in our own solar system are so much smaller than those produced by the sun’s mass that they barely affect the planets at all.  To accurately determine planetary orbits, then, one can approximate the internal stress and momentum of the sun’s plasma as being zero throughout the sun, consider only the sun’s mass, and proceed with a much simpler calculation.

For other processes, a different kind of simplification is possible.  Effects that contribute to the outcome of a process can be arbitrarily divided into large-scale and small-scale effects, with the “scale” in question referring to spatial distance or some other quantity.  If the small-scale effects are too significant to ignore, but their main result is to make the process resemble one that’s due only to large-scale effects with modified characteristics unaccompanied by any small-scale effects, one can accurately describe the process in terms of a theory that implicitly accounts for the small-scale effects as modifications of large-scale effects alone.  Theories that do this instead of ignoring the small-scale effects completely are called effective field theories.  The accuracy of a process’ description by such theories depends on the scale that distinguishes “large” from “small” effects really being small for the process; field phenomena characterized by parameters smaller than the effective theory’s defining scale may not be accurately described by the theory.

Figure 1.  Two ways to simplify the mathematical representation of a process.  The top equation schematically represents two sets of effects (r and o) contributing to a process p.  If the effects o are simply negligible, one may accurately approximate how the process works by omitting them and retaining only the effects r in one’s calculations (left).  But if the effects o are not so insignificant, but the nonnegligible portion of their effects can be summarized as a modification R of the effects r, an approximation like the one on the right may be suitable, with the notation R(r, o) indicating the dependence of R on both r and o.  Such approximations, whether implicitly or explicitly made, characterize effective field theories.
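In symbols, the two simplifications the caption describes look roughly like this (a schematic reconstruction from the caption alone; P stands generically for however the effects combine to produce the process p):

```latex
p = P(r, o)                   % full account: both sets of effects contribute
p \approx P(r)                % left: the effects o are negligible and dropped
p \approx P\big(R(r, o)\big)  % right: o absorbed into modified effects R(r, o)
```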

Effective field theories geared to a particular large/small-scale separation parameter can be formulated as simplifications of an explicit “all-scale” theory, or directly without reference to one.  In principle, any field phenomenon that has both large- and small-scale effects in terms of some parameter could be described by an effective field theory; in practice, effective field theories are very often used to analyze processes involving subatomic particles and solid matter, although other things (e.g., fluids in nuclear reactors[SciTech Connect]) have also been studied with effective field theories.  The reports described below discuss the use of effective field theories in studies, sponsored by the U. S. Department of Energy, of particle and nuclear interactions, condensed-matter phenomena, the nature of the universe’s “dark matter”, and possible scenarios in the universe’s extremely early history.


Particles and nuclei

One such investigation, of how certain astrophysical processes work, involves calculating how frequently some subatomic particles react with different atomic nuclei.  Some of these reactions aren’t accessible to experiment, but other related reactions are.  The report “Chiral effective field theory predictions for muon capture on deuteron and 3He”[SciTech Connect], from researchers at the University of Pisa, Old Dominion University, and the Thomas Jefferson National Accelerator Facility, describes how the reaction rates were calculated for two such related reactions using the same type of effective field theory, to see how the theory’s predictions hold up against experiment.

The reactions examined involve atoms of heavy hydrogen (2H, or deuterium)[Wikipedia] and light helium (3He)[Wikipedia] in which the role of an electron is taken by a muon[Wikipedia], a similar but unstable particle with a larger mass.  When they have no other nearby particles to react with, the average muon decays into an electron, a muon neutrino[Wikipedia], and an electron antineutrino[Wikipedia] in about 2.2 microseconds.[Reference 1]  But a muon in the lowest-energy orbital it can occupy within an atom has a significant probability of being inside the atom’s nucleus, where it can react with one of the protons, changing itself into a muon neutrino and the proton into a neutron.  (Protons are made of two u quarks[Wikipedia] and one d quark[Wikipedia], while neutrons are made of one u quark and two d quarks, the quarks being held together by their interactions with the gluon[Wikipedia] field, much as electrically charged particles affect each other by their interactions with the electromagnetic field.  The transformation of a proton into a neutron thus involves changing a u quark into a d quark.)
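As footnote 1 notes, the 2.2-microsecond lifetime applies in the muon’s own rest frame; in the lab frame a moving muon lives longer by the Lorentz factor γ = 1/√(1 − v²/c²).  A minimal numerical sketch (the speed used is purely illustrative, not taken from the report):

```python
import math

TAU_0 = 2.2e-6  # muon mean lifetime at rest (proper time), seconds

def dilated_lifetime(beta):
    """Lab-frame mean lifetime of a muon moving at speed beta = v/c.

    Straightforward special-relativity arithmetic; the beta value
    used below is illustrative, not from the report.
    """
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * TAU_0

# At 99.5% of the speed of light, the lifetime is stretched about tenfold.
print(f"{dilated_lifetime(0.995):.3e} s")
```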

The strong interactions that hold atomic nuclei together are described in detail by a theory of quarks interacting with the gluon field.  But to determine the average rate of the muon + proton → muon-neutrino + neutron reactions in 3He and 2H, the authors used a simplification of this theory[Reference 2] in which the gluon field is accounted for only by the existence of protons, neutrons, and other particles made of quarks that the gluon field holds together.  The report describes the effective field theory calculations in some detail, with the finding that the average reaction rate for muons in 3He is 1494 ± 21 reactions per second (agreeing with the already measured rate of 1496 ± 4 per second) and the rate for muons in 2H is 399 ± 3 reactions per second (to be compared later with the results of an experiment in progress[Reference 3]).  If the latter calculation also proves accurate, it will increase researchers’ confidence that the theory accurately describes other astrophysical processes.
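The quoted agreement between the calculated and measured 3He rates can be sanity-checked by combining the two uncertainties in quadrature (a standard comparison, not a calculation from the report itself):

```python
import math

# Muon-capture rates in 3He (reactions per second) with one-sigma
# uncertainties, as quoted above: calculated 1494 ± 21, measured 1496 ± 4.
calculated, sigma_calc = 1494.0, 21.0
measured, sigma_meas = 1496.0, 4.0

# Independent uncertainties combine in quadrature.
combined_sigma = math.sqrt(sigma_calc ** 2 + sigma_meas ** 2)
discrepancy_in_sigmas = abs(calculated - measured) / combined_sigma

print(f"difference:           {abs(calculated - measured):.0f} per second")
print(f"combined uncertainty: {combined_sigma:.1f} per second")
print(f"discrepancy:          {discrepancy_in_sigmas:.2f} sigma")
```

A discrepancy well under one combined standard deviation is what “agreeing” means here.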

The muon-capture reactions discussed in the previous report take place at low enough energies that the strongly interacting particles’ exact quark-and-gluon composition didn’t need to be accounted for in detail.  Somewhat more detail is required for the analyses mentioned in the Duke University report “Heavy Quarks, QCD, and Effective Field Theory”[SciTech Connect], in which Soft-Collinear Effective Theory[SciTech Connect, arXiv, Wikipedia] is used.  The “soft-collinear” refers to the way that, in the phenomena analyzed, the gluons’ momenta compare with the (high) momenta of the quarks.  The gluons’ momenta and energies may be small or large, but in either case, the components of their momenta that are perpendicular to the momenta of any quarks they interact with are small.  Thus the gluons’ momenta are either low (soft) or practically collinear with the quarks’ momenta.

Figure 2.  Comparisons of gluon and quark momenta, represented by arrows showing the momenta’s relative size and direction.  Top: The gluon momenta on the right are either soft or collinear relative to the quark momentum on the left: the components of the gluon momenta are always small in directions perpendicular to the quark momentum.  Bottom: The perpendicular components of the gluon momenta are not small compared with the quark momentum—a situation not encountered in phenomena described by soft-collinear effective theory.

The report describes how Soft-Collinear Effective Theory was used to calculate experimental production rates for one known subatomic particle (named J/ψ[Wikipedia]) and for one new particle type (“color-octet scalars”) predicted to exist by a particular extension of the Standard Model of particle interactions.[Wikipedia, Wikipedia]  For the J/ψ particle, the calculated production rate matches the actual experimental rate.  For color-octet scalars, comparing the calculated rates with production rates in real experiments would test whether the Standard Model extension that predicts the new particles is correct.  The report author and collaborators also found a Soft-Collinear Effective Theory explanation for an asymmetry in certain B particle decays and suggested experimental tests of the explanation; with another collaborator, the author demonstrated how a traditional method of avoiding double-counting of soft and collinear gluon processes in calculations is equivalent to the different approach used with Soft-Collinear Effective Theory.


Condensed matter

The report “Effective Field Theory of Effective Quantized Hall Nematics”[SciTech Connect], by researchers at MIT, Microsoft Station Q, Stanford University, and SLAC National Accelerator Laboratory, describes a quite different application of the effective field theory concept.  The phenomenon analyzed, the fractional quantum Hall effect[Wikipedia, Wikipedia], is of interest for many reasons, one of which is its potential use in realizing quantum computers.[Reference 4, OSTI]  The effect resembles one discovered by Edwin Hall in the late 19th century[Wikipedia], in which a current of electric charges that crosses a magnetic field will accumulate charges along the sides of their conductor as the magnetic field, doing what such fields always do, deflects the charges sideways.  The accumulated charges, being opposite on opposite sides of the conductor, produce an electric field perpendicular to both the magnetic field and the main direction of current flow.  The charges quit accumulating once their electric field is strong enough to counteract further sideways charge deflection by the magnetic field, thereby setting up a sideways voltage difference across the conductor.

Figure 3.  The Hall effect in a conductor of length L, width W, and thickness t.  Here, a negative electric charge is accelerated outward along  the length of the conductor by the applied voltage difference Vx, but the upward magnetic field Bz deflects it sideways.  A current of such negative charges would accumulate along the left side of the conductor, producing the Hall voltage difference VH across the conductor’s width.  If the electric current through the conductor were made of positive charges instead of negative ones, the voltage Vx would drive them inward along the conductor’s length and the magnetic field would deflect them to the conductor’s left side, making a Hall voltage difference in the opposite direction across the conductor’s width.  (Wikipedia, http://en.wikipedia.org/wiki/File:Hall_Effect_Measurement_Setup_for_Electrons.png.)
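For the classical geometry of Figure 3, the Hall voltage obeys the textbook relation V_H = I·B/(n·q·t), where n is the density of charge carriers and q the magnitude of the carrier charge.  A small sketch with illustrative numbers (the current, field, and thickness below are hypothetical; the carrier density is the usual figure quoted for copper):

```python
# Classical Hall voltage V_H = I * B / (n * q * t) for a slab like Figure 3.
# All input values below are illustrative, not taken from the report.
I = 1.0          # current along the slab, amperes
B = 1.0          # magnetic field B_z, teslas
n = 8.5e28       # charge-carrier density of copper, carriers per cubic meter
q = 1.602e-19    # magnitude of the carrier charge, coulombs
t = 1.0e-3       # slab thickness, meters

V_H = I * B / (n * q * t)
print(f"Hall voltage: {V_H:.2e} V")  # tens of nanovolts for a metal slab
```

The tiny result illustrates why thin conductors with low carrier density are preferred for Hall measurements.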

In the ordinary Hall effect, the ratio of the sideways voltage difference to the electric current is directly proportional to the strength of the magnetic field.  In the quantum versions of this effect, which occur in conductors that confine the charges to move in only two dimensions, the current/Hall-voltage ratio changes stepwise with the magnetic field: the field can increase in strength by a certain amount with no change in the ratio, but then the ratio jumps with a small further increase in field strength before plateauing again.  The quantum Hall effects have plateaus of current/voltage ratios that are simple multiples of the quantum e²/h, e being the charge observed on an electron in its long-range interactions and h being Planck’s constant[Wikipedia].  In the integer quantum Hall effect, the plateaus are integer multiples of e²/h, while in the fractional quantum Hall effect, the plateaus are simple fractional multiples of e²/h like 1/3, 4/7, or 5/2.
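The plateau values described above are simple to tabulate from the two constants involved.  A brief sketch, with filling factors taken from the examples in the text:

```python
from fractions import Fraction

E_CHARGE = 1.602176634e-19  # elementary charge e, coulombs (exact SI value)
PLANCK_H = 6.62607015e-34   # Planck constant h, joule-seconds (exact SI value)

def hall_conductance(nu):
    """Plateau value nu * e^2 / h of the current/Hall-voltage ratio, in siemens."""
    return float(nu) * E_CHARGE ** 2 / PLANCK_H

# One integer plateau and the fractional plateaus mentioned above.
for nu in (Fraction(1), Fraction(1, 3), Fraction(4, 7), Fraction(5, 2)):
    print(f"nu = {nu!s:>4}: {hall_conductance(nu):.4e} S")
```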

“Effective Field Theory of Effective Quantized Hall Nematics” describes an effective-field analysis of conductors whose features are equally influenced by the conductors’ having one distinguished axis (nematic[Wikipedia] symmetry) and undergoing the fractional quantum Hall effect—a combination that experimenters “may have observed” recently, as the authors note.  The authors’ analysis reexpresses short-distance fluctuations in the conductor’s electric-charge distribution and the electromagnetic field as effects on the masses and voltage differences of the electric charges.  The analysis suggests that “an unusual quantum critical point[Wikipedia]” should separate the fractional quantum Hall conductor’s nematic state from a state in which the conductor’s properties are the same along all its axes.

As the aforementioned reports show, effective field theories are used to deduce the behavior of subatomic particles, condensed matter, and astrophysical entities.  Another recent report from SLAC National Accelerator Laboratory, “Three-point current correlation functions as probes of Effective Conformal Theories”[SciTech Connect], describes using a mathematical fact about astrophysical theories—theories of gravity, in this case—to deduce condensed-matter behavior.  Certain effective theories of gravity plus other interactions, which describe a few kinds of weakly interacting entities, have a mathematical duality with other theories that describe systems of strongly interacting particles, like the particles that constitute condensed matter.  The duality means that the mathematical structure of a particular theory of the weaker set of interactions implies a corresponding mathematical structure for some theory of the stronger set of interactions, and vice versa.

The specific problem addressed in the report is to find a particular condensed-matter quantity:  the three-point current correlation function, which describes how the average product of currents at three different points and times in the material varies with the choice of the three points and times.  While this function’s measurement is “near impossible” with current technology, and even its theoretical relationship to other measurable quantities is hard to calculate directly through the theory of the condensed matter’s strongly interacting particles, calculations of the corresponding quantity in the dual “gravitational” theory are feasible.  Such calculations in the effective theory of gravitation-like phenomena thus show how the condensed-matter three-point current correlation relates to quantities that can be more easily measured.


Dark matter

Observations of the universe indicate that a majority of the material in it is very different from the visible matter that the sun, stars, and planets are made of[OSTI], but those observations seem to give more clues about what the “dark” matter isn’t than about what it is.  Clues like those leave many possibilities to investigate.  One possibility is explored in the report “An Effective Theory of Dirac Dark Matter”[SciTech Connect] by researchers at Stanford University, SLAC National Accelerator Laboratory, and the University of Oregon.  The theory postulates that the dark matter is composed of particles whose motions are described by an equation that Paul Dirac published in the early 20th century[Wikipedia].  Under a few further, mathematically simple assumptions about the particles’ behavior, one finds that the most likely interactions for these particles are with particles of the same general type or with Higgs bosons[Wikipedia, OSTI], whose existence was recently confirmed by experiments with the Large Hadron Collider at the European laboratory CERN.

When this information is combined with astrophysical data about the presence and energy distribution of different subatomic particles, one finds that the ratio of the number of positrons in the universe that have a given energy E to the number of electrons having that same energy should decrease drastically as E increases past the point where E/c2 exceeds the mass of the proposed dark-matter fermions.  A similar drastic decrease in the number of cosmic gamma rays should occur at the same energy E.  These effects provide one way to test the theory.  A different test of the theory, plus one additional assumption, is also possible.  If the dark-matter Dirac fermions are of a particular type (“binos”) assumed to exist in one of the Standard Model extensions[Wikipedia], an additional type of particle (“sleptons”) must also exist, which should appear in particle-accelerator experiments.  Thus far, however, neither sleptons nor sudden decreases of the positron/electron ratio or the gamma-ray flux at some particular energy have been observed.

While “An Effective Theory of Dirac Dark Matter” described logical consequences of assuming that a particular kind of particle constitutes the universe’s dark matter, the more recent report “The Effective Field Theory of Dark Matter Direct Detection”[SciTech Connect], by researchers at Stanford, the University of California at Berkeley, Lawrence Berkeley National Laboratory, Boston University, and SLAC, starts with less specific assumptions but focuses on their implications for how the dark matter might be detected through interactions with atomic nuclei.  While other analyses have shown how the response of a nucleus to dark matter might depend on the nucleus’ charge, atomic number, or the spins of its nucleons, this analysis showed two less familiar ways in which this response could be affected by nucleons’ orbital motions as well.  The report starts with a very general mathematical form for an effective field theory of dark-matter/nucleus interactions, and later examines some detailed theories that might reduce to this general form.  When applied to the use of specific nuclei in dark-matter detectors (e.g., 19F, 23Na, 70Ge, 72Ge, 127I, 128Xe, and 129Xe among others), the effective field theory indicates that detectors made with different nuclei should have quite different sensitivities—a finding that could usefully inform the design of dark-matter detectors.


Cosmic inflation

Einstein’s general theory of relativity expresses the interrelation of spacetime curvature and features of the matter and energy within it.  The equation for this interrelation has many possible solutions, each describing a possible large-scale history of the universe, some of which agree with observational data about the universe’s expansion and its remarkably near-uniform mass-energy distribution.  Uniformity throughout any extended volume is understandable if the different portions of that volume have been in direct or indirect contact long enough to have affected each other and arrived at equilibrium.  But according to the simplest histories consistent with observation and with general relativity, this near-uniformity is puzzling:  the different parts of the universe that light can reach us from were never in contact with each other, yet they have nearly the same curvature and matter distribution rather than the variation they might otherwise exhibit.

Figure 4.  One nearly-uniform feature of the part of the universe visible to us is the cosmic microwave background radiation, as shown by this whole-sky map derived from Wilkinson Microwave Anisotropy Probe data.  The color differences represent differences in temperature of the microwave radiation, which span a narrow range of 200 microkelvins.  (NASA/Wilkinson Microwave Anisotropy Probe Science Team, http://map.gsfc.nasa.gov/media/121238/ilc_9yr_moll4096.png.)

One widely-studied possibility for resolving this problem is the idea of cosmic inflation:  a sudden, extraordinarily rapid expansion of the universe almost immediately after it began, during a very tiny fraction of a second, brought about as the universe’s material contents rapidly changed form at that time.  The occurrence of cosmic inflation would result in features like those we see in the universe today, but inflation is not necessarily the only thing that could produce those features.  Thus researchers look for additional features that could differentiate between inflationary and noninflationary universes to see what kind ours is.  If inflation did occur, the additional features could even distinguish between different causes of inflation, helping us see which inflationary scenarios did not occur.

The effective-field theory approach has been found to offer advantages for mathematically connecting inflationary processes that might have occurred in the early universe to features that should be observable now.  Researchers at Stanford, SLAC, the Institute for Advanced Study, and elsewhere have generalized a basic effective-field theory, which describes the inflation-causing portion of the early universe’s matter content in terms of a single field φ, in different ways to calculate the observable consequences of each generalization.  One of the resulting reports, “The Effective Field Theory of Multifield Inflation”[SciTech Connect], sets forth present-day results that we should expect if more than one type of field had significant effects on spacetime curvature during cosmic inflation.  The report “Dissipative effects in the Effective Field Theory of Inflation”[SciTech Connect], produced with colleagues at the University of Buenos Aires and Columbia University, shows what to expect if the additional fields didn’t directly affect the composition of matter during inflation or how long the inflation lasted, but did affect them by their interaction with the field φ.  “(Small) Resonant non-Gaussianities:  Signatures of a Discrete Shift Symmetry in the Effective Field Theory of Inflation”[SciTech Connect] discusses the possibility that the φ field was a manifestation of a particular kind of hypothetical particle, either axions[Wikipedia] or something similar.  This report, made in collaboration with Boston University and New York University researchers, shows that the relation of an axionlike φ-field’s strength to its energy would result in the universe’s matter exhibiting certain oscillatory patterns in its density fluctuations today.


References

Wikipedia

 • Michael Faraday • Physics beyond the Standard Model • James Clerk Maxwell • Fractional quantum Hall effect • Deuterium • Quantum Hall effect • Helium-3 • Topological order • Neutrino: Antineutrinos • Hall effect • Time dilation of moving particles • Planck constant • U quark • Liquid crystal: Nematic phase • D quark • Quantum critical point • Gluon • Dirac equation • Soft-collinear effective theory • Higgs boson • J/psi meson • Supersymmetry: The Supersymmetric Standard Model • Standard Model • Axion

Research Organizations and Facilities

^ 1 • That is, 2.2 microseconds in the muon’s own (or proper) frame of reference, in which the muon’s speed is zero.  Rapidly moving muons take much longer to decay due to time dilation

^ 2 • “Chiral effective field theory and nuclear forces”, arXiv

^ 3 • MuSun Experiment

“We propose to measure the rate Λd for muon capture on the deuteron to better than 1.5% precision. This process is the simplest weak interaction process on a nucleus that can both be calculated and measured to a high degree of precision. The measurement will provide a benchmark result, far more precise than any current experimental information on weak interaction processes in the two-nucleon system. Moreover, it can impact our understanding of fundamental reactions of astrophysical interest, like solar pp fusion and the n + d reactions observed by the Sudbury Neutrino Observatory. Recent effective field theory calculations have demonstrated, that all these reactions are related by one axial two-body current term, parameterized by a single low-energy constant. Muon capture on the deuteron is a clean and accurate way to determine this constant. Once it is known, the above mentioned astrophysical, as well as other important two-nucleon reactions, will be determined in a model independent way at the same precision as the measured muon capture reaction.”—“Muon Capture on the Deuteron:  The MuSun Experiment” by the MuSun Collaboration

^ 4 • Topological order in zero-temperature quantum matter is important in the quantum Hall effect and has potential applications to fault-tolerant quantum computation.  See “Topological order”, Wikipedia, and “Fault-tolerant quantum computation by anyons”, arXiv

