In the OSTI Collections: Density Functional Theory


Article Acknowledgement:

Dr. William N. Watson, Physicist

DOE Office of Scientific and Technical Information

 


Certain things look harder to figure out at first than they really are.  The reason some of them look so hard is because figuring them out seems to require a large amount of information, when in fact only a part of that information—perhaps a very small part—is actually needed.  That part may be mixed in and spread out among other data in a form that makes all the information look essential.  But reexpressing the information in another form can separate the necessary from the unnecessary portions.  The unnecessary parts might only be needed to figure other things out. 

 

One classic example of such a situation is a deduction that Ludwig Boltzmann[Wikipedia] made in the 19th century of how the density and speeds of air molecules are most likely to vary with altitude.[Jaynes, pp. 14-18]  If our atmosphere were a gas made of just a tiny number of molecules, one might apply Newton’s laws of motion to their individual positions and velocities to calculate, with much labor, how the positions and velocities were likely to be affected by gravity and the molecules’ collisions.  However, in even a few liters of gas at anything like atmospheric pressure and density, there are roughly 10²³ molecules—about 100 sextillion of them.  Even one piece of numerical data about each molecule would exceed the capacity of the largest supercomputers to store, much less calculate with.  But without any electronic computer, Boltzmann did deduce how the molecules would be distributed over all possible combinations of positions and momenta, using just two kinds of information about the gas:  the fact that such distribution functions, represented geometrically, stay the same size no matter how their shapes change, and the fact that the total energy of the gas remains constant if the gas has no energy exchange with its environment.  From only this, Boltzmann found that, if the gas has the same temperature throughout,

 

  • the molecular density decreases exponentially with altitude independently of the molecules’ velocities,
  • the proportion of molecules whose kinetic energies are in a given range decreases exponentially with kinetic energy independently of altitude, and
  • the rates of exponential decrease themselves are inversely proportional to the temperature. 

 

Had Boltzmann tried to take further details about the molecules into account, such as which molecules had what positions and momenta, or how their individual positions and momenta changed with time, he’d have found that those details don’t affect this distribution at all. 

 

Figure 1.  The physical law that Ludwig Boltzmann[Wikipedia] derived in the 1870s[Wikipedia] for how the molecules of a gas in a uniform gravitational field are distributed by velocity v and altitude z if the gas consists of a single type of molecule with mass m at a uniform temperature T, expressed in terms of the probability distribution function p(v, z|g, m, T).  In this equation, g represents the acceleration due to gravity[Wikipedia] and k, known as Boltzmann’s constant[Wikipedia], relates temperature and energy. 
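The law the caption describes has the standard form (a sketch consistent with the caption; the normalization constant of the original figure is omitted here):

\[
p(v, z \mid g, m, T) \;\propto\; \exp\!\left[-\,\frac{\tfrac{1}{2}mv^{2} + mgz}{kT}\right],
\]

in which the exponential decrease with altitude z, the exponential decrease with kinetic energy ½mv², and the inverse dependence of both decrease rates on the temperature T are exactly the three features listed above.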

 

Boltzmann’s discovery involved ignoring information that turned out not to matter at all for what he was figuring out.  Many other analyses ignore information that’s at least practically if not completely irrelevant, if taking account of it would involve a greater effort than the results would be worth.  And when certain information is too hard or even impossible to get, people often try using less information and seeing how far that gets them toward solving their problems, even if it’s not clear at first which pieces of information will prove essential or negligible.  Figuring out how multiple electrons are arranged in most atoms, molecules, or solid materials—arrangements that underlie their chemical and physical behavior—initially looked like a problem of the latter type.  Accurately accounting for how negatively-charged electrons are distributed around the positively charged nuclei of these systems requires taking the electrons’ quantum-physical behavior into account, and the most obvious way to represent that behavior mathematically is by wavefunctions whose values depend on the possible positions of each electron.  But evaluating wavefunctions precisely enough to usefully predict chemical and physical phenomena is difficult when more than 10 electrons have to be accounted for.[Di Rocco & al., p. 2] 
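A rough back-of-the-envelope illustration of why the wavefunction route becomes unmanageable (the grid size below is a hypothetical choice, not a figure from the cited reference): a wavefunction for N electrons depends on 3N position coordinates, so tabulating it on even a coarse grid of M points per coordinate requires on the order of

\[
M^{3N}\ \text{values} \qquad (M = 10,\; N = 10 \;\Rightarrow\; 10^{30}\ \text{values}),
\]

and each additional electron multiplies the count by another factor of M³.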

 

One early approach to analyzing multielectron systems with less data was a method[Wikipedia] devised independently by Llewellyn Thomas[Wikipedia] and Enrico Fermi[Wikipedia] and published in 1927.  Instead of describing atoms in terms of exactly where each electron might be, they focused on what the density of electrons was at each point around the atoms’ nuclei, treating the density as if it were a continuously-varying function of position (though it is actually discontinuous for discrete entities like electrons that will either be found or not found if sought at a particular location).  The assumptions they used to estimate the electron density were sufficiently accurate only for a limited portion of the atom.  If the positive charge of the atom’s nucleus were that of P protons, the estimated electron density was close to correct only for positions between roughly 53/P picometers and 53 picometers (0.053 nanometer) from the nucleus.  While most electrons in a complex atom would usually be found within this region, the inaccuracy outside it was great enough to lead to very inaccurate estimates of other important parameters.  One significant inaccuracy was discovered by Edward Teller[Wikipedia] and published in 1962:  applying the Thomas-Fermi method to molecules indicated that a molecule’s energy would be greater than the sum of the energies of its separate atoms.[Wikipedia]  If this were true, the molecule would never form.  While it takes energy to pull the atoms of real molecules apart, Thomas-Fermi molecules would require energy to hold their atoms together.  The problem was with assumptions underlying the electron-density estimates.  Various efforts were made to account for more properties of electrons and their interactions when estimating electron densities.  The resulting mathematical models were more accurate but still limited. 
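For reference, the method’s central approximation can be written as a kinetic-energy functional of the electron density n(r); this is the standard textbook form in Hartree atomic units, not an expression quoted from the works discussed here:

\[
T_{\mathrm{TF}}[n] \;=\; \frac{3}{10}\left(3\pi^{2}\right)^{2/3} \int n(\mathbf{r})^{5/3}\, d^{3}r ,
\]

which treats the electrons near each point as if they formed a locally uniform electron gas.  The 53-picometer scale mentioned above is essentially the Bohr radius, a₀ ≈ 52.9 pm.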

 

The Thomas-Fermi method involved using only the information in a one-variable function of position that didn’t take account of how any one electron’s position related to the other electrons’ positions.  In the mid-1960s, Pierre Hohenberg[Wikipedia], Walter Kohn[Wikipedia], and Lu Jeu Sham[Wikipedia] worked out an unobvious relationship between electron wavefunctions, electron densities, and the electrons’ attractions to atomic nuclei in molecules and solids.  Their calculations showed that the probability of a single electron’s being at a particular place in a system (e.g., a solid, molecule, or atom), given that the other electrons were also present somewhere in the system, directly implied the system’s multielectron wavefunction, and thus the probabilities for each of the system’s possible electron configurations.  What looked like extra information in the wavefunction about the multiple electrons’ several locations was in fact a redundant expression of the single-variable probability function, like the same thing being said multiple times with different words.  Probability functions of this type can thus be used to calculate things like the mechanical properties of molecules or the effects of dopants on semiconductors that depend on where all of the system’s electrons are.  But unlike Boltzmann’s calculations of gas-molecule distributions, which leave out dynamical information that might be relevant for other properties of gases, calculations based on single-electron probability functions don’t actually omit any information about the electron dynamics; they just eliminate redundancy. 

 

A system’s single-electron probability function, multiplied by the number of electrons in the system, is something like an electron density function for that system, and yields more accurate predictions of the system’s behavior than a Thomas-Fermi density function does.  “Density functional theory”[Wikipedia] based on this improvement continues to be widely used to analyze electron distributions and the many material properties that depend on them.  Other material properties, which involve the distribution of particles other than electrons, can also be analyzed in terms of density functional theory. 

 

Figure 2.  The probability that any one of the N particles in a system of identical particles (e.g., electrons in an atom, molecule, or solid) is at position r1 while the rest of the particles are at positions r2, …, rN is equal to the squared magnitude of the wavefunction Ψ(r1, r2, …, rN) for that particle configuration.  The probability that any one particle is near a given position r when the other particles are at arbitrary positions is proportional to the same squared magnitude integrated over all possible positions of each of the other particles.  This probability distribution p, multiplied by N, is a function of position r that equals an average particle density at each point.  Other quantities, like the energy of the system of particles, can be expressed as depending on the density function N × p—that is, as functionals[Wikipedia] of N × p—so the theory of how these quantities depend on N × p is called density functional theory. 
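In symbols (a sketch consistent with the caption’s description, using standard notation rather than the original figure’s):

\[
n(\mathbf{r}) \;=\; N \int \bigl|\Psi(\mathbf{r}, \mathbf{r}_{2}, \ldots, \mathbf{r}_{N})\bigr|^{2}\, d^{3}r_{2} \cdots d^{3}r_{N} ,
\]

so the density n(r) is just N times the probability density p(r) of finding any one particle near r, with all the other particles’ positions integrated out.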

Although density functional theory based on the single-particle probability function has been in use for several decades, its underlying concepts continue to be explored.  One exploration is described in the September 2016 issue of the journal Computation, in an article[DoE PAGES] by researchers at Lawrence Livermore National Laboratory.  The authors address the problem of determining whether a particular distribution of electrons could actually persist in the presence of electrostatic forces.  Electrons might be momentarily distributed in all kinds of ways, but could only stay distributed in a particular way if an appropriate set of forces that could keep them there were physically possible.  The authors describe a mathematical method of checking whether particular wavefunctions that match any given single-electron probability function (or “density function”) correspond to a possible electrostatic potential.  They then illustrate the method with some mathematically simple cases, showing that some wavefunctions correspond to physically possible potentials while others do not.  Incidentally, the article’s mathematical argument also illustrates the reason for the name “density functional theory”:  while the electron “density” is a function of position in space, deductions in density functional theory involve quantities, like the electrons’ kinetic energy, that are treated as functions not of single quantities like position but of the “density function” itself.  The fact that certain quantities depend on a function, not just on a finite set of numbers, is significant. 
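To illustrate what “a quantity that depends on a function” means, here is the standard statement of the Hohenberg-Kohn variational principle (a textbook form, not an equation taken from the Computation article):  the ground-state energy is found by minimizing an energy functional over candidate density functions n,

\[
E_{0} \;=\; \min_{n}\left\{ F[n] \;+\; \int v(\mathbf{r})\, n(\mathbf{r})\, d^{3}r \right\},
\]

where v(r) is the electrostatic potential produced by the nuclei and F[n] collects the electrons’ kinetic and mutual-interaction energies.  Each term takes the whole function n as its argument, not a single number, which is what makes the energy a functional.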

 

Many other research efforts use density functional theory to explore particular phenomena.  Some of those recently explored have involved condensed matter, chemical reactions, and certain effects of fission. 

 


Condensed matter, chemistry

 

Chemical transformations involve rearrangement of the electrons that chemically bond atoms together into molecules.  Thus, to learn some details of how one inedible plant component can be chemically transformed into fuels when that component undergoes reactions on a metal surface, density functional theory has been applied to the appropriate electron distributions.  This work is described in a paper entitled “Hydrodeoxygenation of Phenol to Benzene and Cyclohexane on Rh(111) and Rh(211) Surfaces:  Insights from Density Functional Theory”[DoE PAGES], published in the August 25, 2016 issue of The Journal of Physical Chemistry C. 

 

Often, 30% of the weight of biomass-derived oil consists of phenolic compounds[Wikipedia], which can react with hydrogen to remove their oxygen atoms and become pure hydrocarbons—particularly on metal surfaces whose presence can reduce the energy required for the reaction to occur.  This hydrodeoxygenation of various other phenolic compounds had previously been observed in experiments and analyzed with density functional theory.  The Journal of Physical Chemistry C paper used density functional theory to specifically examine how phenol, or carbolic acid (C6H5OH), directly deoxygenates to C6H6 (benzene), and how the phenol derivative cyclohexanol ((CH2)5CHOH) hydrodeoxygenates to C6H12 (cyclohexane), when those reactions are catalyzed on the least and most reactive facets of particles of rhodium[Wikipedia]—a rare metal, but one whose catalytic properties “are often comparable to those of nonprecious metals” like nickel.  The authors found that breaking the oxygen atoms’ chemical bonds with carbon atoms is easier in cyclohexanol than in phenol, and that both reactions should happen on rhodium catalysts at an appreciable rate.  For some purposes, benzene is a more desirable product than cyclohexane, since producing benzene consumes less hydrogen than producing the hydrogen-saturated cyclohexane does; keeping the reaction temperature high and the hydrogen pressure low could reduce hydrogen consumption.  The authors also established how the final reaction state relates to the energies of oxygen-carbon chemical bond cleavage.  Their trend analyses contribute “a first important step” to the design of catalysts for hydrodeoxygenation. 
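The overall stoichiometry of the two routes, written here from the formulas above (the paper itself tracks many intermediate steps on the catalyst surface), shows why the benzene route consumes less hydrogen:

C6H5OH + H2 → C6H6 + H2O    (phenol to benzene by direct deoxygenation)
C6H5OH + 4 H2 → C6H12 + H2O    (phenol to cyclohexane, via hydrogenation through cyclohexanol)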

 


Figure 3.  Left:  top and side views of phenol molecules adsorbed on rhodium crystal surfaces—the “(111)” surface (a, b) and the “(211)” surface (c, d).  Middle:  steps in the hydrodeoxygenation of a phenol molecule catalyzed on both types of rhodium surface at a temperature of 550 K (276.85 °C, 530.33 °F) and atmospheric pressure, with graphs of changes ΔG in the system’s Gibbs free energy at the various stages.  Right:  a similar representation of steps in the hydrodeoxygenation of cyclohexanol with graphs of ΔG.  (After “Hydrodeoxygenation of Phenol to Benzene and Cyclohexane on Rh(111) and Rh(211) Surfaces:  Insights from Density Functional Theory”[DoE PAGES], pp. 18531, 18532, and 18534.) 

 

Density functional theory was also used to study another kind of rearrangement.  In an imperfect crystal containing sites from which atoms are missing, atoms may move into the vacancies, making their former locations into new vacancy sites.  Vacancies can thus diffuse through the crystal, much as actual atoms can, and their diffusion can significantly affect the crystal’s properties.  This diffusion was studied in a material known as strontium titanate[DoE PAGES], which in its perfect form consists of strontium, titanium, and oxygen atoms in the ratio 1:1:3, as indicated by the formula SrTiO3, arranged in the cubically symmetric lattice of a perovskite crystal[OSTI, Wikipedia].  According to the study, strontium titanate has several useful features:  “good insulating properties, excellent optical transparency, … high dielectric constants … shows potential capacitive and resistive switching properties … has been extensively used as a substrate for superlattices and is the parent oxide that leads to two-dimensional electron gases (2DEGs) in many oxide heterointerfaces.”  Strontium titanate crystals often form with vacancies at sites where oxygen atoms would sit in a perfect crystal.  The vacancies “have been indicated to be the driving force” of, e.g., strontium titanate’s emission of blue and green light.  The study by scientists at the University of Tennessee, Knoxville and Oak Ridge National Laboratory (ORNL), reported in the June 1, 2016 issue of Computational Materials Science, examined the energetics of oxygen-vacancy diffusion to enable the tailoring of strontium titanate crystals for future uses. 

 

This particular study was not the first to analyze vacancy diffusion in strontium titanate using density functional theory.  The amount of energy required for an oxygen vacancy to be relocated had been estimated differently by earlier analyses, with estimates ranging widely (from 0.35 electronvolts[Wikipedia] to 0.9 electronvolts) for unclear reasons.  Some analyses showed that the number and arrangement of cubic units included in the analyzed crystal lattice would affect the estimated energy, with less energy required in larger crystals—a reasonably expected finding for nanocrystals, but one of limited guidance for how much energy it takes for a vacancy to move in a macroscopic crystal.  Whether extra electrons were present or absent in the oxygen-atom vacancies also affected the diffusion-energy estimates.  The mathematical forms assumed for the density functionals were significant too, with different researchers’ assumptions leading to different conclusions about how much energy it takes to move an oxygen vacancy.  As the authors of the University of Tennessee/ORNL study noted, this energy might also be affected by other, possibly unknown, factors.  Accordingly, these researchers systematically examined the effects of nanocrystal size, different functionals for describing how the perovskite’s electron interactions and interchanges affect its energy, and the presence or absence of electrons in the oxygen-atom vacancies on the vacancies’ diffusion.  They found, among other things, that all these factors were significant for small nanocrystals, but that only the finite crystal size significantly affected vacancy migration in larger crystals. 
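To see why the spread of earlier estimates matters, consider the standard Arrhenius form for the rate at which a vacancy hops over a migration barrier E_m at temperature T (a generic illustration with an attempt frequency ν₀ assumed, not a calculation from the paper):

\[
\Gamma \;\sim\; \nu_{0}\, e^{-E_{m}/k_{B}T} .
\]

At room temperature, k_B T ≈ 0.026 eV, so the difference between a 0.35-electronvolt and a 0.9-electronvolt barrier changes the predicted hop rate by a factor of about e^{0.55/0.026}, or roughly 10⁹.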

 

Analyses by other researchers combine density functional theory with additional analysis methods to understand other physical systems, including charged-atom solutions and suspensions that contain larger, electrically charged biomolecules, colloids, or nanoparticles—common soft-matter and biological compositions whose behavior is significantly affected by the charges’ interactions.  Salt solutions containing electrically charged nanoparticles were analyzed by Sandia National Laboratories researchers, who used both density functional theory and explicit computer simulations of atomic motions, and described their work[DoE PAGES] in a special (July 7, 2016) issue of the Journal of Physical Chemistry B.  Nanoparticles separated by empty space that were all positively charged or all negatively charged would repel each other, but when placed in a solution containing both positive and negative ions[Wikipedia], the nanoparticles’ mutual repulsion may be overcome by their attractions to ions whose charges are opposite to theirs (counterions). 

 

The conditions under which the nanoparticles would mutually attract instead of repel, as determined both by molecular-dynamics simulations and by density functional theory, are described in the Journal of Physical Chemistry B paper.  Repulsion would result if the counterions are only capable of bonding chemically to one other atom, but if the counterions could bond to 2 or 3 atoms, the nanoparticles would be drawn toward each other and could even aggregate under certain conditions.  If secondary counterion layers form over the primary layers around the nanoparticles, the nanoparticles become more widely spaced and more loosely bound.  Density functional analysis shows that nanoparticle attraction can only occur when there are electrostatic correlations among the counterions, in general when the counterions form highly correlated condensed layers, and when the nanoparticles are close enough together for their counterion layers to overlap or at least nearly overlap.  But the presence of ions that have the same type of charge as the nanoparticles and that can bond chemically to only a single additional atom fundamentally affects how the nanoparticles interact.  The paper presents several details of how the ion charges, densities, and chemical potentials[Wikipedia] were found to affect the interactions of particles between 2 and 7 nanometers in size. 

 


Figure 4.  Simulation of two negatively charged nanoparticles (large pink spheres) surrounded by positive ions (cyan) and negative ions (magenta) in a salt solution; neutral fluid particles are not shown.  Solutions of electrically charged atoms that contain larger charged entities like this are common soft-matter and biological compositions.  This situation is analyzed with a combination of molecular dynamics modeling and density functional theory, as described in the Journal of Physical Chemistry B paper “Charged nanoparticle attraction in multivalent salt solution: A classical-fluids density functional theory and molecular dynamics study”[DoE PAGES].  (Illustration from p. 2, ibid.) 

 

Density functional theory has also been combined in various ways with other computational methods to answer questions about different physical systems.  In a 40-month project to mathematically model what happens in solar cells as they convert light into electrical energy[SciTech Connect], the researchers used density functionals that vary with time to describe the solar cells’ electrons while describing their atomic nuclei in a way that takes only their more significant quantum-mechanical features into account[Wikipedia].  A different group of researchers worked out an essential component for using x-ray pulses to watch the particle dynamics of processes in materials of all kinds, not just solar cells.[SciTech Connect]  For this technique to be useful, one has to know how the behavior of a material’s charged particles affects the x-rays, so that one can tell what’s happening in the material by observing what happens to the x-rays that shine on it.  The researchers thus unified density functional theory with two other analysis methods “to achieve a more realistic material-specific picture of the interaction between X-rays and complex matter”, concentrating on cuprate[Wikipedia] materials, on which most experiments had been done. 

 

Further combination of density functional theory with other methods was used to analyze the electron structures of lanthanide and actinide metal complexes[Wikipedia], as reported in the Dalton Transactions paper “The roles of 4f- and 5f-orbitals in bonding: A magnetochemical, crystal field, density functional theory, and multi-reference wavefunction study”[DoE PAGES].  The “4f- and 5f-orbitals” mentioned in the title are electron wavefunctions for isolated atoms or ions in which the electron described has three units of orbital angular momentum (in multiples of the reduced Planck constant ħ[Wikipedia]) and occupies the fourth or fifth set of energy levels in that atom or ion.  Since different metal ions in these complexes have similar radii, separating them chemically requires multiple stages; better understanding of how those ions bond to others in the same material might inform improvements in the separation processes.  Wavefunctions for electrons that move within sets of atoms configured slightly differently are slightly different themselves; one can imagine a continuum of wavefunctions ranging from those of electrons in a metal complex, in which neighboring atoms share electrons that form chemical bonds between them, to wavefunctions of electrons near each of the same atoms isolated by infinite separation from the others.  As one could see by progressing along the continuum, the isolated atoms’ 4f and 5f wavefunctions correspond to different wavefunctions in the assembled complex, some of which are wavefunctions for chemical bonds between atoms.  The analysis showed that isolated-atom 5f wavefunctions appear to correspond to chemical bonds in the complex, but that only one of the seven types of isolated-atom 4f wavefunction corresponds significantly to chemical-bond wavefunctions. 
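The count of seven follows from a standard quantum-mechanical fact, noted here for context rather than taken from the paper:  f orbitals have orbital quantum number l = 3, and the number of independent orbitals for a given l is

\[
2l + 1 = 2(3) + 1 = 7 .
\]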



Nuclear chemistry and physics

 

Density functional theory has also been used to analyze how the products of nuclear fuels’ chain reactions affect the fuels’ ability to maintain those reactions.  The nucleus of an atom of fissile material is induced to split by absorbing a neutron.  This results in the release of more neutrons that can induce fission in other fissile nuclei, the release of energy as the positively-charged fragments of the first nucleus repel each other, the heating of the surrounding material as the mutually repelling fragments crash into other atoms, and the settling of the fragments into new positions among those other atoms, which may be dislocated from their original positions in the process.  Since nuclei can split in any of several ways, each fragment will have one of several possible combinations of protons and neutrons, thus constituting (with surrounding electrons) an atom of one of several possible elements—at least some of which can have significant effects on the fuel’s physical properties.  According to the “Report on simulation of fission gas and fission product diffusion in UO2”[SciTech Connect], when some of the uranium atom fragments in uranium dioxide (UO2) fuel are atoms of noble gases[Wikipedia] like xenon (Xe), the gas atoms can reduce the thermal conductivity of the fuel itself and of the gap between the fuel and its cladding, make the fuel swell into the cladding, and increase the pressure of (helium) gas within the cladding.  A different report[SciTech Connect], with some of the same authors, notes that the reduced thermal conductivity of the fuel-cladding gap increases the risk of fuel-pellet melting and also makes the fuel hotter, thus increasing the fission-gas production and making the gas more mobile, further exacerbating the problem.  Enough gas pressure could even rupture the fuel cladding. 


Figure 5.  How a double vacancy (divacancy) “moves” when an atom moves.  (a) Schematic showing that the migration of a uranium divacancy (dashed line) is related to the migration of one of its constituent vacancies.  As the migrating uranium atom (shown in yellow) moves from a corner of the depicted cubic arrangement to the center of one face, the divacancy shifts its position and orientation.  (b and c) Combined snapshots, from two different angles, of the uranium ion migrating to the nearest vacancy, as determined from density functional theory calculations.  The green solid line represents a straight path, but from the snapshots it is clear that the actual path is curved away from that direction as well as tilted.  This illustration and description, after “Report on simulation of fission gas and fission product diffusion in UO2”[SciTech Connect], p. 10, involve the motion of a uranium atom, but the divacancy motion would be similar if atoms of a different element occupied some of the typically “uranium” positions. 

 

The first report describes how density functional theory was used to study the “formation, binding and migration energies of small Xe atoms and vacancies” while other phenomena were analyzed by additional mathematical methods.  While earlier calculations, assuming that the diffusion involves xenon occupying one of two adjoining uranium-atom vacancies (possibly also adjoining some oxygen-atom vacancies), have generally predicted that xenon should diffuse more slowly through irradiated uranium dioxide fuel than it actually does in real reactors, the report’s authors found that xenon in triple uranium-atom vacancies should have faster, more realistic diffusion speeds.  The authors also used density functional theory to calculate relevant material properties of larger vacancies, with six uranium and eight oxygen sites; they found the results consistent with actually observed diffusion speeds, but noted that more work was needed to solidify their conclusions—which they planned to undertake. 

 

Work described in the second report goes further in some respects than that described in the first, dealing with two noble gases (xenon and krypton) and four dioxides (uranium dioxide, plutonium dioxide, thorium dioxide, and cerium dioxide).  The first two oxides include fissile atoms; thorium is a “fertile” metal that can be transmuted into a fissile uranium isotope after absorbing a neutron, and cerium dioxide is often used as a less problematic surrogate for material studies of uranium dioxide and plutonium dioxide.  The second report’s authors found mathematical functions for potentials[Wikipedia] that accurately represent the interaction forces among the oxide and noble-gas atoms over the broad temperature ranges that nuclear fuels are subject to in reactors.  Using those functions to investigate how radiation damage affects the low-temperature diffusion of fission gas, they found that the diffusion is too large to be accounted for by the early phase of radiation damage, in which incoming radiation irreversibly knocks fuel atoms out of position.  More displacement of fission gas was found to result from the fuel electrons’ stopping of the radiation, though further calculation would be needed to see whether the electronic stopping phase accounts for the rest of the fission gas diffusion. 

 


Figure 6.  This figure shows how temperatures change during the irradiation of uranium dioxide containing xenon atoms.  Red and blue semitransparent regions indicate hot and cold regions, respectively, with atoms below 600 kelvins (326.85 °C, 620.33 °F) not shown.  The opaque spheres indicate displaced xenon atoms, their color indicating how far they were displaced from their initial positions.  It can be seen that the first atom to be knocked out of place by the incoming radiation recoils rapidly and causes branching within the first 0.01 picosecond (0.01 ps).  Further branching occurs within the next few picoseconds as the cascade spreads out.  By 20 picoseconds it can be seen that significant xenon displacement has occurred and the cascade is beginning to cool down.  Finally, by 30 picoseconds, the cascade energy has more or less fully dissipated and the xenon atoms are permanently displaced from their original configuration.  Note that, although for clarity they are not shown here, there is also displacement of the uranium and oxygen ions that form the bulk of the material.  (After “Milestone report: The simulation of radiation driven gas diffusion in UO2 at low temperature”[SciTech Connect], pp. 11 and 12.) 

 

Although uranium dioxide is commonly used as a nuclear reactor fuel, other fuels are being investigated to make reactors more accident-tolerant.  To better evaluate one candidate, uranium silicide (U3Si2), researchers at Argonne National Laboratory and Idaho National Laboratory performed density functional calculations[SciTech Connect] to predict how fission gases would behave in that fuel under various conditions occurring in reactors that use ordinary water instead of heavy water[Wikipedia] to slow down the neutrons produced in fission and to cool the reactor[Wikipedia].  According to their calculations, most of the fission gas stays in bubbles within fuel granules during steady-state reactor operation, whereas at temperatures above 1000 kelvins (726.85 °C, 1340.33 °F), fission-gas bubbles between fuel granules and the release of fission gas dominate.  The researchers also developed a mathematical model of overpressurized bubbles to simulate fission gas behavior in a loss-of-coolant accident, and found from the model that, after a 70-second accident, bubbles within fuel grains are still dominant, making gaseous swelling of the fuel controllable.  While the researchers encouraged improvement of their models (particularly the models of gas bubbles between fuel granules) to further evaluate uranium silicide’s performance and its longer-term accident tolerance, they noted that according to their current models the fuel’s fission gas behavior is benign at both steady-state and loss-of-coolant accident conditions in a light-water reactor. 

 

Plutonium, whose fissile isotopes are also commonly used in nuclear reactors, had its atomic arrangements analyzed by density functional theory in a study published as the Los Alamos National Laboratory technical report “Ab initio study of the effects of dilute defects on the local structure of unalloyed δ-plutonium”[SciTech Connect].  At atmospheric pressure, pure plutonium metal only exhibits its maximally close-packed, face-centered cubic δ (delta) phase at temperatures ranging from about 583 to 725 kelvins (310 °C to 452 °C), but if the plutonium has a low density of impurities or vacancies, its atoms can be arranged in a slightly defective version of the delta phase at lower temperatures.  For example, the delta phase can exist at room temperature if some of the plutonium atoms are replaced by gallium.  Furthermore, as a radioactive element, plutonium generates its own impurities as it ages, most often when plutonium atoms emit alpha particles (helium nuclei) and thereby turn into uranium atoms.  The recoil of the uranium atom is expected to leave a vacancy in most cases.  The density functional theory analysis showed that gallium atoms make the delta structure more stable and tend not to be near vacancies in the plutonium-atom lattice, since a vacancy’s proximity would make the lattice’s energy higher than it would be otherwise; gallium paired with uranium leaves the plutonium lattice less distorted than it would be if the uranium atom were by itself, and much less distorted than if the uranium atom were paired with a different defect—a pairing which may form stable lattice defects. 
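For the plutonium isotope most commonly involved (an illustration of the decay described above, not a detail drawn from the report), the alpha decay is

\[
{}^{239}\mathrm{Pu} \;\longrightarrow\; {}^{235}\mathrm{U} + {}^{4}\mathrm{He}\;(\alpha\ \text{particle}),
\]

with the recoiling uranium atom typically knocked off its lattice site, leaving the vacancy mentioned above.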

 

The uses of density functional theory in the analyses described thus far applied the technique to electron configurations.  However, density functional theory isn’t intrinsically applicable only to electrons.  A different analysis of plutonium[DoE PAGES], published in Physical Review C, used density functional theory to understand the behavior of the protons and neutrons that make up plutonium atoms’ nuclei—specifically, the nuclei of plutonium-239 atoms that absorb one additional neutron and afterward undergo fission—in order to see how likely each possible proton and neutron distribution is among the smaller nuclei that the plutonium-240 nucleus splits into.  As the authors noted, accurate knowledge of these distributions is essential for purposes ranging from optimizing reactor fuel cycles to understanding how heavy elements might be synthesized in outer space by existing nuclei’s absorption of fission fragments.  They found that their calculations, each based on somewhat different assumptions, typically implied the most likely distribution to be within two proton or neutron masses of the most common distribution seen in experiments.  However, to reach the ~10% accuracy required for scientific and technical uses, more accurate models of several important physical effects may be needed as input for the calculations. 
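Whatever the split, the fragments’ proton and neutron numbers must add up to those of the compound nucleus before any prompt neutrons are emitted (a bookkeeping constraint implicit in such analyses, stated here for orientation):

\[
Z_{1} + Z_{2} = 94, \qquad N_{1} + N_{2} = 146 \qquad \text{for } {}^{240}\mathrm{Pu}\ (Z = 94,\ N = 146).
\]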

 


Figure 7.  The relative frequencies with which the fission fragments of plutonium-240 have various masses and electric charges, as determined from compilations of experimental data (Schillebeeckx, Nishio, JEFF-3.1) and as calculated from density functional theory using two different estimates of how the energy of the nucleus depends on the density function.  Plutonium-240 forms when plutonium-239 nuclei absorb single neutrons, but is less stable and has a small probability per second of spontaneous fission.  (From “Fission fragment charge and mass distributions in ²³⁹Pu(n, f) in the adiabatic nuclear energy density functional theory”[DoE PAGES], pp. 12 and 13.) 

 


References

 

Wikipedia

 


Reports available through DoE PAGES

Reports available through OSTI’s SciTech Connect

  • “Building a Unified Computational Model for the Resonant X-Ray Scattering of Strongly Correlated Materials” [Metadata]


Additional references