National Library of Energy BETA

Sample records for model runs ecmwfdiag

  1. ARM - Instrument - ecmwfdiag

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Instrument: European Centre for Medium Range Weather Forecasts Diagnostic Analyses (ECMWFDIAG). Instrument Categories: Derived Quantities and Models. General Overview: ARM receives two different types of ECMWF products. Diagnostic data derived from ECMWF model runs are especially generated for ARM

  2. ARM - Campaign Instrument - ecmwfdiag

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Campaign Instrument: European Centre for Medium Range Weather Forecasts Diagnostic Analyses (ECMWFDIAG). Instrument Categories: Derived Quantities and Models. Campaigns: Fall 1997 SCM IOP [ Download Data ], Southern Great Plains, 1997.09.15 - 1997.10.05; Marine ARM GPCI Investigation of Clouds (MAGIC) [ Download Data ], MAGIC (Marine ARM GPCI Investigation of Clouds); Mobile

  3. Sandia Energy - Developing a Fast-Running Turbine Wake Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Developing a Fast-Running Turbine Wake Model

  4. Computer support to run models of the atmosphere. Final report

    SciTech Connect (OSTI)

    Fung, I.

    1996-08-30

    This research is focused on a better quantification of the variations in CO{sub 2} exchanges between the atmosphere and biosphere and the factors responsible for these exchanges. The principal approach is to infer the variations in the exchanges from variations in the atmospheric CO{sub 2} distribution. The principal tool involves using a global three-dimensional tracer transport model to advect and convect CO{sub 2} in the atmosphere. The tracer model the authors used was developed at the Goddard Institute for Space Studies (GISS) and is derived from the GISS atmospheric general circulation model. A special run of the GCM is made to save high-frequency winds and mixing statistics for the tracer model.

  5. 2013 CEF RUN - PHASE 1 DATA ANALYSIS AND MODEL VALIDATION

    SciTech Connect (OSTI)

    Choi, A.

    2014-05-08

    Phase 1 of the 2013 Cold cap Evaluation Furnace (CEF) test was completed on June 3, 2013 after a 5-day round-the-clock feeding and pouring operation. The main goal of the test was to characterize the CEF off-gas produced from a nitric-formic acid flowsheet feed and confirm whether the CEF platform is capable of producing scalable off-gas data necessary for the revision of the DWPF melter off-gas flammability model; the revised model will be used to define new safety controls on the key operating parameters for the nitric-glycolic acid flowsheet feeds including total organic carbon (TOC). Whether the CEF off-gas data were scalable for the purpose of predicting the potential flammability of the DWPF melter exhaust was determined by comparing the predicted H{sub 2} and CO concentrations using the current DWPF melter off-gas flammability model to those measured during Phase 1; data were deemed scalable if the calculated fractional conversions of TOC-to-H{sub 2} and TOC-to-CO at varying melter vapor space temperatures were found to trend and further bound the respective measured data with some margin of safety. Being scalable thus means that for a given feed chemistry the instantaneous flow rates of H{sub 2} and CO in the DWPF melter exhaust can be estimated with some degree of conservatism by multiplying those of the respective gases from a pilot-scale melter by the feed rate ratio. This report documents the results of the Phase 1 data analysis and the necessary calculations performed to determine the scalability of the CEF off-gas data. A total of six steady state runs were made during Phase 1 under non-bubbled conditions by varying the CEF vapor space temperature from near 700 to below 300C, as measured in a thermowell (T{sub tw}). At each steady state temperature, the off-gas composition was monitored continuously for two hours using MS, GC, and FTIR in order to track mainly H{sub 2}, CO, CO{sub 2}, NO{sub x}, and organic gases such as CH{sub 4}. The standard deviation of the average vapor space temperature during each steady state ranged from 2 to 6C; however, those of the measured off-gas data were much larger due to the inherent cold cap instabilities in the slurry-fed melters. In order to predict the off-gas composition at the sampling location downstream of the film cooler, the measured feed composition was charge-reconciled and input into the DWPF melter off-gas flammability model, which was then run under the conditions for each of the six Phase 1 steady states. In doing so, it was necessary to perform an overall heat/mass balance calculation from the melter to the Off-Gas Condensate Tank (OGCT) in order to estimate the rate of air inleakage as well as the true gas temperature in the CEF vapor space (T{sub gas}) during each steady state by taking into account the effects of thermal radiation on the measured temperature (T{sub tw}). The results of Phase 1 data analysis and subsequent model runs showed that the predicted concentrations of H{sub 2} and CO by the DWPF model correctly trended and further bounded the respective measured data in the CEF off-gas by over predicting the TOC-to-H{sub 2} and TOC-to-CO conversion ratios by a factor of 2 to 5; an exception was the 7X over prediction of the latter at T{sub gas} = 371C but the impact of CO on the off-gas flammability potential is only minor compared to that of H{sub 2}. 
More importantly, the seemingly-excessive over prediction of the TOC-to-H{sub 2} conversion by a factor of 4 or higher at T{sub gas} < ~350C was attributed to the conservative antifoam decomposition scheme added recently to the model and therefore is considered a modeling issue and not a design issue. At T{sub gas} > ~350C, the predicted TOC-to-H{sub 2} conversions were closer to but still higher than the measured data by a factor of 2, which may be regarded as adequate from the safety margin standpoint. The heat/mass balance calculations also showed that the correlation between T{sub tw} and T{sub gas} in the CEF vapor space was close to that of the scale SGM, whose data were ta

  6. Simple Dynamic Gasifier Model That Runs in Aspen Dynamics

    SciTech Connect (OSTI)

    Robinson, P.J.; Luyben, W.L.

    2008-10-15

    Gasification (or partial oxidation) is a vital component of 'clean coal' technology. Sulfur and nitrogen emissions can be reduced, overall energy efficiency is increased, and carbon dioxide recovery and sequestration are facilitated. Gasification units in an electric power generation plant produce a fuel for driving combustion turbines. Gasification units in a chemical plant generate gas, which can be used to produce a wide spectrum of chemical products. Future plants are predicted to be hybrid power/chemical plants with gasification as the key unit operation. The widely used process simulator Aspen Plus provides a library of models that can be used to develop an overall gasifier model that handles solids. So steady-state design and optimization studies of processes with gasifiers can be undertaken. This paper presents a simple approximate method for achieving the objective of having a gasifier model that can be exported into Aspen Dynamics. The basic idea is to use a high molecular weight hydrocarbon that is present in the Aspen library as a pseudofuel. This component should have the same 1:1 hydrogen-to-carbon ratio that is found in coal and biomass. For many plantwide dynamic studies, a rigorous high-fidelity dynamic model of the gasifier is not needed because its dynamics are very fast and the gasifier gas volume is a relatively small fraction of the total volume of the entire plant. The proposed approximate model captures the essential macroscale thermal, flow, composition, and pressure dynamics. This paper does not attempt to optimize the design or control of gasifiers but merely presents an idea of how to dynamically simulate coal gasification in an approximate way.

  7. Running jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running jobs Running jobs Overview and Basic Description Euclid is a single node system with 48 processors. It supports both multiprocessing (MPI) and multithreading programming models. Interactive Jobs All Euclid jobs are interactive. To launch an MPI job, type in this at the shell prompt: % mpirun -np numprocs executable_name where numprocs is the total number of MPI processes that will be executed. Interactive Usage Policy Due to the dynamic and unpredictable nature of visualization and data
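
    As a concrete illustration of the launch command quoted above (the executable name and process count are placeholders, not values from the Euclid documentation):

      # launch 8 MPI processes of a hypothetical executable from the shell prompt
      % mpirun -np 8 ./my_mpi_app
      # numprocs (here 8) should not exceed the 48 processors on the node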

  8. Long-run growth rate in a random multiplicative model (Journal Article) |

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect Long-run growth rate in a random multiplicative model Citation Details In-Document Search Title: Long-run growth rate in a random multiplicative model We consider the long-run growth rate of the average value of a random multiplicative process x{sub i+1} = a{sub i}x{sub i} where the multipliers a{sub i}=1+ρexp(σW{sub i}-1/2 σ²t{sub i}) have Markovian dependence given by the exponential of a standard Brownian motion W{sub i}. The average value (x{sub n}) is given by the

  9. Running of the Yukawa Couplings in a Two Higgs Doublet Model

    SciTech Connect (OSTI)

    Montes de Oca Y, J. H.; Juarez W, S. R.; Kielanowski, P.

    2008-07-02

    We solve the one loop Renormalization Group Equations (RGE) for the Yukawa couplings in the Standard Model with two Higgs doublets. In the RGE we include the contributions of the up and down quarks. In this approximation we explore universality and unification assumptions to study the mass-hierarchy problem through the running of the vacuum expectation values.

  10. Running Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Submitting Jobs To Mendel+ How to submit your job to the UGE. Read More Running with Java Solutions to some of the common problems users have with running on Genepool when the...

  11. Running Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Also pointers to more NERSC documentations on SLURM. Read More Interactive Jobs Learn how to run interactive jobs on Cori. Read More Batch Jobs SLURM keywords and...

  12. Running Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Moab scheduler and Torque resource manager. Read More Interactive Jobs There are two types of interactive jobs. The first type runs on a login node. These applications are...
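
    A minimal sketch of the second type of interactive job described above, i.e. one scheduled through Torque/Moab rather than run on a login node; the resource limits are illustrative assumptions and any site-specific queue name is omitted:

      # request one node, one core, for 30 minutes, as an interactive job
      % qsub -I -l nodes=1:ppn=1 -l walltime=00:30:00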

  13. Dark Matter Benchmark Models for Early LHC Run-2 Searches. Report of the ATLAS/CMS Dark Matter Forum

    SciTech Connect (OSTI)

    Abercrombie, Daniel

    2015-07-06

    One of the guiding principles of this report is to channel the efforts of the ATLAS and CMS collaborations towards a minimal basis of dark matter models that should influence the design of the early Run-2 searches. At the same time, a thorough survey of realistic collider signals of Dark Matter is a crucial input to the overall design of the search program.

  14. Search for non-standard model signatures in the WZ/ZZ final state at CDF run II

    SciTech Connect (OSTI)

    Norman, Matthew; /UC, San Diego

    2009-01-01

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb{sup -1} of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high {cflx s}. Additionally limits are set on the direct decay of new physics to ZZ andWZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.

  15. Running jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    jobs may be run on Edison by requesting resources from the batch system. "salloc -p debug -N " is the basic command to request interactive resources. Read More Batch Jobs...
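
    A hedged sketch of the salloc request mentioned above, with the node count (left blank in the snippet) and the application name filled in as placeholders:

      # request 2 nodes in the debug partition for interactive use
      % salloc -p debug -N 2
      # once the allocation is granted, launch the code with srun
      % srun -n 48 ./my_app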

  16. Running Scripts

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Scripts Running Scripts Login versus Compute Nodes On Cray system login nodes, use Python as you normally would in any Unix environment, but be mindful of resource consumption since login nodes are shared by many users at the same time. To execute a Python script in the Edison or Cori batch/interactive environment (via sbatch/salloc) use srun: srun -n 1 python ./hello-world.py Of course, if the script has executable permission and contains "#!/usr/bin/env python" as its first
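
    Building on the srun invocation quoted above, a minimal batch-script version of the same hello-world case might look as follows; the node count, time limit, and partition are assumptions rather than values taken from the page:

      #!/bin/bash
      # minimal SLURM batch script (assumed node count, time limit, and partition)
      #SBATCH -N 1
      #SBATCH -t 00:05:00
      #SBATCH -p debug
      srun -n 1 python ./hello-world.py

    The script would be submitted with sbatch and its queue state checked with squeue.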

  17. Running jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Euclid is a single node system with 48 processors. It supports both multiprocessing (MPI) and multithreading programming models. Interactive Jobs All Euclid jobs are...

  18. Search for the Trilepton Signal of the Minimal Supergravity Model in D0 Run II

    SciTech Connect (OSTI)

    Binder, Meta; /Munich U.

    2005-06-01

    A search for associated chargino neutralino pair production is performed in the trilepton decay channel q{bar q} {yields} {tilde {chi}}{sub 1}{sup {+-}} {tilde {chi}}{sub 2}{sup 0} {yields} {ell}{sup {+-}} {nu} {tilde {chi}}{sub 1}{sup 0} {mu}{sup {+-}} {mu}{sup {-+}} {tilde {chi}}{sub 1}{sup 0}, using data collected with the D0 detector at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron Collider. The data sample corresponds to an integrated luminosity of {approx}300 pb{sup -1}. A dedicated event selection is applied to all samples including the data sample and the Monte Carlo simulated samples for the Standard Model background and the Supersymmetry signal. Events with two muons plus an additional isolated track, replacing the requirement of a third charged lepton in the event, are analyzed. Additionally, selected events must have a large amount of missing transverse energy due to the neutrino and the two {tilde {chi}}{sub 1}{sup 0}. After all selection cuts are applied, 2 data events are found, with an expected number of background events of 1.75 {+-} 0.34 (stat.) {+-} 0.46 (syst.). No evidence for Supersymmetry is found and limits on the production cross section times leptonic branching fraction are set. When the presented analysis is considered in combination with three other decay channels, no evidence for Supersymmetry is found. Limits on the production cross section times leptonic branching fraction are set. A lower chargino mass limit of 117 GeV at 95% CL is then derived for the mSUGRA model in a region of parameter space with enhanced leptonic branching fractions.

  19. RHIC Au beam in Run 2014

    SciTech Connect (OSTI)

    Zhang, S. Y.

    2014-09-15

    Au beam at the RHIC ramp in run 2014 is reviewed together with the run 2011 and run 2012. Observed bunch length and longitudinal emittance are compared with the IBS simulations. The IBS growth rate of the longitudinal emittance in run 2014 is similar to run 2011, and both are larger than run 2012. This is explained by the large transverse emittance at high intensity observed in run 2012, but not in run 2014. The big improvement of the AGS ramping in run 2014 might be related to this change. The importance of the injector intensity improvement in run 2014 is emphasized, which gives rise to the initial luminosity improvement of 50% in run 2014, compared with the previous Au-Au run 2011. In addition, a modified IBS model, which is calibrated using the RHIC Au runs from 9.8 GeV/n to 100 GeV/n, is presented and used in the study.

  20. Running with Java

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running with Java Running with Java Workflows that require the Java Virtual Machine will run into a couple of issues when running on Genepool. Java Software installed on Genepool...

  1. Running Jobs Efficiently

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Optimization Running Jobs Efficiently Running Jobs Efficiently Job Efficiency A job's efficiency is the ratio of its CPU time to the actual time it took to run, i.e., cputime ...

  2. Running on Carver

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running on Carver Running on Carver STAR software has been copied from the usual installation on /common on PDSF to /project/projectdirs/star/common. At this point the installation...

  3. Department of Energy to Provide Supercomputing Time to Run NOAA...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy to Provide Supercomputing Time to Run NOAA's Climate Change Models Department of Energy to Provide Supercomputing Time to Run NOAA's Climate Change Models ...

  4. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Large Scale Jobs Running Large Scale Jobs Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O dominates run time. This page lists some available programming and run time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers
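
    One common remedy for the memory-per-core issue mentioned above is to under-populate the compute nodes at launch time. On a Cray system such as Hopper this is controlled with aprun placement flags; the numbers below are purely illustrative:

      # run 512 MPI ranks with only 8 ranks per 24-core node,
      # giving each rank roughly three times the default memory
      % aprun -n 512 -N 8 ./my_large_job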

  5. Running Parallel Discrete Event Simulators on Sierra

    SciTech Connect (OSTI)

    Barnes, P. D.; Jefferson, D. R.

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  6. Running on Carver

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running on Carver Running on Carver ATLAS software is obtained via cvmfs which is installed on PDSF nodes. There is presently no cvmfs installation available on Carver so it is not...

  7. Running Jobs by Group

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-02-01 08:06:40

  8. Running Jobs by Group

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 13:59:48...

  9. Running on Carver

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running on Carver Running on Carver The ALICE software is installed on /project so no porting of code is necessary. Users can simply set up their environment as they do on PDSF and run. Last edited: 2016-02-01 08:07:1

  10. Running on Carver

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running on Carver Running on Carver The Daya Bay software is installed on PDSF on /common so is therefore unavailable on Carver. At this point there has been no effort to port the code to /project for use on Carver. Last edited: 2016-02-01 08:07:13

  11. Combined Search for the Standard Model Higgs Boson Decaying to bb̄ Using the D0 Run II Data Set

    SciTech Connect (OSTI)

    Abazov, V. M.; Abbott, B.; Acharya, B. S.; Adams, M.; Adams, T.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Alverson, G.; Askew, A.; Atkins, S.; Augsten, K.; Avila, C.; Badaud, F.; Bagby, L.; Baldin, B.; Bandurin, D. V.; Banerjee, S.; Barberis, E.; Baringer, P.; Bartlett, J. F.; Bassler, U.; Bazterra, V.; Bean, A.; Begalli, M.; Bellantoni, L.; Beri, S. B.; Bernardi, G.; Bernhard, R.; Bertram, I.; Besanon, M.; Beuselinck, R.; Bhat, P. C.; Bhatia, S.; Bhatnagar, V.; Blazey, G.; Blessing, S.; Bloom, K.; Boehnlein, A.; Boline, D.; Boos, E. E.; Borissov, G.; Bose, T.; Brandt, A.; Brandt, O.; Brock, R.; Bross, A.; Brown, D.; Brown, J.; Bu, X. B.; Buehler, M.; Buescher, V.; Bunichev, V.; Burdin, S.; Buszello, C. P.; Camacho-Prez, E.; Casey, B. C. K.; Castilla-Valdez, H.; Caughron, S.; Chakrabarti, S.; Chakraborty, D.; Chan, K. M.; Chandra, A.; Chapon, E.; Chen, G.; Chevalier-Thry, S.; Cho, D. K.; Cho, S. W.; Choi, S.; Choudhary, B.; Cihangir, S.; Claes, D.; Clutter, J.; Cooke, M.; Cooper, W. E.; Corcoran, M.; Couderc, F.; Cousinou, M.-C.; Croc, A.; Cutts, D.; Das, A.; Davies, G.; de Jong, S. J.; De La Cruz-Burelo, E.; Dliot, F.; Demina, R.; Denisov, D.; Denisov, S. P.; Desai, S.; Deterre, C.; DeVaughan, K.; Diehl, H. T.; Diesburg, M.; Ding, P. F.; Dominguez, A.; Dubey, A.; Dudko, L. V.; Duggan, D.; Duperrin, A.; Dutt, S.; Dyshkant, A.; Eads, M.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Enari, Y.; Evans, H.; Evdokimov, A.; Evdokimov, V. N.; Facini, G.; Feng, L.; Ferbel, T.; Fiedler, F.; Filthaut, F.; Fisher, W.; Fisk, H. E.; Fortner, M.; Fox, H.; Fuess, S.; Garcia-Bellido, A.; Garca-Gonzlez, J. A.; Garca-Guerra, G. A.; Gavrilov, V.; Gay, P.; Geng, W.; Gerbaudo, D.; Gerber, C. E.; Gershtein, Y.; Ginther, G.; Golovanov, G.; Goussiou, A.; Grannis, P. D.; Greder, S.; Greenlee, H.; Grenier, G.; Gris, Ph.; Grivaz, J.-F.; Grohsjean, A.; Grnendahl, S.; Grnewald, M. W.; Guillemin, T.; Gutierrez, G.; Gutierrez, P.; Hagopian, S.; Haley, J.; Han, L.; Harder, K.; Harel, A.; Hauptman, J. M.; Hays, J.; Head, T.; Hebbeker, T.; Hedin, D.; Hegab, H.; Heinson, A. P.; Heintz, U.; Hensel, C.; Heredia De La Cruz, I.; Herner, K.; Hesketh, G.; Hildreth, M. D.; Hirosky, R.; Hoang, T.; Hobbs, J. D.; Hoeneisen, B.; Hogan, J.; Hohlfeld, M.; Howley, I.; Hubacek, Z.; Hynek, V.; Iashvili, I.; Ilchenko, Y.; Illingworth, R.; Ito, A. S.; Jabeen, S.; Jaffr, M.; Jayasinghe, A.; Jeong, M. S.; Jesik, R.; Jiang, P.; Johns, K.; Johnson, E.; Johnson, M.; Jonckheere, A.; Jonsson, P.; Joshi, J.; Jung, A. W.; Juste, A.; Kaadze, K.; Kajfasz, E.; Karmanov, D.; Kasper, P. A.; Katsanos, I.; Kehoe, R.; Kermiche, S.; Khalatyan, N.; Khanov, A.; Kharchilava, A.; Kharzheev, Y. N.; Kiselevich, I.; Kohli, J. M.; Kozelov, A. V.; Kraus, J.; Kulikov, S.; Kumar, A.; Kupco, A.; Kur?a, T.; Kuzmin, V. A.; Lammers, S.; Landsberg, G.; Lebrun, P.; Lee, H. S.; Lee, S. W.; Lee, W. M.; Lei, X.; Lellouch, J.; Li, D.; Li, H.; Li, L.; Li, Q. Z.; Lim, J. K.; Lincoln, D.; Linnemann, J.; Lipaev, V. V.; Lipton, R.; Liu, H.; Liu, Y.; Lobodenko, A.; Lokajicek, M.; Lopes de Sa, R.; Lubatti, H. J.; Luna-Garcia, R.; Lyon, A. L.; Maciel, A. K. A.; Madar, R.; Magaa-Villalba, R.; Malik, S.; Malyshev, V. L.; Maravin, Y.; Martnez-Ortega, J.; McCarthy, R.; McGivern, C. L.; Meijer, M. M.; Melnitchouk, A.; Menezes, D.; Mercadante, P. G.; Merkin, M.; Meyer, A.; Meyer, J.; Miconi, F.; Mondal, N. K.; Mulhearn, M.; Nagy, E.; Naimuddin, M.; Narain, M.; Nayyar, R.; Neal, H. A.; Negret, J. P.; Neustroev, P.; Nguyen, H. 
T.; Nunnemann, T.; Orduna, J.; Osman, N.; Osta, J.; Padilla, M.; Pal, A.; Parashar, N.; Parihar, V.; Park, S. K.; Partridge, R.; Parua, N.; Patwa, A.; Penning, B.; Perfilov, M.; Peters, Y.; Petridis, K.; Petrillo, G.; Ptroff, P.; Pleier, M.-A.; Podesta-Lerma, P. L. M.; Podstavkov, V. M.; Popov, A. V.; Prewitt, M.; Price, D.; Prokopenko, N.; Qian, J.; Quadt, A.; Quinn, B.; Rangel, M. S.; Ranjan, K.; Ratoff, P. N.; Razumov, I.; Renkel, P.; Ripp-Baudot, I.; Rizatdinova, F.; Rominsky, M.; Ross, A.; Royon, C.; Rubinov, P.; Ruchti, R.; Sajot, G.; Salcido, P.; Snchez-Hernndez, A.; Sanders, M. P.; Santos, A. S.; Savage, G.; Sawyer, L.; Scanlon, T.; Schamberger, R. D.; Scheglov, Y.; Schellman, H.; Schlobohm, S.; Schwanenberger, C.; Schwienhorst, R.; Sekaric, J.; Severini, H.; Shabalina, E.; Shary, V.; Shaw, S.; Shchukin, A. A.; Shivpuri, R. K.; Simak, V.; Skubic, P.; Slattery, P.; Smirnov, D.; Smith, K. J.; Snow, G. R.; Snow, J.; Snyder, S.; Sldner-Rembold, S.; Sonnenschein, L.; Soustruznik, K.; Stark, J.; Stoyanova, D. A.; Strauss, M.; Suter, L.; Svoisky, P.; Takahashi, M.; Titov, M.; Tokmenin, V. V.; Tsai, Y.-T.; Tschann-Grimm, K.; Tsybychev, D.; Tuchming, B.; Tully, C.; Uvarov, L.; Uvarov, S.; Uzunyan, S.; Van Kooten, R.; van Leeuwen, W. M.; Varelas, N.; Varnes, E. W.

    2012-09-20

    We present the results of the combination of searches for the standard model Higgs boson produced in association with a W or Z boson and decaying into bb̄ using the data sample collected with the D0 detector in pp̄ collisions at √s=1.96 TeV at the Fermilab Tevatron Collider. We derive 95% C.L. upper limits on the Higgs boson cross section relative to the standard model prediction in the mass range 100 GeV≤MH≤150 GeV, and we exclude Higgs bosons with masses smaller than 102 GeV at the 95% C.L. In the mass range 120 GeV≤MH≤145 GeV, the data exhibit an excess above the background prediction with a global significance of 1.5 standard deviations, consistent with the expectation in the presence of a standard model Higgs boson.

  12. Running Jobs Intermittently Slow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Jobs Intermittently Slow Running Jobs Intermittently Slow October 2, 2014 Symptom: User jobs are seeing intermittent slowness, jobs can run very slow in certain stages or appear hung. This could happen to jobs having input/output on global file systems (/project, /global/homes, /global/scratch2). It could also happen to applications using shared libraries, or CCM jobs on any Hopper file systems. The slowness is identified to be related to DVS/GPFS issues, the cause of slowness yet

  13. Running Jobs Efficiently

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Optimization » Running Jobs Efficiently Running Jobs Efficiently Job Efficiency A job's efficiency is the ratio of its CPU time to the actual time it took to run, i.e., cputime / walltime. A good efficiency at PDSF might be 70% or higher. Certainly an efficiency of less than 50% is indicative of some sort of problem with the job. The most common reason for low efficiency is slow IO reading data from disk but other factors, such as loading software, also can contribute. To see the efficiency for
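
    As a worked reading of that ratio: a job that accumulated 6 hours of CPU time over 10 hours of wall-clock time has an efficiency of 6/10 = 60%, below the 70% benchmark quoted above though not yet in the problematic sub-50% range. Assuming a Grid Engine-style batch system (as used on PDSF) and a made-up job ID, the accounting numbers for a finished job can be checked with:

      # report wallclock and cpu accounting fields for job 1234567
      % qacct -j 1234567 | egrep 'wallclock|cpu'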

  14. Wiley Coyotes Santa Run

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wiley Coyotes Santa Run AirSoft rifles are used to scare coyotes away from heavily used areas. WSI reaches major milestone with more than two million safe driving miles. Employees geared up for the annual run with thousands of Saint Nicks. See page 8. See page 3. See page 6. Revolutionary Gemini Gives Scientists Exciting Data Scientists at National Security Technologies, LLC (NSTec) and Los Alamos National Laboratory (LANL) are excited about the Gemini subcritical experiment series they're

  15. Running Interactive Batch Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Interactive Batch Jobs Running Interactive Batch Jobs You cannot login to the PDSF batch nodes directly but you can run an interactive session on a batch node using either qlogin or qsh. This can be useful if you are doing something that is potentially disruptive or if the interactive nodes are overloaded. qlogin will give you an interactive session in the same window as your original session on PDSF, however, you must have your ssh keys in place. You can do this locally on PDSF by following
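
    A brief sketch of the two options described above (nothing here beyond the commands the page itself names; both assume ssh keys are already in place as noted):

      # interactive session on a batch node, in the current terminal
      % qlogin
      # or an interactive session in a new window
      % qsh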

  16. SSRL- Experimental Run Schedule

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    FY2008 Experimental Run Schedules 2008 Run Ends August 11, 2008. User Operations will resume November 2008. Operating / Maintenance Beam Line Schedule FY2009 FY2008 X-ray (1-4, 2-1, 2-3, 4-2, 6-2, 7-2, 7-3, 9-3, 10-2, 11-2, 11-3) Macromolecular Crystallography Support Staff Schedule (1-5, 7-1, 9-1, 9-2, 11-1) VUV (5-1, 5-2, 5-4, 8-1, 8-2, 10-1) Schedules for previous years Accelerator Schedule (for staff): Accelerator Physics SPEAR Maintenance and Shutdown SPEAR Startup Injector Startup

  17. PDU Run 10

    SciTech Connect (OSTI)

    Not Available

    1981-09-01

    PDU Run 10, a 46-day H-Coal syncrude mode operation using Wyodak coal, successfully met all targeted objectives, and was the longest PDU operation to date in this program. Targeted coal conversion of 90 W % was exceeded with a C{sub 4}-975°F distillate yield of 43 to 48 W %. Amocat 1A catalyst was qualified for Pilot Plant operation based on improved operation and superior performance. PDU 10 achieved improved yields and lower hydrogen consumption compared to PDU 6, a similar operation. High hydroclone efficiency and high solids content in the vacuum still were maintained throughout the run. Steady operations at lower oil/solids ratios were demonstrated. Microautoclave testing was introduced as an operational aid. Four additional studies were successfully completed during PDU 10. These included a catalyst tracer study in conjunction with Sandia Laboratories; tests on letdown valve trims for Battelle; a fluid dynamics study with Amoco; and special high-pressure liquid sampling.

  18. Weighted Running Jobs by Group

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Weighted Running Jobs by Group Weighted Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-02-01 08:06:59

  19. Access, Compiling and Running Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Access Compiling and Running Jobs Access, Compiling and Running Jobs Access Dirac Dirac can be accessed by logging into carver.nersc.gov. Compile To compile your code, you need to...

  20. Coordinating the 2009 RHIC Run

    ScienceCinema (OSTI)

    Brookhaven Lab - Mei Bai

    2010-01-08

    Physicists working at the Brookhaven National Lab's Relativistic Heavy Ion Collider (RHIC) are exploring the puzzle of proton spin as they begin taking data during the 2009 RHIC run. For the first time, RHIC is running at a record energy of 500 giga-elect

  1. Benchmark Distribution & Run Rules

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Rules Benchmark Distribution & Run Rules Applications and micro-benchmarks for the Crossroads/NERSC-9 procurement. You can find more information by clicking on the header for each of the topics listed below. Change Log Change and update notes for the benchmark suite. Application Benchmarks The following applications will be used by the Sustained System Improvement metric in measuring the performance improvement of proposed systems relative to NERSC's Edison platform. General Run Rules

  2. Run

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SPEAR Operating Schedule 2014-2015 (dated 1/8/2015): month-by-month calendar grid covering October 2014 through September 2015, marking PAMM/AP periods, low alpha running, Spear Down periods, and University Holidays.

  3. Run

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SPEAR Operating Schedule 2012-2013: month-by-month calendar grid covering September 2012 through October 2013, marking Maintenance, AP, Spear Down, and University Holiday periods.

  4. Access, Compiling and Running Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Access Compiling and Running Jobs Access, Compiling and Running Jobs Access Dirac Dirac can be accessed by logging into carver.nersc.gov. Compile To compile your code, you need to land on a dirac compute node 1st: qsub -q dirac_reg -l nodes=1 -l walltime=00:30:00 -I After you are inside the job, you can load the necessary module for compile: module unload pgi module unload openmpi module unload cuda module load gcc-sl6 module load openmpi-gcc-sl6 module load cuda Now you can compile your code.
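
    The commands quoted above can be strung together into one interactive session; the final compile step is an assumption (the snippet stops before showing it), using a hypothetical CUDA source file:

      # land on a Dirac compute node for 30 minutes
      % qsub -q dirac_reg -l nodes=1 -l walltime=00:30:00 -I
      # inside the job, swap the default modules for the GCC/CUDA stack
      % module unload pgi openmpi cuda
      % module load gcc-sl6 openmpi-gcc-sl6 cuda
      # assumed compile step for an illustrative source file
      % nvcc -o my_kernel my_kernel.cu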

  5. Combined Search for the Standard Model Higgs Boson Decaying to bb̄ Using the D0 Run II Data Set

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Abazov, V. M.; Abbott, B.; Acharya, B. S.; Adams, M.; Adams, T.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Alverson, G.; Askew, A.; et al

    2012-09-20

    We present the results of the combination of searches for the standard model Higgs boson produced in association with a W or Z boson and decaying into bb̄ using the data sample collected with the D0 detector in pp̄ collisions at √s=1.96 TeV at the Fermilab Tevatron Collider. We derive 95% C.L. upper limits on the Higgs boson cross section relative to the standard model prediction in the mass range 100 GeV≤MH≤150 GeV, and we exclude Higgs bosons with masses smaller than 102 GeV at the 95% C.L. In the mass range 120 GeV≤MH≤145 GeV, the data exhibit an excess above the background prediction with a global significance of 1.5 standard deviations, consistent with the expectation in the presence of a standard model Higgs boson.

  6. PRELIMINARY Run Shutdown BL Commissioning

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PRELIMINARY 2011-2012 run schedule: month-by-month calendar grid covering September 2011 through October 2012, marking Run, Shutdown, BL Commissioning, Maintenance/AP, SPEAR Down/Injector Startup, SPEAR Startup, Spear Down, and University Holiday periods.

  7. Run on Sun | Open Energy Information

    Open Energy Info (EERE)

    Run on Sun Name: Run on Sun Address: 655 S Raymond AV Place: Pasadena, California Country: United States Zip: 91105 Region: Southern CA Area Sector:...

  8. Running Jobs under SLURM on Babbage

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Jobs under SLURM on Babbage Running Jobs under SLURM on Babbage Overview SLURM (Simple Linux Utility For Resource Management) batch scheduler is used on Babbage. Basically SLURM job commands are: salloc (ask nodes for interactive job) sbatch: submit a batch job scancel: delete a queued or running job squeue: check queued jobs sinfo: check queue configuration More details on on SLURM keywords, job control and monitoring commands, etc. can be found at the SLURM Introduction (with links to
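
    To make the command list above concrete, a minimal submit-and-monitor sequence might look like this; the script name and job ID are placeholders rather than anything from the Babbage documentation:

      # submit a batch script; sbatch prints the assigned job id
      % sbatch my_job.sl
      # list your own queued and running jobs
      % squeue -u $USER
      # cancel a queued or running job by id
      % scancel 1234567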

  9. Brent Run Generating Station Biomass Facility | Open Energy Informatio...

    Open Energy Info (EERE)

    Brent Run Generating Station Biomass Facility Name Brent Run Generating Station Biomass Facility Facility Brent Run Generating Station Sector Biomass...

  10. Mill Run Wind Power Project | Open Energy Information

    Open Energy Info (EERE)

    Run Wind Power Project Name Mill Run Wind Power Project Facility Mill Run Wind Power Project Sector Wind energy Facility Type Commercial Scale Wind...

  11. LCLS Experimental Run Schedules | Linac Coherent Light Source

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LCLS Experimental Run Schedules Check-In | Computer Accounts | Data Collection & Analysis | Policies | Proposals | Shipping | User Portal LCLS generally operates November through August, using the shutdown period for upgrades and maintenance projects. LCLS Tentative Long-Range Operating Plans Run 13 (March to August 2016) Run 12 (October 2015 to March 2016) Run 11 (March to August 2015) Run 10 (October 2014 to March 2015) Run 9 (March to August 2014) Run 8 (July 2013 to March 2014) Run 7

  12. The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Rojo, Juan; Accardi, Alberto; Ball, Richard D.; Cooper-Sarkar, Amanda; de Roeck, Albert; Farry, Stephen; Ferrando, James; Forte, Stefano; Gao, Jun; Harland-Lang, Lucian; et al

    2015-09-16

    The accurate determination of Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterization and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarize the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. As a result, this document aims to provide useful input to the LHC collaborations to prioritize their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.

  13. RHIC Polarized proton performance in run-8

    SciTech Connect (OSTI)

    Montag,C.; Bai, M.; MacKay, W.W.; Roser, T.; Abreu, N.; Ahrens, L.; Barton, D.; Beebe-Wang, J.; Blaskiewicz, M.; Brennan, J.M.; Brown, K.A.; Bruno, D.; Bunce, G.; Calaga, R.; Cameron, P.; Connolly, R.; D'Ottavio, T.; Drees, A.; Fedotov, A.V.; Fischer, W.; Ganetis, G.; Gardner, C.; Glenn, J.; Hayes, T.; Huang, H.; Ingrassia, P.; Kayran, D.A.; Kewisch, J.; Lee, R.C.; Lin, F.; Litvinenko, V.N.; Luccio, A.U.; Luo, Y.; Makdisi, Y.; Malitsky, N.; Marr, G.; Marusic, A.; Michnoff, R.; Morris, J.; Oerter, B.; Pilat, F.; Pile, P.; Robert-Demolaize, G.; Russo, T.; Satogata, T.; Schultheiss, C.; Sivertz, M.; Smith, K.; Tepikian, S.; D. Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.; Zaltsman, A.; Zelenski, A.; Zeno, K.; Zhang, S.Y.

    2008-10-06

    During Run-8, the Relativistic Heavy Ion Collider (RHIC) provided collisions of spin-polarized proton beams at two interaction regions. Physics data were taken with vertical orientation of the beam polarization, which in the 'Yellow' RHIC ring was significantly lower than in previous years. We present recent developments and improvements as well as the luminosity and polarization performance achieved during Run-8, and we discuss possible causes of the not as high as previously achieved polarization performance of the 'Yellow' ring.

  14. Department of Energy to Provide Supercomputing Time to Run NOAA's Climate

    Office of Environmental Management (EM)

    Change Models | Department of Energy to Provide Supercomputing Time to Run NOAA's Climate Change Models Department of Energy to Provide Supercomputing Time to Run NOAA's Climate Change Models September 8, 2008 - 9:45am Addthis WASHINGTON, DC - The U.S. Department of Energy's (DOE) Office of Science will make available more than 10 million hours of computing time for the U.S. Commerce Department's National Oceanic and Atmospheric Administration (NOAA) to explore advanced climate change models

  15. Running Greener: E-Mobility at SAP

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Running Greener: E-Mobility at SAP. 5.000 Electric Vehicles in 2020. Ashok R, Horst Terhalle, Marcus Wagner, November 14th 2014. Agenda: Why is e-Mobility important (Marcus Wagner); Our strategy "20% e-cars by 2020" (Marcus Wagner); Regional highlights (Ashok R.); E-Mobility Service Provider Solution (Horst Terhalle); FAQ (all).

  16. Rotary running tool for rotary lock mandrel

    SciTech Connect (OSTI)

    Dollison, W.W.

    1992-07-28

    This patent describes a running tool for a rotary lock mandrel. It comprises: a housing having a connection on the end thereof; an anvil slidably mounted in the housing and extending from the other end of the housing; means in the housing for rotating the anvil relative to the housing including: a helical slot in the housing, a lug slidably mounted in the helical slot and attached to the anvil and a spring biasing the anvil to extend from the housing; and means on the extending anvil for rotatively and releasably connecting the running tool to a rotary lock mandrel.

  17. 07-08 Run R3.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2007-2008 SPEAR run schedule (R3): calendar grid marking Run, Shutdown, Maintenance, AP, Injector, SPEAR Startup, Spear Down, and University Holiday periods.

  18. Alternative Fuels Data Center: Airport Shuttles Run on Propane

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

    Alternative Fuels Data Center: Airport Shuttles Run on Propane

  19. China Resources Wind Power Development Co Ltd Hua Run | Open...

    Open Energy Info (EERE)

    Resources Wind Power Development Co Ltd Hua Run Name: China Resources Wind Power Development Co Ltd (Hua Run) Place: Shantou, Guangdong Province, China...

  20. Scalable Run Time Data Collection Analysis and Visualization...

    Office of Scientific and Technical Information (OSTI)

    Scalable Run Time Data Collection Analysis and Visualization (Presentation). Citation Details In-Document Search Title: Scalable Run Time Data Collection Analysis and Visualization...

  1. OVIS: Scalable Run Time Data Collection Analysis and Visualization...

    Office of Scientific and Technical Information (OSTI)

    OVIS: Scalable Run Time Data Collection Analysis and Visualization (Abstract). Citation Details In-Document Search Title: OVIS: Scalable Run Time Data Collection Analysis and...

  2. Running against hunger | National Nuclear Security Administration

    National Nuclear Security Administration (NNSA)

    Running against hunger | National Nuclear Security Administration

  3. Running Line-Haul Trucks on Ethanol

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

    Imagine driving a 55,000-pound tractor-trailer that runs on corn! If you find it difficult to imagine, you can ask the truck drivers for Archer Daniels Midland (ADM) what it's like. For the past 4 years, they have been piloting four trucks powered by ethyl alcohol, or "ethanol," derived from corn. Several advantages to operating trucks on ethanol rather than on conventional petroleum diesel fuel present themselves. Because ethanol can be produced domestically, unlike most of our

  4. Top and bottom squark searches in run II of the Fermilab Tevatron (Journal

    Office of Scientific and Technical Information (OSTI)

    Article) | SciTech Connect Top and bottom squark searches in run II of the Fermilab Tevatron Citation Details In-Document Search Title: Top and bottom squark searches in run II of the Fermilab Tevatron We estimate the Fermilab Tevatron run II potential for top and bottom squark searches. We find an impressive reach in several of the possible discovery channels. We also study some new channels which may arise in nonconventional supersymmetry models. In each case we rely on a detailed Monte

  5. SSRL_2003_Run_Sched.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SSRL 2002-2003 run schedule (6/02 revision): calendar grid marking Run, Shutdown, Weekends, Maintenance/AP, Injector Startup, University Holidays, PPS Certification, Injector/SPEAR Startup, and SLAC Closed periods; edited by Robleto, Scott.

  6. Illinois: Ozinga Concrete Runs on Natural Gas and Opens Private...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Illinois: Ozinga Concrete Runs on Natural Gas and Opens Private Station Illinois: Ozinga Concrete Runs on Natural Gas and Opens Private Station November 6, 2013 - 12:00am Addthis...

  7. Leprechaun fun run and walk | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leprechaun fun run and walk March 17, 2016 12:00PM to 1:00PM Location Building 617 Type Social Event A three-mile run and two-mile walk will be featured with refreshments available at the finish line. Meet at Bldg. 600 (near the Exchange Club). All employees and their guests are welcome. Related Sites Argonne Running Club

  8. The NUHM2 after LHC Run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Malik, S.; Marrouche, J.; et al

    2014-12-17

    We make a frequentist analysis of the parameter space of the NUHM2, in which the soft supersymmetry (SUSY)-breaking contributions to the masses of the two Higgs multiplets, m2Hu,d, vary independently from the universal soft SUSY-breaking contributions m20 to the masses of squarks and sleptons. Our analysis uses the MultiNest sampling algorithm with over 4 × 10⁸ points to sample the NUHM2 parameter space. It includes the ATLAS and CMS Higgs mass measurements as well as the ATLAS search for supersymmetric jets + /ET signals using the full LHC Run 1 data, the measurements of BR(Bs→μ⁺μ⁻) by LHCb and CMS together with other B-physics observables, electroweak precision observables and the XENON100 and LUX searches for spin-independent dark-matter scattering. We find that the preferred regions of the NUHM2 parameter space have negative SUSY-breaking scalar masses squared at the GUT scale for squarks and sleptons, m20 < 0, as well as m2Hu < m2Hd < 0. The tension present in the CMSSM and NUHM1 between the supersymmetric interpretation of (g – 2)μ and the absence to date of SUSY at the LHC is not significantly alleviated in the NUHM2. We find that the minimum χ2 = 32.5 with 21 degrees of freedom (dof) in the NUHM2, to be compared with χ2/dof = 35.0/23 in the CMSSM, and χ2/dof = 32.7/22 in the NUHM1. We find that the one-dimensional likelihood functions for sparticle masses and other observables are similar to those found previously in the CMSSM and NUHM1.

  9. 2005_Run 1-21-05.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SSRL 2004-2005 SPEAR run schedule (1-21-05 revision): calendar grid marking Run, Shutdown, Maintenance/AP, Injector Startup, Spear Down, Injector/SPEAR Startup, and University Holiday periods.

  10. 2005_Run 3-29-05.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SSRL 2004-2005 SPEAR Run Schedule (3-29-05 revision): calendar grid marking SLAC Shutdown, AP, Maintenance, and User Conference periods.

  11. APEX: A Prime EXperiment at Jefferson Lab - Test Run Results and Full Run Plans; Update

    SciTech Connect (OSTI)

    Beacham, James

    2015-06-01

    APEX is an experiment at Thomas Jefferson National Accelerator Facility (JLab) in Virginia, USA, that searches for a new gauge boson (A') with sub-GeV mass and coupling to ordinary matter of g' ~ (10^-6 - 10?)e. Electrons impinge upon a fixed target of high-Z material. An A' is produced via a process analogous to photon bremsstrahlung, decaying to an e⁺e⁻ pair. A test run was held in July of 2010, covering mA' = 175 to 250 MeV and couplings g'/e > 10?. A full run is approved and will cover mA' ~ 65 to 525 MeV and g'/e > 2.3 x 10??, and is expected to occur sometime in 2016 or 2017.

  12. The NUHM2 after LHC Run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Malik, S.; Marrouche, J.; et al

    2014-12-17

    We make a frequentist analysis of the parameter space of the NUHM2, in which the soft supersymmetry (SUSY)-breaking contributions to the masses of the two Higgs multiplets, m2Hu,d, vary independently from the universal soft SUSY-breaking contributions m20 to the masses of squarks and sleptons. Our analysis uses the MultiNest sampling algorithm with over 4 × 10⁸ points to sample the NUHM2 parameter space. It includes the ATLAS and CMS Higgs mass measurements as well as the ATLAS search for supersymmetric jets + /ET signals using the full LHC Run 1 data, the measurements of BR(Bs→μ⁺μ⁻) by LHCb and CMS together with other B-physics observables, electroweak precision observables and the XENON100 and LUX searches for spin-independent dark-matter scattering. We find that the preferred regions of the NUHM2 parameter space have negative SUSY-breaking scalar masses squared at the GUT scale for squarks and sleptons, m20 < 0, as well as m2Hu < m2Hd < 0. The tension present in the CMSSM and NUHM1 between the supersymmetric interpretation of (g – 2)μ and the absence to date of SUSY at the LHC is not significantly alleviated in the NUHM2. We find that the minimum χ2 = 32.5 with 21 degrees of freedom (dof) in the NUHM2, to be compared with χ2/dof = 35.0/23 in the CMSSM, and χ2/dof = 32.7/22 in the NUHM1. We find that the one-dimensional likelihood functions for sparticle masses and other observables are similar to those found previously in the CMSSM and NUHM1.

  13. Queueing & Running Jobs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Queueing & Running Jobs | Argonne Leadership Computing Facility: documentation covering Cobalt job control, how to queue a job, running jobs FAQs, and queuing and running on BG/Q systems.

  14. Polarization simulations in the RHIC run 15 lattice

    SciTech Connect (OSTI)

    Meot, F.; Huang, H.; Luo, Y.; Ranjbar, V.; Robert-Demolaize, G.; White, S.

    2015-05-03

    RHIC polarized proton Run 15 uses a new acceleration ramp optics, compared to RHIC Run 13 and earlier runs, in relation with electron-lens beam-beam compensation developments. The new optics induces different strengths in the depolarizing snake resonance sequence, from injection to top energy. As a consequence, polarization transport along the new ramp has been investigated, based on spin tracking simulations. Sample results are reported and discussed.

  15. Dynamic aperture evaluation for the RHIC 2009 polarized proton runs

    SciTech Connect (OSTI)

    Luo,Y.; Tepikain, S.; Bai, M.; Beebe-Wang, J.; Fischer, W.; Montag, c.; Robert-Demolaize, G.; Satogata, T.; Trbojevic, D.

    2009-05-04

    In this article we numerically evaluate the dynamic apertures of the proposed lattices for the coming Relativistic Heavy Ion Collider (RHIC) 2009 polarized proton (pp) 100 GeV and 250 GeV runs. One goal of this study is to find out the appropriate {beta}* for the coming 2009 pp runs. Another goal is to check the effect of second order chromaticity correction in the RHIC pp runs.

  16. DOE Continues Long-Running Minority Educational Research Program |

    Office of Environmental Management (EM)

    Department of Energy Continues Long-Running Minority Educational Research Program DOE Continues Long-Running Minority Educational Research Program April 19, 2012 - 1:00pm Addthis Washington, DC - Four projects that will strengthen and promote U.S. energy security, scientific discovery and economic competitiveness while producing a diverse next generation of scientists and engineers have been selected as part of the U.S. Department of Energy's (DOE) long running minority educational research

  17. SRS Recovery Act Completes Major Lower Three Runs Project Cleanup

    Office of Environmental Management (EM)

    14, 2012 AIKEN, S.C. - American Recovery and Reinvestment Act can now claim that 85 percent of the Savannah River Site (SRS) has been cleaned up with the recent completion of the Lower Three Runs (stream) Project. Twenty miles long, Lower Three Runs leaves the main body of the 310-square mile site and runs through parts of Barnwell and Allendale Counties until it flows into the Savannah River. Government property on both sides of the stream acts as a buffer as it runs through privately-owned

  18. Birch Run, Michigan: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    Run, Michigan: Energy Resources Equivalent URI DBpedia Coordinates 43.2508585, -83.7941309

  19. Pleasant Run, Ohio: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    Run, Ohio: Energy Resources Equivalent URI DBpedia Coordinates 39.2997791, -84.5635567

  20. Pleasant Run Farm, Ohio: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    Run Farm, Ohio: Energy Resources Equivalent URI DBpedia Coordinates 39.3031126, -84.5480009

  1. Dry Run, Ohio: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    Dry Run, Ohio: Energy Resources Equivalent URI DBpedia Coordinates 39.1042277, -84.330494

  2. LCLS-scheduling-run_6_Ver4.xlsx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LCLS Approved Experiments for Run 6, June-December 2012. Table columns: Instrument, Prop, Proposal Title, Spokesperson. First listed entry: XPP, L503, Ultrafast Resonant Inelastic X-ray Scattering...

  3. ECOLOGICAL EFFECTS OF CONTAMINANTS IN THE UPPER THREE RUNS INTEGRATOR...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: ECOLOGICAL EFFECTS OF CONTAMINANTS IN THE UPPER THREE RUNS INTEGRATOR OPERABLE UNIT. Citation Details, In-Document Search.

  4. The Renormalization Group Running of the Higgs Quartic Coupling: Unification vs. Phenomenology

    SciTech Connect (OSTI)

    Montes de Oca Y, J. H.; Juarez W, S. R.; Kielanowski, P.

    2007-02-09

    Within the framework of the standard model (SM) of elementary particles, we obtained numerical solutions for the running Higgs mass, considering the renormalization group equations in the one- and two-loop approximations. Through the triviality condition (TC) and the stability condition (SC) on the Higgs quartic coupling {lambda}{sub H}, the bounds on the running Higgs mass have been fixed. The numerical results are presented for two special cases: one considering a unification of the three gauge couplings at the energy E{sub U} ~ 10{sup 13} GeV, and the other using the current experimental data for the gauge couplings.
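    For orientation, the quantity being evolved obeys, at one loop, the standard model renormalization group equation for the quartic coupling (a textbook sketch in the convention V ⊃ {lambda}{sub H}(H†H)²; the paper works to two loops and its normalization of {lambda}{sub H} may differ):

    \[ 16\pi^2 \frac{d\lambda_H}{d\ln\mu} = 24\lambda_H^2 + 12\lambda_H y_t^2 - 6y_t^4 - 3\lambda_H\left(3g^2 + g'^2\right) + \frac{3}{8}\left[2g^4 + \left(g^2 + g'^2\right)^2\right], \]

    where y_t is the top Yukawa coupling and g, g' the SU(2) and U(1) gauge couplings. The triviality bound follows from requiring λ_H to remain finite up to the chosen cutoff, and the stability bound from requiring λ_H > 0 over the same range.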

  5. The CMSSM and NUHM1 after LHC Run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Buchmueller, O.; De Roeck, A.; Cavanaugh, R.; Dolan, M. J.; Ellis, J. R.; Flacher, H.; Heinemeyer, S.; Isidori, G.; Marrouche, J.; Martinez Santos, D.; et al

    2014-06-13

    We analyze the impact of data from the full Run 1 of the LHC at 7 and 8 TeV on the CMSSM with μ > 0 and μ < 0 and the NUHM1 with μ > 0, incorporating the constraints imposed by other experiments such as precision electroweak measurements, flavour measurements, the cosmological density of cold dark matter and the direct search for the scattering of dark matter particles in the LUX experiment. We use the following results from the LHC experiments: ATLAS searches for events with E/T accompanied by jets with the full 7 and 8 TeV data, the ATLAS and CMS measurements of the mass of the Higgs boson, the CMS searches for heavy neutral Higgs bosons and a combination of the LHCb and CMS measurements of BR(B_s → μ⁺μ⁻) and BR(B_d → μ⁺μ⁻). Our results are based on samplings of the parameter spaces of the CMSSM for both μ > 0 and μ < 0 and of the NUHM1 for μ > 0 with 6.8×10⁶, 6.2×10⁶ and 1.6×10⁷ points, respectively, obtained using the MultiNest tool. The impact of the Higgs-mass constraint is assessed using FeynHiggs 2.10.0, which provides an improved prediction for the masses of the MSSM Higgs bosons in the region of heavy squark masses. It yields in general larger values of M_h than previous versions of FeynHiggs, reducing the pressure on the CMSSM and NUHM1. We find that the global χ² functions for the supersymmetric models vary slowly over most of the parameter spaces allowed by the Higgs-mass and the E/T searches, with best-fit values that are comparable to the χ²/dof for the best Standard Model fit. As a result, we provide 95% CL lower limits on the masses of various sparticles and assess the prospects for observing them during Run 2 of the LHC.
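    As an illustration of how a 95% CL lower limit is typically read off a one-dimensional profiled χ² function built from such a sampling (a hedged sketch only, not the actual analysis machinery; the variable names, binning, and the Δχ² = 3.84 threshold convention are assumptions):

      import numpy as np

      def lower_limit_from_profile(mass, chi2, delta_chi2=3.84, nbins=100):
          """Profile chi^2 over all other parameters in mass bins, then return the
          left edge of the first bin whose profiled chi^2 lies within delta_chi2
          of the global minimum (3.84 ~ 95% CL for one parameter, two-sided)."""
          edges = np.linspace(mass.min(), mass.max(), nbins + 1)
          idx = np.clip(np.digitize(mass, edges), 1, nbins)   # bin index per sampled point
          profile = np.array([chi2[idx == i].min() if np.any(idx == i) else np.inf
                              for i in range(1, nbins + 1)])
          allowed = profile - profile.min() <= delta_chi2
          return edges[np.argmax(allowed)]                    # left edge of first allowed bin

      # Toy usage with fabricated points standing in for a sampled chain:
      rng = np.random.default_rng(0)
      m_gluino = rng.uniform(500.0, 4000.0, 200_000)          # GeV, hypothetical
      chi2 = 30.0 + 1e-6 * (m_gluino - 2500.0) ** 2 + rng.exponential(2.0, m_gluino.size)
      print("toy 95% CL lower limit:", lower_limit_from_profile(m_gluino, chi2), "GeV")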

  6. Instrument Front-Ends at Fermilab During Run II

    SciTech Connect (OSTI)

    Meyer, Thomas; Slimmer, David; Voy, Duane; /Fermilab

    2011-07-13

    The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and WindRiver's VxWorks real-time operating system running in a VME crate processor.

  7. Preparations for p-Au run in 2015

    SciTech Connect (OSTI)

    Liu, C.

    2014-12-31

    The p-Au collision run is a unique category of RHIC operation. This results from the different charge-to-mass ratios of the proton and the fully stripped Au ion (1 vs. 79/197). The p-Au run requires a special acceleration ramp and the movement of a number of beam components dictated by the beam trajectories. The DX magnets will be moved for the first time in the history of RHIC. In this note, the planning and preparations for the p-Au run are presented.
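    A one-line rigidity relation makes the asymmetry concrete (a generic sketch using the standard magnetic-rigidity formula, not the actual Run 15 ramp parameters):

    \[ B\rho = \frac{p}{Ze}, \qquad \frac{p}{A} = \frac{Z}{A}\,e\,(B\rho), \]

    so at equal rigidity the momentum per nucleon of fully stripped Au (Z/A = 79/197 ≈ 0.40) is only about 40% of the proton's (Z/A = 1). Colliding the two species at matched per-nucleon energies therefore forces the two rings onto different rigidities and ramps, which in turn shifts the beam trajectories through the shared DX magnets.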

  8. Run VMC 5K | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Run VMC 5K Run VMC 5K Posted: January 13, 2016 - 4:40pm Y-12's team for the Volunteer Ministry Center 5K. Run VMC is not a new rap group, but the 5K to benefit the Volunteer Ministry Center is a wrap. Y-12's team consisted of more than 20 employees and retirees. The race began and ended at Hardin Valley Elementary, and several team members received accolades. Travis Wilson from Mission Support/Infrastructure was the overall winner with a time of 16:48. Other team members who finished in the top

  9. Washington: State Ferries Run Cleaner With Biodiesel | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    April 18, 2013 - Washington State Ferries, owned and operated by the Washington State Department of Transportation, is the largest ferry service in the United States and the third largest in the world. Thanks in part to an American Recovery and Reinvestment Act of 2009 (ARRA) investment from EERE, the ferries now run on a blended biodiesel fuel that will prevent more than 29,000 metric

  10. New Carlsbad Field Office Manager Hits the Ground Running

    Broader source: Energy.gov [DOE]

    CARLSBAD, N.M. – If you want to catch up with Carlsbad Field Office (CBFO) Manager Joe Franco, you may have to run. In fact, his first weeks in his new job have looked like a sprint.

  11. TianRun USA Inc | Open Energy Information

    Open Energy Info (EERE)

    Minnesota. Sector: Wind energy. Product: Minnesota-based investment arm of Goldwind Science & Technology; Beijing Tianrun invested USD 3m to set up the TianRun USA subsidiary in...

  12. TianRun UILK LLC | Open Energy Information

    Open Energy Info (EERE)

    Minnesota. Sector: Wind energy. Product: Minnesota-based joint venture formed by TianRun USA, Horizon Wind, and Dakota Wind to develop the UILK wind farm project in Minnesota...

  13. SRS Recovery Act Completes Major Lower Three Runs Project Cleanup

    Broader source: Energy.gov [DOE]

    The American Recovery and Reinvestment Act can now claim that 85 percent of the Savannah River Site (SRS) has been cleaned up with the recent completion of the Lower Three Runs (stream) Project.

  14. OVIS: Scalable Run Time Data Collection Analysis and Visualization

    Office of Scientific and Technical Information (OSTI)

    OVIS: Scalable Run Time Data Collection Analysis and Visualization (Abstract), Conference. Abstract not provided. Authors: Chen, Frank Xiaoxiao; Das, Ananya. Publication Date: 2009-07-01. OSTI Identifier: 1141424. Report Number(s): SAND2009-4761C 506638. DOE Contract Number: DE-AC04-94AL85000. Resource Type: Conference. Resource Relation:

  15. Launching applications on compute and service processors running under

    Office of Scientific and Technical Information (OSTI)

    different operating systems in scalable network of processor boards with routers (Patent). A multiple processor computing

  16. Office of Fossil Energy Continues Long-Running Minority Educational

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    April 19, 2012 - Annie Whatley, Deputy Director, Office of Minority Education and Community Development. Editor's Note: This article is cross-posted from the Office of Fossil Energy. Four projects that will strengthen and promote U.S. energy security, scientific

  17. Scalable Run Time Data Collection Analysis and Visualization

    Office of Scientific and Technical Information (OSTI)

    Scalable Run Time Data Collection Analysis and Visualization (Presentation), Conference. Abstract not provided. Authors: Gentile, Ann C.; Chen, Frank Xiaoxiao; Das, Ananya (Sandia National Laboratories, Livermore, CA). Publication Date: 2009-07-01. OSTI Identifier:

  18. Boise Buses Running Strong with Clean Cities | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    May 28, 2013 - Working with Republic Services, the city of Boise and Valley Regional Transit, Treasure Valley Clean Cities built four compressed natural gas (CNG) fueling stations that allowed all three organizations to transition to CNG vehicles. | Photo courtesy of Valley Regional Transit.

  19. Running the Race for Clean Energy | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    September 16, 2015 - Through the Energy Department's Climate Action Champions initiative, cities like Boston and Minneapolis have identified ways to reduce their carbon footprint by as much as 80 percent by 2050. Linda

  20. Well casing hanger and packoff running and retrieval tool

    SciTech Connect (OSTI)

    Pollock, J.R.; Valka, W.A.

    1992-04-21

    This patent describes a well tool for running a casing hanger and a packoff into, and retrieving a packoff from, a subsea wellhead. It comprises a tubular body including means to releasably connect a packoff to the body for running the packoff into a subsea wellhead; means to releasably connect a packoff to the body for retrieving the packoff from a subsea wellhead; means to relocate the packoff running means and the packoff retrieving means between their functional and non-functional positions; means to releasably connect a casing hanger to the body for running the hanger into a subsea wellhead; a tubular mandrel surrounded by and rotatable with respect to the body; means surrounding the mandrel for moving the casing hanger connection means into functional position; first anti-rotation means preventing relative rotation between the body and the means for moving the casing hanger connection means; second anti-rotation means for preventing relative rotation between connection means; second anti-rotation means for preventing relative rotation between the body and a casing hanger connected thereto: and means for connecting the mandrel to a pipe string for running the tool into a subsea wellhead.

  1. ARM - Datastreams - ecmwfsfcml

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Datastreamsecmwfsfcml Documentation XDC documentation Data Quality Plots ARM Data Discovery Browse Data Comments? We would love to hear from you! Send us a note below or call us at 1-888-ARM-DATA. Send Datastream : ECMWFSFCML ECMWF: model multilevel surface fields at 4 levels, entire coverage Active Dates 2000.09.13 - 2016.02.29 Measurement Categories Surface Properties Originating Instrument European Centre for Medium Range Weather Forecasts Diagnostic Analyses (ECMWFDIAG) Description These

  2. ARM - Datastreams - ecmwfvar

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Datastreamsecmwfvar Documentation XDC documentation Data Quality Plots ARM Data Discovery Browse Data Comments? We would love to hear from you! Send us a note below or call us at 1-888-ARM-DATA. Send Datastream : ECMWFVAR ECMWF: model met. and cloud variables at altitude, entire coverage, 1-hr avg Active Dates 1995.04.17 - 2016.02.29 Measurement Categories Atmospheric State, Cloud Properties Originating Instrument European Centre for Medium Range Weather Forecasts Diagnostic Analyses (ECMWFDIAG)

  3. SACO-1: a fast-running LMFBR accident-analysis code

    SciTech Connect (OSTI)

    Mueller, C.J.; Cahalan, J.E.; Vaurio, J.K.

    1980-01-01

    SACO is a fast-running computer code that simulates hypothetical accidents in liquid-metal fast breeder reactors to the point of permanent subcriticality or to the initiation of a prompt-critical excursion. In the tradition of the SAS codes, each subassembly is modeled by a representative fuel pin with three distinct axial regions to simulate the blanket and core regions. However, analytic and integral models are used wherever possible to cut down the computing time and storage requirements. The physical models and basic equations are described in detail. Comparisons of SACO results to analogous SAS3D results comprise the qualifications of SACO and are illustrated and discussed.

  4. Comments on Injector Proton Beam Study in Run 2014

    SciTech Connect (OSTI)

    Zhang, S. Y.

    2014-09-15

    During the entire period of the injector proton study in the 2014 run, it appears that the beam transverse emittance out of the Booster was larger than in the 2013 run. The emittance measured at the BtA transfer line, and also the transmission from Booster late to AGS late, are presented in support of this argument. In addition to this problem, it appears that the multiturn Booster injection, which defines the transverse emittance, needs more attention. Moreover, for high intensity operations, the space charge effect may already be relevant in RHIC polarized proton runs. With the RHIC proton intensity improvement in the next several years, higher Booster input intensity is needed; therefore, the space charge effect at Booster injection and early ramp may become a new limiting factor.

  5. Fast Bunch Integrators at Fermilab During Run II

    SciTech Connect (OSTI)

    Meyer, Thomas; Briegel, Charles; Fellenz, Brian; Vogel, Greg; /Fermilab

    2011-07-13

    The Fast Bunch Integrator is a bunch intensity monitor designed around measurements made with Resistive Wall Current Monitors. During the Run II period these were used in both the Tevatron and the Main Injector for single and multiple bunch intensity measurements. This paper presents an overview of the design and use of these systems during this period. During the Run II era the Fast Bunch Integrators have found a multitude of uses. From antiproton transfers to multi-bunch beam coalescing, and from Main Injector transfers to halo scraping and lifetime measurements, the Fast Bunch Integrators have proved invaluable in the creation and maintenance of colliding beam stores at Fermilab.

  6. ECOLOGICAL EFFECTS OF CONTAMINANTS IN THE UPPER THREE RUNS INTEGRATOR

    Office of Scientific and Technical Information (OSTI)

    OPERABLE UNIT (Technical Report). No abstract prepared. Authors: Paller, M.; Dyer, S.; Scott, S. Publication Date: 2011-07-18. OSTI Identifier: 1023278. Report Number(s): SRNL-TR-2011-00201. TRN: US201118%%1082. DOE Contract Number: DE-AC09-08SR22470

  7. Analysis of failed ramps during the RHIC FY09 run

    SciTech Connect (OSTI)

    Minty, M.

    2014-08-15

    The Relativistic Heavy Ion Collider (RHIC) is a versatile accelerator that supports operation with polarized protons of up to 250 GeV and ions with up to 100 GeV/nucleon. During any running period, various operating scenarios with different particle species, beam energies or accelerator optics are commissioned. In this report the beam commissioning periods for establishing full energy beams (ramp development periods) from the FY09 run are summarized and, for the purpose of motivating further developments, we analyze the reasons for all failed ramps.

  8. CNS Running Crew conquers marathon | Y-12 National Security Complex

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Posted: April 16, 2015. Marathon runner Barbara King with LiveWise Athletic Trainer Robert Eichin. "It was amazing!" That is how Y-12 employee Barbara King described her first marathon. The 53-year-old Information, Solutions and Services worker, who joined the "Couch to 5K" LiveWise program two years ago, accomplished her feat at the Covenant Health Knoxville Marathon on March 29. She was one of 26

  9. SARA Cadets and Midshipmen Hit the Ground Running | National Nuclear

    National Nuclear Security Administration (NNSA)

    SARA Cadets and Midshipmen Hit the Ground Running | National Nuclear Security Administration.

  10. NNSA employees run to raise awareness about concussions | National Nuclear

    National Nuclear Security Administration (NNSA)

    NNSA employees run to raise awareness about concussions | National Nuclear Security Administration.

  11. Pantexans run against hunger | National Nuclear Security Administration

    National Nuclear Security Administration (NNSA)

    Pantexans run against hunger | National Nuclear Security Administration.

  12. The pMSSM10 after LHC run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    de Vries, K. J.; Bagnaschi, E. A.; Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; et al

    2015-09-01

    We present a frequentist analysis of the parameter space of the pMSSM10, in which the following ten soft SUSY-breaking parameters are specified independently at the mean scalar top mass scale M_SUSY ≡ √(m_t̃1 m_t̃2): the gaugino masses M_1,2,3, the first- and second-generation squark masses m_q̃1 = m_q̃2, the third-generation squark mass m_q̃3, a common slepton mass m_ℓ̃ and a common trilinear mixing parameter A, as well as the Higgs mixing parameter μ, the pseudoscalar Higgs mass M_A and tanβ, the ratio of the two Higgs vacuum expectation values. We use the MultiNest sampling algorithm with ∼1.2×10⁹ points to sample the pMSSM10 parameter space. A dedicated study shows that the sensitivities to strongly interacting sparticle masses of ATLAS and CMS searches for jets, leptons + E/T signals depend only weakly on many of the other pMSSM10 parameters. With the aid of the Atom and Scorpion codes, we also implement the LHC searches for electroweakly interacting sparticles and light stops, so as to confront the pMSSM10 parameter space with all relevant SUSY searches. In addition, our analysis includes Higgs mass and rate measurements using the HiggsSignals code, SUSY Higgs exclusion bounds, the measurements of BR(B_s → μ⁺μ⁻) by LHCb and CMS, other B-physics observables, electroweak precision observables, the cold dark matter density and the XENON100 and LUX searches for spin-independent dark matter scattering, assuming that the cold dark matter is mainly provided by the lightest neutralino χ̃_1^0. We show that the pMSSM10 is able to provide a supersymmetric interpretation of (g−2)_μ, unlike the CMSSM, NUHM1 and NUHM2. As a result, we find (omitting Higgs rates) that the minimum χ² = 20.5 with 18 degrees of freedom (d.o.f.) in the pMSSM10, corresponding to a χ² probability of 30.8%, to be compared with χ²/d.o.f. = 32.8/24 (31.1/23) (30.3/22) in the CMSSM (NUHM1) (NUHM2). We display the one-dimensional likelihood functions for sparticle masses, and we show that they may be significantly lighter in the pMSSM10 than in the other models, e.g., the gluino may be as light as ∼1250 GeV at the 68% CL, and squarks, stops, electroweak gauginos and sleptons may be much lighter than in the CMSSM, NUHM1 and NUHM2. We discuss the discovery potential of future LHC runs, e⁺e⁻ colliders and direct detection experiments.

  13. The pMSSM10 after LHC run 1

    SciTech Connect (OSTI)

    de Vries, K. J.; Bagnaschi, E. A.; Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Malik, S.; Marrouche, J.; Santos, D. Martínez; Olive, K. A.; Sakurai, K.; Weiglein, G.

    2015-09-01

    We present a frequentist analysis of the parameter space of the pMSSM10, in which the following ten soft SUSY-breaking parameters are specified independently at the mean scalar top mass scale M_SUSY ≡ √(m_t̃1 m_t̃2): the gaugino masses M_1,2,3, the first- and second-generation squark masses m_q̃1 = m_q̃2, the third-generation squark mass m_q̃3, a common slepton mass m_ℓ̃ and a common trilinear mixing parameter A, as well as the Higgs mixing parameter μ, the pseudoscalar Higgs mass M_A and tanβ, the ratio of the two Higgs vacuum expectation values. We use the MultiNest sampling algorithm with ∼1.2×10⁹ points to sample the pMSSM10 parameter space. A dedicated study shows that the sensitivities to strongly interacting sparticle masses of ATLAS and CMS searches for jets, leptons + E/T signals depend only weakly on many of the other pMSSM10 parameters. With the aid of the Atom and Scorpion codes, we also implement the LHC searches for electroweakly interacting sparticles and light stops, so as to confront the pMSSM10 parameter space with all relevant SUSY searches. In addition, our analysis includes Higgs mass and rate measurements using the HiggsSignals code, SUSY Higgs exclusion bounds, the measurements of BR(B_s → μ⁺μ⁻) by LHCb and CMS, other B-physics observables, electroweak precision observables, the cold dark matter density and the XENON100 and LUX searches for spin-independent dark matter scattering, assuming that the cold dark matter is mainly provided by the lightest neutralino χ̃_1^0. We show that the pMSSM10 is able to provide a supersymmetric interpretation of (g−2)_μ, unlike the CMSSM, NUHM1 and NUHM2. As a result, we find (omitting Higgs rates) that the minimum χ² = 20.5 with 18 degrees of freedom (d.o.f.) in the pMSSM10, corresponding to a χ² probability of 30.8%, to be compared with χ²/d.o.f. = 32.8/24 (31.1/23) (30.3/22) in the CMSSM (NUHM1) (NUHM2). We display the one-dimensional likelihood functions for sparticle masses, and we show that they may be significantly lighter in the pMSSM10 than in the other models, e.g., the gluino may be as light as ∼1250 GeV at the 68% CL, and squarks, stops, electroweak gauginos and sleptons may be much lighter than in the CMSSM, NUHM1 and NUHM2. We discuss the discovery potential of future LHC runs, e⁺e⁻ colliders and direct detection experiments.

  14. Alternative Fuels Data Center: Pennsylvania School Buses Run on Natural Gas

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

    Alternative Fuels Data Center: Pennsylvania School Buses Run on Natural Gas.

  15. CMS Data Processing Workflows during an Extended Cosmic Ray Run

    SciTech Connect (OSTI)

    Not Available

    2009-11-01

    The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data.

  16. Tevatron End-of-Run Beam Physics Experiments

    SciTech Connect (OSTI)

    Valishev, A.; Gu, X.; Miyamoto, R.; White, S.; Schmidt, F.; Qiang, J.; /LBNL

    2012-05-01

    Before the Tevatron Collider Run II ended in September of 2011, a number of specialized beam study periods were dedicated to the experiments on various accelerator physics concepts and effects during the last year of the machine operation. The study topics included collimation with bent crystals and hollow electron beams, diffusion measurements and various aspects of beam-beam interactions. In this report we concentrate on the subject of beam-beam interactions, summarizing the results of beam experiments. The covered topics include offset collisions, coherent beam stability, effect of the bunch-length-to-beta-function ratio, and operation of AC dipole with colliding beams.

  17. Experimental Run Schedules for Previous Years | Stanford Synchrotron

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Run Schedules for Previous Years: SPEAR operating/maintenance, beam line, and accelerator physics schedules for FY2008 through FY2015, covering the X-ray, VUV (including BL13), and Macromolecular Crystallography beamlines.

  18. WIMP-Search Results from the Second CDMSlite Run

    SciTech Connect (OSTI)

    Agnese, R.

    2015-09-08

    The CDMS low ionization threshold experiment (CDMSlite) uses cryogenic germanium detectors operated at a relatively high bias voltage to amplify the phonon signal in the search for weakly interacting massive particles (WIMPs). Our results are presented from the second CDMSlite run with an exposure of 70 kg-days, which reached an energy threshold for electron recoils as low as 56 eV. Furthermore, a fiducialization cut reduces backgrounds below those previously reported by CDMSlite. New parameter space for the WIMP-nucleon spin-independent cross section is excluded for WIMP masses between 1.6 and 5.5 GeV/c².

  19. SSRL Experimental Run Schedule | Stanford Synchrotron Radiation Lightsource

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Experimental Run Schedule November 2015- May 2016 Schedules for X-ray, VUV and Macromolecular Crystallography beamlines. X-ray VUV (BL5, 8, 10-1, 13-1/2/3) Macromolecular Crystallography see also: Support Staff Schedule SPEAR3 Operating / Maintenance LCLS see Schedule Archive The SSRL storage ring team is in the final stages of installing hardware that will enable reducing the SPEAR3 emittance to 6 nm from its present value of 10 nm. The last piece is the installation of the thin septum

  20. Alternative Fuels Data Center: Santa Fe Metro Fleet Runs on Natural Gas

    Alternative Fuels and Advanced Vehicles Data Center [Office of Energy Efficiency and Renewable Energy (EERE)]

    Alternative Fuels Data Center: Santa Fe Metro Fleet Runs on Natural Gas.

  1. DOE-2 sample run book: Version 2.1E

    SciTech Connect (OSTI)

    Winkelmann, F.C.; Birdsall, B.E.; Buhl, W.F.; Ellington, K.L.; Erdem, A.E.; Hirsch, J.J.; Gates, S.

    1993-11-01

    The DOE-2 Sample Run Book shows inputs and outputs for a variety of building and system types. The samples start with a simple structure and continue to a high-rise office building, a medical building, three small office buildings, a bar/lounge, a single-family residence, a small office building with daylighting, a single-family residence with an attached sunspace, a "parameterized" building using input macros, and a metric input/output example. All of the samples use Chicago TRY weather. The main purpose of the Sample Run Book is instructional. It shows the relationship of LOADS-SYSTEMS-PLANT-ECONOMICS inputs, displays various input styles, and illustrates many of the basic and advanced features of the program. Many of the sample runs are preceded by a sketch of the building showing its general appearance and the zoning used in the input. In some cases we also show a 3-D rendering of the building as produced by the program DrawBDL. Descriptive material has been added as comments in the input itself. We find that a number of users have loaded these samples onto their editing systems and use them as "templates" for creating new inputs. Another way of using them would be to store various portions as files that can be read into the input using the ##include command, which is part of the Input Macro feature introduced in version DOE-2.1D. Note that the energy rate structures here are the same as in the DOE-2.1D samples, but have been rewritten using the new DOE-2.1E commands and keywords for ECONOMICS. The samples contained in this report are the same as those found on the DOE-2 release files. However, the output numbers that appear here may differ slightly from those obtained from the release files. The output on the release files can be used as a check set to compare results on your computer.

  2. Models Datasets

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    iteration by iteration. RevSim is an Excel 2010 based model. Much of the logic is VBA code (Visual Basic for Applications); the user does not need to know VBA to run the...

  3. Impedances and collective instabilities of the Tevatron at Run II

    SciTech Connect (OSTI)

    Ng, King-Yuen, FERMI

    1998-09-01

    The longitudinal and transverse coupling impedances of the Tevatron vacuum chamber are estimated and summed up. The resistive-wall impedances of the beam pipe and the laminations in the Lambertson magnets dominate below {approximately} 50 MHz. Then come the inductive parts of the bellows and BPMs. The longitudinal and transverse collective instabilities, for both single-bunch and multi-bunch operation, are studied using Run II parameters. As expected, the transverse coupled-bunch instability driven by the resistive-wall impedance is the most severe collective instability. However, it can be damped by a transverse damper designed for the correction of injection offsets. The power of such a damper has been studied.

  4. LCLS-scheduling-run_V_Ver9c.xlsx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Calendar grid from the LCLS run-scheduling spreadsheet: experiment assignments by day (proposals L421, L477, L498 and L304 among those shown); the day-by-day layout is not recoverable from this extract.]

  5. The PDF4LHC report on PDFs and LHC data: results from Run I and...

    Office of Scientific and Technical Information (OSTI)

    The PDF4LHC report on PDFs and LHC data: results from Run I and preparation for Run II. Citation Details, In-Document Search.

  6. Students Share Experiences from First Run of BioenergizeME Virtual Science

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    December 18, 2014 - View all student infographics by clicking on website links through the BioenergizeME Virtual Science Fair map. Last week concluded the beta run of the Bioenergy Technologies Office (BETO)

  7. The cce/8.3.0 C++ compiler may run into a linking error on Edison

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The cce/8.3.0 C++ compiler may run into a linking error on Edison. July 1, 2014. You may run into the following...

  8. An overview of Booster and AGS polarized proton operation during Run 15

    SciTech Connect (OSTI)

    Zeno, K.

    2015-10-20

    This note is an overview of the Booster and AGS for the 2015 Polarized Proton RHIC run from an operations perspective. There are some notable differences between this and previous runs. In particular, the polarized source intensity was expected to be, and was, higher this year than in previous RHIC runs. The hope was to make use of this higher input intensity by allowing the beam to be scraped down more in the Booster to provide a brighter and smaller beam for the AGS and RHIC. The RHIC intensity requirements were also higher this run than in previous runs, which caused additional challenges because the AGS polarization and emittance are normally intensity dependent.

  9. SSRL_2004_Run_Sched_3_22_04.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [SPEAR run schedule grid (as of 3/22/04) marking run, shutdown, maintenance/accelerator physics (MA/AP), injector startup, SPEAR startup, SLAC closed and holiday periods; the day-by-day layout is not recoverable from this extract.]

  10. Hydrogen production at run-of-river hydro plants

    SciTech Connect (OSTI)

    Tarnay, D.S.

    1983-12-01

    Production of energy from non-renewable petroleum, natural gas and coal is declining due to depletion and high prices. Presently, research concentrates on reducing consumption, on more efficient use of traditional fuels, and on developing renewable sources of energy and new energy technologies. Most of the new energy sources, however, are not available in a form convenient for the consumer. The new energy must be renewable, economically feasible and transportable, and not all the available renewable energy sources have these qualities. Many scientists and engineers believe that hydrogen meets these criteria best. Hydrogen can be produced from various renewable sources such as solar, wind, geothermal, tidal and glacier energy, ocean thermal energy conversion (OTEC), and, of course, waterpower. The production of hydrogen at run-of-river hydropower plants via electrolysis could be the front-runner in developing new hydrogen energy technologies and open the way to a new hydrogen era, much as the polyphase system and a-c generator of N. Tesla, used at the Niagara Falls hydropower plant, opened the door to a new electrical age in 1895.

  11. Recent program evaluations: Implications for long-run planning

    SciTech Connect (OSTI)

    Baxter, L.W.; Schultz, D.K.

    1994-06-08

    Demand-side management (DSM) remains the centerpiece of California's energy policy. Over the coming decade, California plans to meet 30 percent of the state's incremental electricity demand and 50 percent of its peak demand with DSM programs. The major investor-owned utilities in California recently completed the first round of program impact studies for energy efficiency programs implemented in 1990 and 1991. The central focus of this paper is to assess the resource planning and policy implications of Pacific Gas and Electric (PG&E) Company's recent program evaluations. The paper has three goals. First, we identify and discuss major issues that surfaced from our attempt to apply evaluation results to forecasting and planning questions. Second, we review and summarize the evaluation results for PG&E's primary energy efficiency programs. Third, we change long-run program assumptions, based on our assessment in the second task, and then examine the impacts of these changes on a recent PG&E demand-side management forecast and resource plan.

  12. DEVELOPMENT AND APPLICATION OF A FAST-RUNNING TOOL TO CHARACTERIZE SHOCK DAMAGE WITHIN TUNNEL STRUCTURES

    SciTech Connect (OSTI)

    Glascoe, L; Morris, J; Glenn, L; Krnjajic, M

    2009-03-31

    Successful but time-intensive use of high-fidelity computational capabilities for shock loading events and resultant effects on and within enclosed structures, e.g., tunnels, has led to an interest in developing more expedient methods of analysis. While several tools are currently available for the general study of the failure of structures under dynamic shock loads at a distance, presented are a pair of statistics- and physics-based tools that can be used to differentiate different types of damage (e.g., breach versus yield) as well as quantify the amount of damage within tunnels for loads close-in and with standoff. Use of such faster running tools allows for scoping and planning of more detailed model and test analysis and provides a way to address parametric sensitivity over a large multivariate space.

  13. Search for the neutral MSSM Higgs bosons in the ditau decay channels at CDF Run II

    SciTech Connect (OSTI)

    Cuenca Almenar, Cristobal; /Valencia U., IFIC

    2008-04-01

    This thesis presents the results of a search for the neutral MSSM Higgs bosons decaying to tau pairs, with at least one of the taus decaying leptonically. The search was performed with a sample of 1.8 fb{sup -1} of proton-antiproton collisions at {radical}s = 1.96 TeV provided by the Tevatron and collected by CDF Run II. No significant excess over the Standard Model prediction was found, and a 95% confidence level exclusion limit has been set on the cross section times branching ratio as a function of the Higgs boson mass. This limit has been translated into the MSSM Higgs sector parameter plane, tan{beta} vs. M{sub A}, for the four different benchmark scenarios.

  14. Department of Energy to Provide Supercomputing Time to Run NOAA...

    Office of Environmental Management (EM)

    We can systematically compare it to other climate models and evaluate its simulations against data collected by atmospheric radiation measurements," said retired Navy Vice Admiral ...

  15. Method for compression of data using single pass LZSS and run-length encoding

    DOE Patents [OSTI]

    Berlin, G.J.

    1997-12-23

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data is disclosed. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer. 3 figs.
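    A minimal sketch of the run-length portion of the scheme described above (an illustration only, not the patented implementation; the flag bytes, the minimum run of three, and the exact byte layout are assumptions, a real codec would embed these tokens in the LZSS offset/length framing so literals cannot be confused with flags, and runs longer than about 65 kB would need further splitting, omitted here):

      def rle_encode(data: bytes, min_run: int = 3) -> bytes:
          """Illustrative encoder: short runs (< 18 bytes) store a cleared flag,
          the literal run length and the run byte; runs of 18 bytes or longer
          store a set flag, then run_length // 255 and run_length % 255, then
          the run byte; bytes outside long-enough runs pass through unchanged."""
          out = bytearray()
          i = 0
          while i < len(data):
              j = i
              while j < len(data) and data[j] == data[i]:
                  j += 1                        # extend the current run
              run = j - i
              if run < min_run:                 # too short to be worth encoding
                  out += data[i:j]
              elif run < 18:                    # short run: cleared flag + length
                  out += bytes([0x00, run, data[i]])
              else:                             # long run: set flag + quotient/remainder
                  out += bytes([0x01, run // 255, run % 255, data[i]])
              i = j
          return bytes(out)

      # Example: a 300-byte blank scan line followed by three literal bytes.
      print(rle_encode(b"\x00" * 300 + b"ABC").hex())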

  16. Method for compression of data using single pass LZSS and run-length encoding

    DOE Patents [OSTI]

    Berlin, G.J.

    1994-01-01

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.

  17. Method for compression of data using single pass LZSS and run-length encoding

    DOE Patents [OSTI]

    Berlin, Gary J. (Beech Island, SC)

    1997-01-01

    A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.

  18. CNS Running Crew tackles Covenant Health Knoxville Marathon | Y-12 National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Posted: March 6, 2015. Members of the CNS Running Crew show off their bling at Melton Hill Lake. Front row: Toni Roberts, Debbie Ledford, Jennifer Christmas, Jessica Chadwell, Marianne Griffith, Karen Lacey, Jeff Gates and Barbara King. Back row: Robert Eichin, Jeff Gates, Christopher Hammonds and Brian Paul. By Gene Patterson - When the gun sounds on the 2015 Covenant Health

  19. Dynamic aperture evaluation of the proposed lattices for the RHIC 2009 polarized proton run

    SciTech Connect (OSTI)

    Luo, Y.; Bai, M.; Beebe-Wang, J.; Fischer, W.; Montag, C.; Robert-Demolaize, G.; Satogata, T.; Tepikian, S.; Trbojevic, D.

    2009-01-02

    In this article we evaluate the dynamic apertures of the proposed lattices for the coming Relativistic Heavy Ion Collider (RHIC) 2009 polarized proton (pp) 100 GeV and 250 GeV runs. One goal of this study is to find out the appropriate {beta}* for the coming 2009 pp runs. Another goal is to study the effect of second order chromaticity correction in the RHIC pp runs.

  20. DOE Selects Carnegie Mellon to Run Traineeship in Robotics | Department of

    Energy Savers [EERE]

    March 16, 2016 - Media Contact: Darlene Prather, (202) 586-8581, darlene.prather@hq.doe.gov. Washington, D.C. - The Department of Energy (DOE) Office of Environmental Management (EM) has selected Carnegie Mellon University (CMU) in Pittsburgh, PA for award consideration of a cooperative agreement to run a university traineeship in Robotics. The 5-year cooperative

  1. AGR-1 Irradiation Test Final As-Run Report

    SciTech Connect (OSTI)

    Blaise P. Collin

    2012-06-01

    This document presents the as-run analysis of the AGR-1 irradiation experiment. AGR-1 is the first of eight planned irradiations for the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. Funding for this program is provided by the US Department of Energy (DOE) as part of the Next-Generation Nuclear Plant (NGNP) project. The objectives of the AGR-1 experiment are: 1. To gain experience with multi-capsule test train design, fabrication, and operation with the intent to reduce the probability of capsule or test train failure in subsequent irradiation tests. 2. To irradiate fuel produced in conjunction with the AGR fuel process development effort. 3. To provide data that will support the development of an understanding of the relationship between fuel fabrication processes, fuel product properties, and irradiation performance. In order to achieve the test objectives, the AGR-1 experiment was irradiated in the B-10 position of the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL) for a total duration of 620 effective full power days of irradiation. Irradiation began on December 24, 2006 and ended on November 6, 2009, spanning 13 ATR cycles and approximately three calendar years. The test contained six independently controlled and monitored capsules. Each capsule contained 12 compacts of a single type, or variant, of the AGR coated fuel. No fuel particles failed during the AGR-1 irradiation. Final burnup values on a per compact basis ranged from 11.5 to 19.6 %FIMA, while fast fluence values ranged from 2.21 to 4.39×10²⁵ n/m² (E > 0.18 MeV). Temperature results will be reported once the thermal recalculation is complete. Thermocouples performed well, failing at a lower rate than expected. At the end of the irradiation, nine of the originally planned 19 TCs were considered functional. Fission product release-to-birth (R/B) ratios were quite low. In most capsules, R/B values at the end of the irradiation were at or below 10⁻⁷, with only one capsule significantly exceeding this value. A maximum R/B of around 2×10⁻⁷ was reached at the end of the irradiation in Capsule 5. Several shakedown issues were encountered and resolved during the first three cycles. These include the repair of minor gas line leaks; the repair of faulty gas line valves; the need to position moisture monitors in regions of low radiation fields for proper functioning; the enforcement of proper on-line data storage and backup; the need to monitor thermocouple performance; the correction of detector spectral gain shift; and a change in the mass flow rate range of the neon flow controllers.

  2. AGR-2 IRRADIATION TEST FINAL AS-RUN REPORT

    SciTech Connect (OSTI)

    Blaise, Collin

    2014-07-01

    This document presents the as-run analysis of the AGR-2 irradiation experiment. AGR-2 is the second of the planned irradiations for the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. Funding for this program is provided by the U.S. Department of Energy as part of the Very High Temperature Reactor (VHTR) Technical Development Office (TDO) program. The objectives of the AGR-2 experiment are to: (a) Irradiate UCO (uranium oxycarbide) and UO2 (uranium dioxide) fuel produced in a large coater. Fuel attributes are based on results obtained from the AGR-1 test and other project activities. (b) Provide irradiated fuel samples for post-irradiation examination (PIE) and safety testing. (c) Support the development of an understanding of the relationship between fuel fabrication processes, fuel product properties, and irradiation performance. The primary objective of the test was to irradiate both UCO and UO2 TRISO (tri-structural isotropic) fuel produced from prototypic-scale equipment to obtain normal operation and accident condition fuel performance data. The UCO compacts were subjected to a range of burnups and temperatures typical of anticipated prismatic reactor service conditions in three capsules. The test train also includes compacts containing UO2 particles produced independently by the United States, South Africa, and France in three separate capsules. The range of burnups and temperatures in these capsules was typical of anticipated pebble bed reactor service conditions. The results discussed in this report pertain only to U.S.-produced fuel. In order to achieve the test objectives, the AGR-2 experiment was irradiated in the B-12 position of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) for a total irradiation duration of 559.2 effective full power days (EFPD). Irradiation began on June 22, 2010, and ended on October 16, 2013, spanning 12 ATR power cycles and approximately three and a half calendar years. The test contained six independently controlled and monitored capsules. Each U.S. capsule contained 12 compacts of either UCO or UO2 AGR coated fuel. No fuel particles failed during the AGR-2 irradiation. Final burnup values on a per compact basis ranged from 7.26 to 13.15% FIMA (fissions per initial heavy-metal atom) for UCO fuel, and 9.01 to 10.69% FIMA for UO2 fuel, while fast fluence values ranged from 1.94 to 3.47×10²⁵ n/m² (E > 0.18 MeV) for UCO fuel, and from 3.05 to 3.53×10²⁵ n/m² (E > 0.18 MeV) for UO2 fuel. Time-average volume-average (TAVA) temperatures on a capsule basis at the end of irradiation ranged from 987C in Capsule 6 to 1296C in Capsule 2 for UCO, and from 996 to 1062C in UO2-fueled Capsule 3. By the end of the irradiation, all of the installed thermocouples (TCs) had failed. Fission product release-to-birth (R/B) ratios were quite low. In the UCO capsules, R/B values during the first three cycles were below 10⁻⁶, with the exception of the hotter Capsule 2, in which the R/Bs reached 2×10⁻⁶. In the UO2 capsule (Capsule 3), the R/B values during the first three cycles were below 10⁻⁷. R/B values for all following cycles are not reliable due to gas flow and cross-talk issues.

  3. FY:15 Transport Properties of Run-of-Mine Salt Backfill: Unconsolidated to Consolidated.

    SciTech Connect (OSTI)

    Dewers, Thomas; Heath, Jason E.; Leigh, Christi D.

    2015-09-28

    The nature of geologic disposal of nuclear waste in salt formations requires validated and verified two-phase flow models of transport of brine and gas through intact, damaged, and consolidating crushed salt. Such models exist in other realms of subsurface engineering for other lithologic classes (oil and gas, carbon sequestration, etc., for clastics and carbonates) but have never been experimentally validated and parameterized for salt repository scenarios or performance assessment. Models for waste release scenarios in salt backfill require phenomenological expressions for capillary pressure and relative permeability that are expected to change with degree of consolidation, and require experimental measurement to parameterize and validate. This report describes a preliminary assessment of the influence of consolidation (i.e., volume strain or porosity) on capillary entry pressure in two-phase systems using mercury injection capillary pressure (MICP). This is to both determine the potential usefulness of the mercury intrusion porosimetry method and to enable a better experimental design for these tests. Salt consolidation experiments are performed using novel titanium oedometers, or uniaxial compression cells often used in soil mechanics, using sieved run-of-mine salt from the Waste Isolation Pilot Plant (WIPP) as starting material. Twelve tests are performed with various starting amounts of brine pore saturation, with axial stresses up to 6.2 MPa (~900 psi) and temperatures to 90C. This corresponds to UFD Work Package 15SN08180211 milestone FY:15 Transport Properties of Run-of-Mine Salt Backfill: Unconsolidated to Consolidated. Samples exposed to uniaxial compression undergo time-dependent consolidation, or creep, to various degrees. Creep volume strain-time relations obey simple log-time behavior through the range of porosities (~50 to 2% as measured); creep strain rate increases with temperature and applied stress as expected. Mercury porosimetry is used to determine characteristic capillary pressure curves from a series of consolidation tests and shows characteristic saturation-capillary pressure curves that follow the common van Genuchten (1978, 1980) formulation at low stresses. Higher capillary pressure data are suspect due to the large potential for sample damage, including fluid inclusion decrepitation and pore collapse. Data are supportive of the use of the Leverett J function (Leverett, 1941) for scaling characteristic curves at different degrees of consolidation, but better permeability determinations are needed to support this hypothesis. Recommendations for further and refined testing are made with the goal of developing a self-consistent set of constitutive laws for granular salt consolidation and multiphase (brine-air) flow.
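    For reference, the characteristic-curve and scaling relations invoked here take their familiar forms (a sketch of the standard formulas only; no fitted parameter values for consolidating crushed salt are implied):

    \[ S_e \equiv \frac{S - S_r}{1 - S_r} = \left[1 + \left(\alpha P_c\right)^n\right]^{-m}, \qquad m = 1 - \frac{1}{n} \quad \text{(van Genuchten)}, \]

    \[ J(S_w) = \frac{P_c(S_w)}{\sigma\cos\theta}\sqrt{\frac{k}{\phi}} \quad \text{(Leverett)}, \]

    where P_c is capillary pressure, S the wetting-phase saturation and S_r its residual value, σ and θ the interfacial tension and contact angle, and k and φ the permeability and porosity; the Leverett J function is what allows curves measured at different porosities and permeabilities, i.e., degrees of consolidation, to be compared on a common footing.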

  4. Off-momentum dynamic aperture for lattices in the RHIC heavy ion runs

    SciTech Connect (OSTI)

    Luo, Y.; Bai, M.; Blaskiewicz, M.; Gu, X.; Fischer, W.; Marusic, A.; Roser, T.; Tepikian, S.; Zhang, S.

    2012-05-20

    To reduce transverse emittance growth rates from intrabeam scattering in the RHIC heavy ion runs, a lattice with an increased phase advance in the arc FODO cells was adopted in 2008-2011. During these runs, a large beam loss due to limited off-momentum dynamic aperture was observed during longitudinal RF re-bucketing and with transverse cooling. Based on the beam loss observations in the previous ion runs and the calculated off-momentum apertures, we decided to adopt the lattice used before 2008 for the 2012 U-U and Cu-Au runs. The observed beam decay and the measured momentum aperture in the 2012 U-U run are presented.

  5. Integrated starting and running amalgam assembly for an electrodeless fluorescent lamp

    DOE Patents [OSTI]

    Borowiec, Joseph Christopher (Schenectady, NY); Cocoma, John Paul (Clifton Park, NY); Roberts, Victor David (Burnt Hills, NY)

    1998-01-01

    An integrated starting and running amalgam assembly for an electrodeless SEF fluorescent lamp includes a wire mesh amalgam support constructed to jointly optimize positions of a starting amalgam and a running amalgam in the lamp, thereby optimizing mercury vapor pressure in the lamp during both starting and steady-state operation in order to rapidly achieve and maintain high light output. The wire mesh amalgam support is constructed to support the starting amalgam toward one end thereof and the running amalgam toward the other end thereof, and the wire mesh is rolled for friction-fitting within the exhaust tube of the lamp. The positions of the starting and running amalgams on the wire mesh are jointly optimized such that high light output is achieved quickly and maintained, while avoiding any significant reduction in light output between starting and running operation.

  6. RHIC performance for FY2011 Au+Au heavy ion run

    SciTech Connect (OSTI)

    Marr, G.; Ahrens, L.; Bai, M.; Beebe-Wang, J.; Blackler, I.; Blaskiewicz, M.; Brennan, J.M.; Brown, K.A.; Bruno, D.; Butler, J.; Carlson, C.; Connolly, R.; D'Ottavio, T.; Drees, K.A.; Fedotov, A.V.; Fischer, W.; Fu, W.; Gardner, C.J.; Gassner, D.M.; Glenn, J.W.; Gu, X.; Harvey, M.; Hayes, T.; Hoff, L.; Huang, H.; Ingrassia, P.F.; Jamilkowski, J.P.; Kling, N.; Lafky, M.; Laster, J.S.; Liu, C.; Luo, Y.; Mapes, M.; Marusic, A.; Mernick, K.; Michnoff, R.J.; Minty, M.G.; Montag, C.; Morris, J.; Naylor, C.; Nemesure, S.; Polizzo, S.; Ptitsyn, V.; Robert-Demolaize, G.; Roser, T.; Sampson, P.; Sandberg, J.; Schoefer, V.; Schultheiss, C.; Severino, F.; Shrey, T.; Smith, K.; Steski, D.; Tepikian, S.; Thieberger, P.; Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.E.; VanKuik, B.; Wang, G.; Wilinski, M.; Zaltsman, A.; Zeno, K.; Zhang, S.Y.

    2011-09-04

    Following the Fiscal Year (FY) 2010 (Run-10) Relativistic Heavy Ion Collider (RHIC) Au+Au run, RHIC experiment upgrades sought to improve detector capabilities. In turn, accelerator improvements were made to increase the luminosity available to the experiments for this run (Run-11). These improvements included: a redesign of the stochastic cooling systems for improved reliability; a relocation of 'common' RF cavities to alleviate intensity limits due to beam loading; and an improved usage of feedback systems to control orbit, tune and coupling during energy ramps as well as while colliding at top energy. We present an overview of changes to the collider and review its performance with respect to instantaneous and integrated luminosity goals. At the conclusion of the FY 2011 polarized proton run, preparations for the heavy ion run proceeded on April 18, with Au+Au collisions continuing through June 28. Our standard operations at 100 GeV/nucleon beam energy were bracketed by two shorter periods of collisions at lower energies (9.8 and 13.5 GeV/nucleon), continuing a previously established program of low and medium energy runs. Table 1 summarizes our history of heavy ion operations at RHIC.

  7. Di-J/psi Studies, Level 3 Tracking and the D0 Run IIb Upgrade

    SciTech Connect (OSTI)

    Vint, Philip John; /Imperial Coll., London

    2009-10-01

    The D0 detector underwent an upgrade to its silicon vertex detector and triggering systems during the transition from Run IIa to Run IIb to maximize its ability to fully exploit Run II at the Fermilab Tevatron. This thesis describes improvements made to the tracking and vertexing algorithms used by the high level trigger in both Run IIa and Run IIb, as well as a search for resonant di-J/{psi} states using both Run IIa and Run IIb data. Improvements made to the tracking and vertexing algorithms during Run IIa included the optimization of the existing tracking software to reduce overall processing time and the certification and testing of a new software release. Upgrades made to the high level trigger for Run IIb included the development of a new tracking algorithm and the inclusion of the new Layer 0 silicon detector into the existing software. The integration of Layer 0 into the high level trigger has led to an improvement in the overall impact parameter resolution for tracks of {approx}50%. The development of a new parameterization method for finding the error associated with the impact parameter of tracks returned by the high level tracking algorithm, together with the inclusion of Layer 0, has led to improvements in vertex resolution of {approx}4.5 {micro}m. A previous search in the di-J/{psi} channel revealed an unpredicted resonance at {approx}13.7 GeV/c{sup 2}. A confirmation analysis is presented using 2.8 fb{sup -1} of data and two different approaches to cuts. No significant excess is seen in the di-J/{psi} mass spectrum.

  8. Search for techniparticles at D0 Run II

    SciTech Connect (OSTI)

    Feligioni, Lorenzo; /Boston U.

    2006-01-01

    Technicolor theory (TC) accomplishes the electroweak symmetry breaking responsible for the masses of the elementary particles. TC postulates the existence of a new SU(N{sub TC}) gauge theory. As in QCD, the exchange of gauge bosons produces a non-vanishing chiral condensate, which dynamically breaks the SU(N{sub TC}){sub L} x SU(N{sub TC}){sub R} symmetry. This gives rise to N{sub TC}{sup 2}-1 Nambu-Goldstone Bosons. Three of these Goldstone Bosons become the longitudinal components of the W{sup {+-}} and Z, which therefore acquire mass; the remaining ones are new particles (technihadrons) that can be produced and detected at high energy colliders. The Technicolor Straw Man Model (TCSM) is a version of dynamical symmetry breaking with a large number of technifermions and a relatively low value of their masses. One of the processes predicted by the TCSM is q{bar q} {yields} V{sub T} {yields} W{pi}{sub T}, where V{sub T} is the Technicolor equivalent of the QCD vector meson and {pi}{sub T} is the equivalent of the pion. W is the electroweak gauge boson of the Standard Model. This dissertation describes the search for W{pi}{sub T} with the D0 detector, a multi-purpose particle detector located at one of the collision points of the Tevatron accelerator in Batavia, IL. The final state considered for this thesis is a W boson that decays to an electron and a neutrino plus a {pi}{sub T} that decays into b{bar c} or b{bar b}, depending on the charge of the initial technivector meson produced. In the D0 detector this process will appear as a narrow cluster of energy deposits in the electromagnetic calorimeter with an associated track reconstructed in the tracking detector. The undetected neutrino from the decay of the W boson will be seen as missing momentum. The fragmentation of the quarks from the decay of the {pi}{sub T} will produce two jets of collimated particles. Events where a b-quark is produced are selected by requiring at least one jet to be associated with a secondary vertex of interaction produced by the decay of a B-meson (b-tagging). In the absence of an excess over the Standard Model prediction for the final state considered in this analysis, we compute a 95% Confidence Level upper limit on the techniparticle production cross section for the V{sub T} mass range: 190 GeV/c{sup 2} {le} m(V{sub T}) {le} 220 GeV/c{sup 2}.

  9. AGR 3/4 Irradiation Test Final As Run Report

    SciTech Connect (OSTI)

    Collin, Blaise P.

    2015-06-01

    Several fuel and material irradiation experiments have been planned for the Idaho National Laboratory Advanced Reactor Technologies Technology Development Office Advanced Gas Reactor Fuel Development and Qualification Program (referred to as the INL ART TDO/AGR fuel program hereafter), which supports the development and qualification of tristructural-isotropic (TRISO) coated particle fuel for use in HTGRs. The goals of these experiments are to provide irradiation performance data to support fuel process development, qualify fuel for normal operating conditions, support development and validation of fuel performance and fission product transport models and codes, and provide irradiated fuel and materials for post irradiation examination and safety testing (INL 05/2015). AGR-3/4 combined the third and fourth in this series of planned experiments to test TRISO coated low enriched uranium (LEU) oxycarbide fuel. This combined experiment was intended to support the refinement of fission product transport models and to assess the effects of sweep gas impurities on fuel performance and fission product transport by irradiating designed-to-fail fuel particles and by measuring subsequent fission metal transport in fuel-compact matrix material and fuel-element graphite. The AGR-3/4 fuel test was successful in irradiating the fuel compacts to the burnup and fast fluence target ranges, considering the experiment was terminated short of its initial 400 EFPD target (Collin 2015). Out of the 48 AGR-3/4 compacts, 42 achieved the specified burnup of at least 6% fissions per initial heavy-metal atom (FIMA). Three capsules had a maximum fuel compact average burnup < 10% FIMA, one more than originally specified, and the maximum fuel compact average burnup was <19% FIMA for the remaining capsules, as specified. Fast neutron fluence fell in the expected range of 1.0 to 5.5 x 10{sup 25} n/m{sup 2} (E > 0.18 MeV) for all compacts. In addition, the AGR-3/4 experiment was globally successful in keeping the temperature in the twelve capsules relatively flat in a range of temperatures suitable for the measurement of fission product diffusion in compact matrix and structural graphite materials.

  10. Pilot Plant Completes Two 1,000-Hour Ethanol Performance Runs | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy. Pilot Plant Completes Two 1,000-Hour Ethanol Performance Runs. October 19, 2015 - 12:38pm. ICM Inc. announced successful completion of two 1,000-hour performance runs of its patent-pending Generation 2.0 Co-Located Cellulosic Ethanol process at its cellulosic ethanol pilot plant in St. Joseph, Missouri. This is an important step toward the commercialization of cellulosic ethanol from switchgrass and energy sorghum.

  11. Two-loop ultrasoft running of the O(v{sup 2}) QCD quark potentials (Journal

    Office of Scientific and Technical Information (OSTI)

    Article) | SciTech Connect. Title: Two-loop ultrasoft running of the O(v{sup 2}) QCD quark potentials. The two-loop ultrasoft contributions to the next-to-leading logarithmic (NLL) running of the QCD potentials at order v{sup 2} are determined. The results represent an important step towards the next-to-next-to-leading logarithmic (NNLL) description of heavy quark pair production and

  12. EERE Success Story-Washington: State Ferries Run Cleaner With Biodiesel |

    Office of Environmental Management (EM)

    Department of Energy. EERE Success Story-Washington: State Ferries Run Cleaner With Biodiesel. April 18, 2013 - 12:00am. Washington State Ferries, owned and operated by the Washington State Department of Transportation, is the largest ferry service in the United States and the third largest in the world. Thanks in part to an American Recovery and Reinvestment Act of 2009 (ARRA) investment from EERE, the ferries now run on a blended biodiesel fuel

  13. ARE PULSING SOLITARY WAVES RUNNING INSIDE THE SUN?

    SciTech Connect (OSTI)

    Wolff, Charles L.

    2012-09-10

    A precise sequence of frequencies, detected four independent ways, is interpreted as a system of solitary waves below the Sun's convective envelope. Six future observational or theoretical tests of this idea are suggested. Wave properties (rotation rates, radial energy distribution, nuclear excitation strength) follow from conventional dynamics of global oscillation modes after assuming a localized nuclear term strong enough to perturb and hold mode longitudes into alignments that form 'families'. To facilitate future tests, more details are derived for a system of two dozen solitary waves 2 {<=} l {<=} 25. Wave excitation by {sup 3}He and {sup 14}C burning is complex. It spikes by factors M{sub 1} {<=} 10{sup 3} when many waves overlap in longitude but its long-time average is M{sub 2} {<=} 10. Including mixing can raise overall excitation to {approx}50 times that in a standard solar model. These spikes cause tiny phase shifts that tend to pull wave rotation rates toward their ideal values {proportional_to}[l(l + 1)]{sup -1}. A system like this would generate some extra nuclear energy in two spots at low latitude on opposite sides of the Sun. Each covers about 20 degrees of longitude. Above a certain wave amplitude, the system starts giving distinctly more nuclear excitation to some waves (e.g., l = 9, 14, and 20) than to neighboring l values. The prominence of l = 20 has already been reported. This transition begins at temperature amplitudes {Delta}T/T = 0.03 in the solar core for a typical family of modes, which corresponds to {delta}T/T {approx} 0.001 for one of its many component oscillation modes.

  14. Queuing and Running on BG/Q Systems FAQ | Argonne Leadership...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Queuing and Running on BG/Q Systems FAQ Contents Is there a limit on stack size? My job had empty stdout, and the stderr looks like it died immediately after it started. What...

  15. As-Run Physics Analysis for the UCSB-1 Experiment in the Advanced Test Reactor

    SciTech Connect (OSTI)

    Nielsen, Joseph Wayne

    2015-09-01

    The University of California Santa Barbara (UCSB) -1 experiment was irradiated in the A-10 position of the ATR. The experiment was irradiated during cycles 145A, 145B, 146A, and 146B. Capsule 6A was removed from the test train following Cycle 145A and replaced with Capsule 6B. This report documents the as-run physics analysis in support of Post-Irradiation Examination (PIE) of the test. This report documents the as-run fluence and displacements per atom (DPA) for each capsule of the experiment based on as-run operating history of the ATR. Average as-run heating rates for each capsule are also presented in this report to support the thermal analysis.

  16. Pilot Plant Completes Two 1,000-Hour Ethanol Performance Runs...

    Broader source: Energy.gov (indexed) [DOE]

    1,000-hour performance runs of its patent-pending Generation 2.0 Co-Located Cellulosic Ethanol process at its cellulosic ethanol pilot plant in St. Joseph, Missouri. This is an...

  17. Pilot Plant Completes Two 1,000-Hour Ethanol Performance Runs

    Broader source: Energy.gov [DOE]

    ICM Inc. announced successful completion of two 1,000-hour performance runs of its patent-pending Generation 2.0 Co-Located Cellulosic Ethanol process at its cellulosic ethanol pilot plant in St....

  18. LCLS-schedul_run-II_10_05_6-detail.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Run II Detailed Schedule, May 6 - September 13, 2010. Spreadsheet column headings: Thurs, Fri, Sat, Sun, Mon, Tues, Wed; BL; Prop; Spokesperson; PI; Planned Activity/Experiment Title; POC; AD; Program Deputy; Week 1, 6-May...

  19. Princeton and PPPL projects selected to run on super-powerful...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Princeton and PPPL projects selected to run on super-powerful computer to be delivered to Oak Ridge Leadership Computing Facility. By John Greenwald, June 1, 2015.

  20. RHIC Performance as a 100 GeV Polarized Proton Collider in Run-9

    SciTech Connect (OSTI)

    Montag, C.; Ahrens, L.; Bai, M.; Beebe-Wang, J.; Blaskiewicz, M.; Brennan, J.M.; Brown, K.A.; Bruno, D.; Connolly, R.; DOttavio, T.; Drees, A.; Fedotov, A.V.; Fischer, W.; Ganetis, G.; Gardner, C.; Glenn, J.; Hahn, H.; Harvey, M.; Hayes, T.; Huang, H.; Ingrassia, P.; Jamilkowski, J.; Kayran, D.; Kewisch, J.; Lee, R.C.; Luccio, A.U.; Luo, Y.; MacKay, W.W.; Makdisi, Y.; Malitsky, N.; Marr, G.; Marusic, A.; Menga, P.M.; Michnoff, R.; Minty, M.; Morris, J.; Oerter, B.; Pilat, F.; Pile, P.; Pozdeyev, E.; Ptitsyn, V.; Robert-Demolaize, G.; Roser, T.; Russo, T.; Satogata, T.; Schoefer, V.; Schultheiss, C.; Severino, F.; Sivertz, M.; Smith, K.; Tepikian, S.; Thieberger, P.; Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.; Zaltsman, A.; Zelenski, A.; Zeno, K.; Zhang, S.Y.

    2010-05-23

    During the second half of Run-9, the Relativistic Heavy Ion Collider (RHIC) provided polarized proton collisions at two interaction points. The spin orientation of both beams at these collision points was controlled by helical spin rotators, and physics data were taken with different orientations of the beam polarization. Recent developments and improvements will be presented, as well as luminosity and polarization performance achieved during Run-9.

  1. New Jersey: Atlantic City Jitneys Running on Natural Gas | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy. New Jersey: Atlantic City Jitneys Running on Natural Gas. November 6, 2013 - 12:45pm. In 2009, the New Jersey Clean Cities Coalition was one of 25 recipients to receive funding from the EERE Clean Cities' Alternative Fuel and Advanced Technology Vehicles Pilot Program. The approximately $15 million in funding allowed the city to purchase nearly 300 compressed natural gas vehicles, including 190 Atlantic City

  2. PNNL's Lab Homes Run Energy-Efficient Technologies Through the Paces |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy. PNNL's Lab Homes Run Energy-Efficient Technologies Through the Paces. November 14, 2013 - 10:10am. At the Energy Department's Pacific Northwest National Laboratory (PNNL), researchers are using two modular homes to test energy-efficient products and calculate their energy savings. Researchers test new technologies in the Experimental home (pictured above), while the Baseline home (not pictured)

  3. Illinois: Ozinga Concrete Runs on Natural Gas and Opens Private Station |

    Office of Environmental Management (EM)

    Department of Energy. Illinois: Ozinga Concrete Runs on Natural Gas and Opens Private Station. November 6, 2013 - 12:00am. In 2012, Ozinga Brothers Concrete opened Chicago's first privately owned compressed natural gas fueling station to local businesses and government agencies. The station is specifically designed for medium and heavy-use trucks and buses, but can handle light-duty vehicles and can fill more than

  4. New Jersey: Atlantic City Jitneys Running on Natural Gas | Department of

    Office of Environmental Management (EM)

    Energy. New Jersey: Atlantic City Jitneys Running on Natural Gas. November 6, 2013 - 12:00am. In 2009, the New Jersey Clean Cities Coalition was one of 25 recipients to receive funding from the EERE Clean Cities' Alternative Fuel and Advanced Technology Vehicles Pilot Program. The approximately $15 million in funding allowed the city to purchase nearly 300 compressed natural gas vehicles, including 190 Atlantic City

  5. EERE Success Story-New Jersey: Atlantic City Jitneys Running on Natural

    Office of Environmental Management (EM)

    Gas | Department of Energy. EERE Success Story-New Jersey: Atlantic City Jitneys Running on Natural Gas. November 6, 2013 - 12:45pm. In 2009, the New Jersey Clean Cities Coalition was one of 25 recipients to receive funding from the EERE Clean Cities' Alternative Fuel and Advanced Technology Vehicles Pilot Program. The approximately $15 million in funding allowed the city to purchase nearly 300 compressed natural gas vehicles, including 190

  6. Models and Datasets

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    iteration by iteration. RevSim is an Excel 2010 based model. Much of the logic is VBA code (Visual Basic for Applications); the user does not need to know VBA to run the...

  7. Running jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    has 8 GB (8192 MB) of physical memory, but not all of that memory is available to user programs. A user can change MPI buffer sizes by setting certain MPICH environment variables....

  8. Antineutrino Running

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presentation slides (NuFact05, 22 June 2005; Jocelyn Monroe, Columbia University), including images of the MiniBooNE horn. MiniBooNE overview: 8 GeV kinetic-energy protons from the Fermilab Booster Accelerator strike a 1.7 interaction-length beryllium target (HARP results coming soon); a horn focuses positive-sign mesons (π and K) and can reverse polarity for an antineutrino beam; a 50 m decay region and 490 m dirt berm yield a >99% pure muon-neutrino flavor beam incident on an 800 ton CH2 detector with 1520 PMTs (1280 plus 240 in the veto).

  9. Run Rules

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for the DOE's NNSA and Office of Science platform roadmaps. As such, application benchmarking and performance analysis will play a critical role in evaluation of the Offeror's...

  10. Running Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Antineutrino Running

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview: 8 GeV kinetic-energy protons from the Fermilab Booster Accelerator strike a 1.7 interaction-length beryllium target (HARP results coming soon); a horn focuses positive-sign mesons (π and K) and can reverse polarity (anti-...

  12. Have We Run Out of Oil Yet? Oil Peaking Analysis from an Optimist's Perspective

    SciTech Connect (OSTI)

    Greene, David L; Hopson, Dr Janet L; Li, Jia

    2005-01-01

    This study addresses several questions concerning the peaking of conventional oil production from an optimist's perspective. Is the oil peak imminent? What is the range of uncertainty? What are the key determining factors? Will a transition to unconventional oil undermine or strengthen OPEC's influence over world oil markets? These issues are explored using a model combining alternative world energy scenarios with an accounting of resource depletion and a market-based simulation of transition to unconventional oil resources. No political or environmental constraints are allowed to hinder oil production, geological constraints on the rates at which oil can be produced are not represented, and when USGS resource estimates are used, more than the mean estimate of ultimately recoverable resources is assumed to exist. The issue is framed not as a question of "running out" of conventional oil, but in terms of the timing and rate of transition from conventional to unconventional oil resources. Unconventional oil is chosen because production from Venezuela's heavy-oil fields and Canada's Athabascan oil sands is already underway on a significant scale and unconventional oil is most consistent with the existing infrastructure for producing, refining, distributing and consuming petroleum. However, natural gas or even coal might also prove to be economical sources of liquid hydrocarbon fuels. These results indicate a high probability that production of conventional oil from outside of the Middle East region will peak, or that the rate of increase of production will become highly constrained before 2025. If world consumption of hydrocarbon fuels is to continue growing, massive development of unconventional resources will be required. While there are grounds for pessimism and optimism, it is certainly not too soon for extensive, detailed analysis of transitions to alternative energy sources.

  13. RHIC PERFORMANCE DURING THE FY10 200 GeV Au+Au HEAVY ION RUN

    SciTech Connect (OSTI)

    Brown, K.A.; Ahrens, L.; Bai, M.; Beebe-Wang, J.; Blaskiewicz, M.; Brennan, J.; Bruno, D.; Carlson, C.; Connolly, R.; de Maria, R.; DOttavio, T.; Drees, A.; Fischer, W.; Fu, W.; Gardner, C.; Gassner, D.; Glenn, J.W.; Hao, Y.; Harvey, M.; Hayes, T.; Hoff, L.; Huang, H.; Laster, J.; Lee, R.; Litvinenko, V.; Luo, Y.; MacKay, W.; Marr, G.; Marusic, A.; Mernick, K.; Michnoff, R.; Minty, M.; Montag, C.; Morris, J.; Nemesure, S.; Oerter, B.; Pilat, F.; Ptitsyn, V.; Robert-Demolaize, G.; Roser, T.; Russo, T.; Sampson, P.; Sandberg, J.; Satogata, T.; Severino, F.; Schoefer, V.; Schultheiss, C.; Smith, K.; Steski, D.; Tepikian, S.; Theisen, C.; Thieberger, P.; Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.; Wang, G.; Wilinski, M.; Zaltsman, A.; Zeno, K.; Zhang, S.Y.

    2010-05-23

    Since the last successful RHIC Au+Au run in 2007 (Run-7), the RHIC experiments have made numerous detector improvements and upgrades. In order to benefit from the enhanced detector capabilities and to increase the yield of rare events in the acquired heavy ion data, a significant increase in luminosity is essential. In Run-7 RHIC achieved an average store luminosity of 12 x 10{sup 26} cm{sup -2} s{sup -1} by operating with 103 bunches (out of 111 possible), and by squeezing to {beta}* = 0.85 m. This year, Run-10, we achieved an average store luminosity of 20 x 10{sup 26} cm{sup -2} s{sup -1}, which put us an order of magnitude above the RHIC design luminosity. To reach these luminosity levels we decreased {beta}* to 0.75 m, operated with 111 bunches per ring, and reduced longitudinal and transverse emittances by means of bunched-beam stochastic cooling. In addition we introduced a lattice to suppress intra-beam scattering (IBS) in both RHIC rings, upgraded the RF control system, and separated transition crossing times in the two rings. We present an overview of the changes and the results of Run-10 performance.
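
    The scaling exploited above can be seen in the standard expression for the luminosity of two round Gaussian beams colliding head-on; this is quoted in generic notation for orientation and is not taken from the record (hourglass and crossing-angle factors are ignored):

      L = f_{rev} N_b N_1 N_2 / (4\pi \sigma_x^* \sigma_y^*),   with   \sigma^{*2} = \epsilon_n \beta^* / \gamma   for round beams,

    so L \propto N_1 N_2 \gamma / (\epsilon_n \beta^*): decreasing \beta^* from 0.85 m to 0.75 m and shrinking the emittance \epsilon_n with stochastic cooling both raise the luminosity directly.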

  14. Longitudinal emittance measurements in the Booster and AGS during the 2014 RHIC gold run

    SciTech Connect (OSTI)

    Zeno, K.

    2014-08-18

    This note describes longitudinal emittance measurements that were made in the Booster and AGS during the 2014 RHIC Gold run. It also contains an overview of the longitudinal aspects of their setup during this run. Each bunch intended for RHIC is composed of beam from 4 Booster cycles, and two such bunches are produced per AGS cycle. For each of the 8 Booster cycles required to produce the 2 bunches in the AGS, a beam pulse from EVIS is injected into the Booster and captured in four h=4 buckets. Those bunches are then accelerated to a porch where they are merged into 2 bunches and then into 1 bunch.

  15. SEE HOW WE RUN...At WIPP, We Really Mean Business

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SEE HOW WE RUN At WIPP, We Really Mean Business CARLSBAD, N.M., December 10, 2001 - Early this year, Dr. Inés Triay, manager of the U.S. Department of Energy's (DOE) Carlsbad Field Office, challenged Waste Isolation Pilot Plant (WIPP) employees to operate the nation's first nuclear waste repository like a business. So, what do we mean when we say we run WIPP "like a business?" Is it the organizational charts? Production schedules? It's more about the innovators who are WIPP employees.

  16. RHIC POWER SUPPLIES-FAILURE STATISTICS FOR RUNS 4, 5, AND 6

    SciTech Connect (OSTI)

    BRUNO,D.; GANETIS, G.; SANDBERG, J.; LOUIE, W.; HEPPNER, G.; SCHULTHEISS, C.

    2007-06-25

    The two rings in the Relativistic Heavy Ion Collider (RHIC) require a total of 933 power supplies to supply current to highly inductive superconducting magnets. Failure statistics for the RHIC power supplies will be presented for the last three RHIC runs; the failures considered are those associated with the CEPS group's responsibilities. The failures of the power supplies will be analyzed. The statistics associated with the power supply failures will be presented. Comparisons of the failure statistics for the last three RHIC runs will be shown. Improvements that have increased power supply availability will be discussed.

  17. GridRun: A lightweight packaging and execution environment forcompact, multi-architecture binaries

    SciTech Connect (OSTI)

    Shalf, John; Goodale, Tom

    2004-02-01

    GridRun offers a very simple set of tools for creating and executing multi-platform binary executables. These 'fat binaries' archive native machine code into compact packages that are typically a fraction of the size of the original binary images they store, enabling efficient staging of executables for heterogeneous parallel jobs. GridRun interoperates with existing distributed job launchers/managers like Condor and the Globus GRAM to greatly simplify the logic required to launch native binary applications in distributed heterogeneous environments.
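
    The fat-binary idea can be illustrated with a minimal, hypothetical sketch (the package layout, file names, and the select_and_run helper below are illustrative assumptions, not GridRun's actual interface): a package directory holds one native executable per platform, and a small launcher picks the one matching the host and executes it.

      import os
      import platform
      import sys
      from pathlib import Path

      def select_and_run(package_dir: str, args: list[str]) -> None:
          """Pick the binary matching this host's OS/architecture and exec it.

          Assumes a hypothetical package layout such as:
              package_dir/linux-x86_64/app
              package_dir/darwin-arm64/app
          """
          tag = f"{platform.system().lower()}-{platform.machine().lower()}"
          candidate = Path(package_dir) / tag / "app"
          if not candidate.is_file():
              sys.exit(f"no binary packaged for platform {tag!r} in {package_dir}")
          # Replace the current process with the selected native executable.
          os.execv(str(candidate), [str(candidate)] + args)

      if __name__ == "__main__":
          select_and_run(sys.argv[1], sys.argv[2:])

    A real tool additionally has to archive and compress the per-platform binaries and cooperate with job launchers, but the dispatch step reduces to this platform-keyed lookup.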

  18. Low-energy run of Fermilab Electron Cooler's beam generation system

    SciTech Connect (OSTI)

    Prost, Lionel; Shemyakin, Alexander; Fedotov, Alexei; Kewisch, Jorg; /Brookhaven

    2010-08-01

    As a part of a feasibility study of using the Fermilab Electron Cooler for a low-energy Relativistic Heavy Ion Collider (RHIC) run at Brookhaven National Laboratory (BNL), the cooler operation at 1.6 MeV electron beam energy was tested in a short beam line configuration. The main result of the study is that the cooler beam generation system is suitable for BNL needs. In striking contrast to operation with the 4.3 MeV beam, no unprovoked beam recirculation interruptions were observed.

  19. Run 2 Upgrades to the CMS Level-1 Calorimeter Trigger (Conference) |

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect. Title: Run 2 Upgrades to the CMS Level-1 Calorimeter Trigger. Authors: Kreis, B.; et al. Publication Date: 2015-11-18. OSTI Identifier: 1230049. Report Number(s): FERMILAB-CONF-15-503-CMS. arXiv eprint number: arXiv:1511.05855. DOE Contract Number: AC02-07CH11359. Resource Type: Conference. Resource Relation: Journal Name: JINST; Conference: Topical Workshop on Electronics for Particle Physics.

  20. Builds in U.S. natural gas storage running above five-year average

    Gasoline and Diesel Fuel Update (EIA)

    Builds in U.S. natural gas storage running above five-year average The amount of natural gas put into underground storage since the beginning of the so-called "injection season" in April has been above the five-year average by a wide margin. In its new forecast, the U.S. Energy Information Administration said natural gas inventories, which are running more than 50% above year ago levels, are on track to reach almost 4 trillion cubic feet by the end of October which marks the start of

  1. Tips for Running an Air Conditioner Without Breaking the Bank | Department

    Energy Savers [EERE]

    of Energy. Tips for Running an Air Conditioner Without Breaking the Bank. July 22, 2014 - 3:15pm. Cooling your home doesn't have to break the bank; with these tips you can save money and stay comfortable. (Photo courtesy of ©iStockphoto.com/galinast) Elizabeth Spencer

  2. ARM XDC Datastreams

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NCEPGFS: National Centers for Environmental Prediction Global Forecast System Point Reyes CA, USA; Mobile Facility SONDE: Balloon-Borne Sounding System ECMWFDIAG: European...

  3. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sandia Labs Releases New Version of PVLib Toolbox (Modeling, News, Photovoltaic, Solar). Sandia has released version 1.3 of PVLib, its widely used Matlab toolbox for modeling photovoltaic (PV) power systems. The version 1.3 release includes the following added functions: functions to estimate parameters for popular PV module models, including PVsyst and the CEC '5 parameter' model; a new model of the effects of solar

  4. Operation of the DC current transformer intensity monitors at FNAL during run II

    SciTech Connect (OSTI)

    Crisp, J.; Fellenz, B.; Heikkinen, D.; Ibrahim, M.A.; Meyer, T.; Vogel, G.; /Fermilab

    2012-01-01

    Circulating beam intensity measurements at FNAL are provided by five DC current transformers (DCCT), one per machine. With the exception of the DCCT in the Recycler, all DCCT systems were designed and built at FNAL. This paper presents an overview of both DCCT systems, including the sensor, the electronics, and the front-end instrumentation software, as well as their performance during Run II.

  5. A Simplified Method for Implementing Run-Time Polymorphism in Fortran95

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Decyk, Viktor K.; Norton, Charles D.

    2004-01-01

    This paper discusses a simplified technique for software emulation of inheritance and run-time polymorphism in Fortran95. This technique involves retaining the same type throughout an inheritance hierarchy, so that only functions which are modified in a derived class need to be implemented.

  6. Negative running of the spectral index, hemispherical asymmetry and the consistency of Planck with large r

    SciTech Connect (OSTI)

    McDonald, John

    2014-11-01

    Planck favours a negative running of the spectral index, with the likelihood being dominated by low multipoles l ≲ 50 and no preference for running at higher l. A negative running is also necessary for the 2σ Planck upper bound on the tensor-to-scalar ratio r to be consistent with values significantly larger than 0.1. Planck has also observed a hemispherical asymmetry of the CMB power spectrum, again mostly at low multipoles. Here we consider whether the physics responsible for the hemispherical asymmetry could also account for the negative running of the spectral index and the consistency of Planck with a large value of r. A negative running of the spectral index can be generated if the hemispherical asymmetry is due to a scale- and space-dependent modulation which suppresses the CMB power spectrum at low multipoles. We show that the observed hemispherical asymmetry at low l can be generated while satisfying constraints on the asymmetry at higher l and generating a negative running of the right magnitude to account for the Planck observation and to allow Planck to be consistent with a large value of r.
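
    For context, the running referred to here is conventionally defined through the standard parameterization of the primordial power spectrum (generic notation, not this paper's specific model):

      P_R(k) = A_s (k/k_*)^{ n_s - 1 + (1/2) \alpha_s \ln(k/k_*) },   with   \alpha_s \equiv d n_s / d\ln k,

    so a negative \alpha_s suppresses power on scales far from the pivot k_*, which is the sense in which a modulation that suppresses the low-l CMB power can mimic negative running.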

  7. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Engine Combustion/Modeling. Modelers at the CRF are developing high-fidelity simulation tools for engine combustion and detailed micro-kinetic, surface chemistry modeling tools for catalyst-based exhaust aftertreatment systems. The engine combustion modeling is focused on developing Large Eddy Simulation (LES). LES is being used with closely coupled key target experiments to reveal new understanding of the fundamental processes involved in engine

  8. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reacting Flow/Modeling. Turbulence models typically involve coarse-graining and/or time averaging. Though adequate for modeling mean transport, this approach does not address turbulence-microphysics interactions that are important in combustion processes. Subgrid models are developed to represent these interactions. The CRF has developed a fundamentally different representation of these interactions that does not involve distinct coarse-grained and subgrid

  9. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Widespread Hydrogen Fueling Infrastructure Is the Goal of H2FIRST Project. Categories: Capabilities; Center for Infrastructure Research and Innovation (CIRI); Computational Modeling & Simulation; Energy; Energy Storage; Energy Storage Systems; Facilities; Infrastructure Security; Materials Science; Modeling; Modeling & Analysis; News; News & Events; Partnership; Research & Capabilities; Systems Analysis; Systems Engineering; Transportation Energy.

  10. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sandia Labs releases wavelet variability model (WVM) (Modeling, News, Photovoltaic, Solar). When a single solar photovoltaic (PV) module is in full sunlight, then is shaded by a cloud, and is back in full sunlight in a matter of seconds, a sharp dip then increase in power output will result. However, over an entire PV plant, clouds will often uncover some modules even as they cover others, [...] By Andrea

  11. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New Project Is the ACME of Computer Science to Address Climate Change (Analysis, Climate, Global Climate & Energy, Modeling, Modeling & Analysis, News, News & Events, Partnership). Sandia high-performance computing (HPC) researchers are working with DOE and 14 other national laboratories and institutions to develop and apply the most complete climate and Earth system model, to address the most challenging and

  12. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A rail tank car of the type used to transport crude oil across North America. Recent incidents have raised concerns about the safety of this practice, which the DOE-DOT-sponsored team is investigating. (Photo credit: Harvey Henkelmann.) Expansion of DOE-DOT Tight Oil Research Work (Capabilities, Carbon Capture & Storage, Carbon Storage, Energy, Energy Assurance, Fuel Options, Infrastructure Assurance, Infrastructure Security, Modeling).

  13. Running coupling constant from lattice studies of gluon and ghost propagators

    SciTech Connect (OSTI)

    Cucchieri, A.; Mendes, T.

    2004-12-02

    We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the {beta} function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical error and hyper-cubic effects are very small. Our present result for {Lambda}{sub MS} is 200{sub -40}{sup +60} MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
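
    As a point of reference for the three- and four-loop fits described above, the leading one-loop behaviour of the running coupling in a pure SU(N) gauge theory (no quark flavours) takes the familiar form; this sketch in standard notation is for orientation only and is not the fitting formula used in the paper:

      \alpha(q^2) = 1 / [ b_0 \ln(q^2/\Lambda^2) ],   b_0 = 11 N / (12\pi),

    so for SU(2), b_0 = 11/(6\pi); the higher-loop terms and the propagator anomalous dimensions modify how the lattice data approach this asymptotic form.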

  14. Operation of the intensity monitors in beam transport lines at Fermilab during Run II¹

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Crisp, J.; Fellenz, B.; Fitzgerald, J.; Heikkinen, D.; Ibrahim, M. A.

    2011-10-06

    The intensity of charged particle beams at Fermilab must be kept within pre-determined safety and operational envelopes, in part by assuring that all beam, to within a few percent, has been transported from any source to its destination. Beam intensity monitors with toroidal pickups provide such beam intensity measurements in the transport lines between accelerators at FNAL. With Run II, much effort was made to continually improve the resolution and accuracy of the system.

  15. Princeton and PPPL projects selected to run on super-powerful computer to

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    be delivered to Oak Ridge Leadership Computing Facility | Princeton Plasma Physics Lab. By John Greenwald, June 1, 2015. Computer simulation and visualization of edge turbulence in a fusion plasma. (Simulation: Seung-Hoe Ku/PPPL. Visualization: David Pugmire/ORNL)

  16. Princeton and PPPL projects selected to run on super-powerful computer to

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    be delivered to Oak Ridge Leadership Computing Facility | Princeton Plasma Physics Lab. By John Greenwald, June 1, 2015. Computer simulation and visualization of edge turbulence in a fusion plasma. (Simulation: Seung-Hoe Ku/PPPL. Visualization: David Pugmire/ORNL)

  17. Experimental "Wind to Hydrogen" System Up and Running - News Releases |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NREL Experimental "Wind to Hydrogen" System Up and Running December 14, 2006 Xcel Energy (NYSE:XEL) and the U.S. Department of Energy's National Renewable Energy Laboratory today unveiled a unique facility that uses electricity from wind turbines to produce and store pure hydrogen, offering what may become an important new template for future energy production. Several dozen journalists, environmental leaders, government officials and Xcel Energy managers today toured the joint

  18. Apex Gold discussion fosters international cooperation in run-up to 2016

    National Nuclear Security Administration (NNSA)

    Nuclear Security Summit | National Nuclear Security Administration.

  19. Accelerating Innovation: PowerAmerica Is Up and Running | Department of

    Energy Savers [EERE]

    Energy 2, 2015 - 2:00pm. Accelerating Innovation: PowerAmerica Is Up and Running -Rob Ivester, Deputy Director, Advanced Manufacturing Office. The excitement and drive to deliver was evident to me last week when I joined nearly 100 PowerAmerica members for their kick-off meeting at NC State University in Raleigh, North Carolina. PowerAmerica, also called the Next Generation Power Electronics Manufacturing Innovation Institute, will develop advanced manufacturing processes and work to

  20. Forced running exercise attenuates hippocampal neurogenesis impairment and the neurocognitive deficits induced by whole-brain irradiation via the BDNF-mediated pathway

    SciTech Connect (OSTI)

    Ji, Jian-feng; Ji, Sheng-jun; Sun, Rui; Li, Kun; Zhang, Yuan; Zhang, Li-yuan; Tian, Ye

    2014-01-10

    Highlights: Forced exercise can ameliorate WBI-induced cognitive impairment in our rat model. Mature BDNF plays an important role in the effects of forced exercise. Exercise may be a possible treatment of radiation-induced cognitive impairment. -- Abstract: Cranial radiotherapy induces progressive and debilitating cognitive deficits, particularly in long-term cancer survivors, which may in part be caused by the reduction of hippocampal neurogenesis. Previous studies suggested that voluntary exercise can reduce the cognitive impairment caused by radiation therapy. However, there is no study on the effect of forced wheel exercise, and little is known about the molecular mechanisms mediating the effect of exercise. In the present study, we investigated whether forced running exercise after irradiation protects against radiation-induced cognitive impairment. Sixty-four male Sprague-Dawley rats received a single dose of 20 Gy or sham whole-brain irradiation (WBI), and behavior was evaluated using the open field test and Morris water maze at 2 months after irradiation. Half of the rats underwent a 3-week forced running exercise program before behavioral testing. Immunofluorescence was used to evaluate the changes in hippocampal neurogenesis, and Western blotting was used to assess changes in the levels of mature brain-derived neurotrophic factor (BDNF), phosphorylated tyrosine receptor kinase B (TrkB) receptor, protein kinase B (Akt), extracellular signal-regulated kinase (ERK), calcium-calmodulin dependent kinase (CaMKII), and cAMP-calcium response element binding protein (CREB) in the BDNF-pCREB signaling pathway. We found that forced running exercise significantly prevented radiation-induced cognitive deficits, ameliorated the impairment of hippocampal neurogenesis and attenuated the down-regulation of these proteins. Moreover, exercise also increased behavioral performance, hippocampal neurogenesis and BDNF-pCREB signaling in the non-irradiated group. These results suggest that forced running exercise offers a potentially effective treatment for radiation-induced cognitive deficits.

  1. Smart Grid Technology Interactive Model | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Smart Grid Technology Interactive Model. Description: As our attention turns to new cars that run partially or completely on electricity, how can we redesign our electric grid...

  2. Making the most of the relic density for dark matter searches at the LHC 14 TeV Run

    SciTech Connect (OSTI)

    Busoni, Giorgio; Simone, Andrea De; Jacques, Thomas; Morgante, Enrico; Riotto, Antonio

    2015-03-12

    As the LHC continues to search for new weakly interacting particles, it is important to remember that the search is strongly motivated by the existence of dark matter. In view of a possible positive signal, it is essential to ask whether the newly discovered weakly interacting particle can be assigned the label “dark matter”. Within a given set of simplified models and modest working assumptions, we reinterpret the relic abundance bound as a relic abundance range, and compare the parameter space yielding the correct relic abundance with projections of the Run II exclusion regions. Assuming that dark matter is within the reach of the LHC, we also make the comparison with the potential 5σ discovery regions. Reversing the logic, relic density calculations can be used to optimize dark matter searches by motivating choices of parameters where the LHC can probe most deeply into the dark matter parameter space. In the event that DM is seen outside of the region giving the correct relic abundance, we will learn that either thermal relic DM is ruled out in that model, or the DM-quark coupling is suppressed relative to the DM coupling strength to other SM particles.
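
    The relic-abundance requirement invoked above is, for a standard thermal relic, approximately an inverse relation between today's abundance and the thermally averaged annihilation cross section; this is an order-of-magnitude sketch in generic notation, not the paper's detailed calculation:

      \Omega_{DM} h^2 \approx (3 x 10^{-27} cm^3 s^{-1}) / \langle\sigma v\rangle,

    so matching the observed \Omega_{DM} h^2 \approx 0.12 points to \langle\sigma v\rangle of order 3 x 10^{-26} cm^3 s^{-1}; substantially larger couplings underproduce dark matter and smaller ones overproduce it, which is what turns the relic abundance into a preferred band in the simplified-model parameter space.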

  3. Sandia Material Model Driver

    Energy Science and Technology Software Center (OSTI)

    2005-09-28

    The Sandia Material Model Driver (MMD) software package allows users to run material models from a variety of different Finite Element Model (FEM) codes in a standalone fashion, independent of the host codes. The MMD software is designed to be run on a variety of different operating system platforms as a console application. Initial development efforts have resulted in a package that has been shown to be fast, convenient, and easy to use, with substantial growth potential.

  4. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    in warm dense matter experiments with diffuse interface methods in the ALE-AMR code. Wangyi Liu, John Barnard, Alex Friedman, Nathan Masters, Aaron Fisher, Velemir Mlaker, Alice Koniges, David Eder. August 4, 2011. Abstract: In this paper we describe an implementation of a single-fluid interface model in the ALE-AMR code to simulate surface tension effects. The model does not require explicit information on the physical state of the two phases. The only change to the existing fluid

  5. Direct liquefaction proof-of-concept program: Bench Run 05 (227-97). Final report

    SciTech Connect (OSTI)

    Comolli, A.G.; Pradhan, V.R.; Lee, T.L.K.; Karolkiewicz, W.F.; Popper, G.

    1997-04-01

    This report presents the results of Bench Run PB-05, conducted under the DOE Proof of Concept - Bench Option Program in direct coal liquefaction at Hydrocarbon Technologies, Inc. (HTI) in Lawrenceville, New Jersey. Bench Run PB-05 was the fifth of the nine runs planned in the POC Bench Option Contract between the U.S. DOE and HTI, and included the evaluation of the effect of using dispersed slurry catalyst in direct liquefaction of a high volatile bituminous Illinois No. 6 coal and in combined coprocessing of coal with organic wastes, such as heavy petroleum resid, MSW plastics, and auto-shredder residue (ASR). PB-05 employed a two-stage, back-mixed, slurry reactor system with an interstage V/L separator and an in-line fixed-bed hydrotreater. Coprocessing of waste plastics with Illinois No. 6 coal did not result in the improvement observed earlier with a subbituminous coal. In particular, decreases in light gas yield and hydrogen consumption were not observed with Illinois No. 6 coal as they were with Black Thunder Mine coal. The higher thermal severity during PB-05 is a possible reason for this discrepancy, plastics being more sensitive to temperature (cracking) than either coal or heavy resid. The ASR material was poorer than MSW plastics in terms of increasing conversions and yields. HTI's new dispersed catalyst formulation, containing phosphorus-promoted iron gel, was highly effective for the direct liquefaction of Illinois No. 6 coal under the reaction conditions employed; over 95% coal conversion was obtained, along with over 85% residuum conversion and over 73% distillate yields.

  6. THE RHIC INJECTOR ACCELERATORS CONFIGURATIONS, AND PERFORMANCE FOR THE RHIC 2003 AU - D PHYSICS RUN.

    SciTech Connect (OSTI)

    Ahrens, L; Benjamin, J; Blaskiewicz, M; Brennan, J M; Brown, K A; Carlson, K A; Delong, J; D' Ottavio, T; Frak, B; Gardner, C J; Glenn, J W; Harvey, M; Hayes, T; Hseuh, H- C; Ingrassia, P; Lowenstein, D; Mackay, W; Marr, G; Morris, J; Roser, T; Satogata, T; Smith, G; Smith, K S; Steski, D; Tsoupas, N; Thieberger, P; Zeno, K

    2003-05-12

    The RHIC 2003 Physics Run [1] required collisions between gold ions and deuterons. The injector necessarily had to deliver adequate quality (transverse and longitudinal emittance) and quantity of both species. For gold this was a continuing evolution from past work [2]. For deuterons it was new territory. For the filling of the RHIC the injector not only had to deliver quality beams but also had to switch between these species quickly. This paper details the collider requirements and our success in meeting these. Some details of the configurations employed are given.

  7. BIG RUN INDIANA LAKESHORE RUN E LUMBER CITY WARSAW JOHNSTOWN

    Gasoline and Diesel Fuel Update (EIA)

    Map-label residue from an EIA graphic: the indexed page consists only of place names in western Pennsylvania (Big Run, Indiana, Lumber City, Warsaw, Johnstown, Burnside, Millstone, Frostburg, Juneau, Plumville, Brookville, Ohiopyle, Tidioute, Grampian, Sligo, Rockville, and others); no prose content is recoverable.

  8. Diffraction and forward physics results of the ATLAS experiment from the Run I

    SciTech Connect (OSTI)

    Taševský, Marek

    2015-04-10

    Various aspects of forward physics have been studied by the ATLAS collaboration using data from Run I at the LHC. In this text, the main results of four published analyses are summarized, all based on data from proton-proton collisions at √s = 7 TeV collected in 2010 or 2011. Two analyses deal with the diffractive signature, one based on single-sided events, the other on large rapidity gaps in soft events. In addition, a recent measurement of the total pp cross section using the ALFA subdetector and a recent study of higher-order QCD effects using a jet veto are discussed.

  9. MiniBooNE: Up and Running, Morgan Wascko, Louisiana State University

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presentation by Morgan O. Wascko (Louisiana State University), Yang Institute Conference, 11 October 2002, on the MiniBooNE detector at Fermi National Accelerator Lab. Outline: Motivation; MiniBooNE Overview; Physics at MiniBooNE; Current Status; First Data. Includes a summary of the neutrino-oscillation evidence so far (solar: Δm² ~ 10^-(4~5); atmospheric: Δm² ...)

  10. modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NASA Earth at Night Video (EC, Energy, Energy Efficiency, Global, Modeling, News & Events, Solid-State Lighting, Videos). Have you ever wondered what the Earth looks like at night? NASA provides a clear, cloud-free view of the Earth at night using the Suomi National Polar-orbiting Partnership Satellite. The satellite utilizes an instrument known as the Visible Infrared Imaging Radiometer Suite (VIIRS), which allows the satellite to capture images of a "remarkably detailed

  12. RHIC polarized proton-proton operation at 100 GeV in Run 15

    SciTech Connect (OSTI)

    Schoefer, V.; Aschenauer, E. C.; Atoian, G.; Blaskiewicz, M.; Brown, K. A.; Bruno, D.; Connolly, R.; D Ottavio, T.; Drees, K. A.; Dutheil, Y.; Fischer, W.; Gardner, C.; Gu, X.; Hayes, T.; Huang, H.; Laster, J.; Liu, C.; Luo, Y.; Makdisi, Y.; Marr, G.; Marusic, A.; Meot, F.; Mernick, K.; Michnoff, R.; Marusic, A.; Minty, M.; Montag, C.; Morris, J.; Narayan, G.; Nemesure, S.; Pile, P.; Poblaguev, A.; Ranjbar, V.; Robert-Demolaize, G.; Roser, T.; Schmidke, W. B.; Severino, F.; Shrey, T.; Smith, K.; Steski, D.; Tepikian, S.; Trbojevic, D.; Tsoupas, N.; Tuozzolo, J.; Wang, G.; White, S.; Yip, K.; Zaltsman, A.; Zelenski, A.; Zeno, K.; Zhang, S. Y.

    2015-05-03

    The first part of RHIC Run 15 consisted of ten weeks of polarized proton on proton collisions at a beam energy of 100 GeV at two interaction points. In this paper we discuss several of the upgrades to the collider complex that allowed for improved performance. The largest effort was the commissioning of the electron lenses, one in each ring, which are designed to compensate one of the two beam-beam interactions experienced by the proton bunches. The e-lenses raise the per-bunch intensity at which luminosity becomes beam-beam limited. A new lattice was designed to create the phase advances necessary for beam-beam compensation with the e-lens, which also has an improved off-momentum dynamic aperture relative to previous runs. In order to take advantage of the new, higher intensity limit without suffering intensity-driven emittance deterioration, other features were commissioned, including a continuous transverse bunch-by-bunch damper in RHIC and a double harmonic RF capture scheme in the Booster. Other high intensity protections include improvements to the abort system and the installation of masks to intercept beam lost due to abort kicker pre-fires.
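
    The beam-beam limit referred to above is usually quantified through the beam-beam parameter per interaction point, which for round Gaussian beams is approximately independent of \beta^*; this is a generic sketch for orientation, not a formula taken from the record:

      \xi \approx N_p r_p / (4\pi \epsilon_n),

    where N_p is the bunch intensity, r_p the classical proton radius, and \epsilon_n the normalized rms emittance. An electron lens applies a compensating head-on kick, so a larger N_p can be tolerated for a given total beam-beam tune spread.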

  13. Experimental Results of NWCF Run H4 Calcine Dissolution Studies Performed in FY-98 and -99

    SciTech Connect (OSTI)

    Garn, Troy Gerry; Herbst, Ronald Scott; Batcheller, Thomas Aquinas; Sierra, Tracy Laureena

    2001-08-01

    Dissolution experiments were performed on actual samples of NWCF Run H-4 radioactive calcine in fiscal years 1998 and 1999. Run H-4 is an aluminum/sodium blend calcine. Typical dissolution data indicate that between 90-95 wt% of H-4 calcine can be dissolved using 1 gram of calcine per 10 mL of 5-8 M nitric acid at boiling temperature. Two liquid raffinate solutions, composed of a WM-188/aluminum nitrate blend and a WM-185/aluminum nitrate blend, were converted into calcine at the NWCF. Calcine made from each blend was collected and transferred to RAL for dissolution studies. The WM-188/aluminum nitrate blend calcine was dissolved, with the resulting solutions used as feed material for separation treatment experimentation. The WM-185/aluminum nitrate blend calcine dissolution testing was performed to determine compositional analyses of the dissolved solution and to generate undissolved solids (UDS) for solid/liquid separation experiments. Analytical fusion techniques were then used to determine compositions of the solid calcine and the UDS from dissolution. The results from each of these analyses were used to calculate elemental material balances around the dissolution process, validating the experimental data. This report contains all experimental data from dissolution experiments performed using both calcine blends.

  14. Modeling

    SciTech Connect (OSTI)

    Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.

    2005-09-01

    Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.

  15. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    diffuse interface methods in ALE-AMR code with application in modeling NDCX-II experiments. Wangyi Liu{sup 1}, John Barnard{sup 2}, Alex Friedman{sup 2}, Nathan Masters{sup 2}, Aaron Fisher{sup 2}, Alice Koniges{sup 2}, David Eder{sup 2}; {sup 1}LBNL, USA, {sup 2}LLNL, USA. This work was part of the Petascale Initiative in Computational Science at NERSC, supported by the Director, Office of Science, Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was performed

  16. THE MATRYOSHKA RUN. II. TIME-DEPENDENT TURBULENCE STATISTICS, STOCHASTIC PARTICLE ACCELERATION, AND MICROPHYSICS IMPACT IN A MASSIVE GALAXY CLUSTER

    SciTech Connect (OSTI)

    Miniati, Francesco

    2015-02-10

    We use the Matryoshka run to study the time-dependent statistics of structure-formation-driven turbulence in the intracluster medium of a 10{sup 15} M{sub ⊙} galaxy cluster. We investigate the turbulent cascade in the inner megaparsec for both compressional and incompressible velocity components. The flow maintains approximate conditions of fully developed turbulence, with departures thereof settling in about an eddy-turnover time. Turbulent velocity dispersion remains above 700 km s{sup -1} even at low mass accretion rate, with the fraction of compressional energy between 10% and 40%. The normalization and the slope of the compressional turbulence are susceptible to large variations on short timescales, unlike the incompressible counterpart. A major merger occurs around redshift z ≈ 0 and is accompanied by a long period of enhanced turbulence, ascribed to temporal clustering of mass accretion related to spatial clustering of matter. We test models of stochastic acceleration by compressional modes for the origin of diffuse radio emission in galaxy clusters. The turbulence simulation model constrains an important unknown of this complex problem and brings forth its dependence on the elusive microphysics of the intracluster plasma. In particular, the specifics of the plasma collisionality and the dissipation physics of weak shocks affect the cascade of compressional modes with strong impact on the acceleration rates. In this context radio halos emerge as complex phenomena in which a hierarchy of processes acting on progressively smaller scales are at work. Stochastic acceleration by compressional modes implies statistical correlation of radio power and spectral index with merging cores distance, both testable in principle with radio surveys.

  17. Emissions of Volatile Particulate Components from Turboshaft Engines running JP-8 and Fischer-Tropsch Fuels

    SciTech Connect (OSTI)

    Cheng, Mengdawn; Corporan, E.; DeWitt, M.; Landgraf, Bradley J

    2009-01-01

    Rotating-wing aircraft (helicopters) are heavily used by the US military and in a wide range of commercial applications around the world, but emissions data for this class of engines are limited. In this study, we focus on emissions from T700-GE-700 and T700-GE-701C engines; the T700 engine was run with military JP-8, and the T701C was run with both JP-8 and Fischer-Tropsch (FT) fuels. Each engine was run at three engine power settings in sequence, from idle to maximum power. Exhaust particles measured at the engine exhaust plane (EEP) have a peak mobility diameter of less than 50 nm at all engine power settings. At a location 4 m downstream, sulfate/sulfur measurements indicate that all particulate sulfur exists practically as sulfate, and the particulate sulfur and sulfate contents increased as the engine power increased. The conversion of sulfur to sulfate was found not to depend on engine power setting. Analysis also showed that the conversion of sulfur to sulfate did not occur by adsorption of sulfur dioxide gas on the soot particles with subsequent oxidation to sulfate, but by gas-phase conversion of SO2 via OH or O, subsequently forming H2SO4 that condenses on soot particles. Because it lacks sulfur and aromatic components, use of the FT fuel led to a significant reduction of soot emissions compared to the JP-8 fuel, producing fewer particles; however, the FT fuel produced much higher number concentrations of particles smaller than 7 nm than JP-8 at all engine power settings. This indicates that non-aromatic components in the FT fuel could have contributed to the enhancement of emissions of particles smaller than 7 nm. These small particles are volatile, are not observed at the EEP, and may play a role in the formation of secondary particles in the atmosphere or serve as effective sites for cloud condensation nuclei.

  18. Measurement of the t tbar cross section at the Run II Tevatron using Support Vector Machines

    SciTech Connect (OSTI)

    Whitehouse, Benjamin Eric; /Tufts U.

    2010-08-01

    This dissertation measures the t{bar t} production cross section with the Run II CDF detector using data from early 2001 through March 2007. The Tevatron at Fermilab is a p{bar p} collider with center of mass energy {radical}s = 1.96 TeV. The data compose a sample with a time-integrated luminosity measured at 2.2 {+-} 0.1 fb{sup -1}. A system of learning machines is developed to recognize t{bar t} events in the 'lepton plus jets' decay channel. Support Vector Machines are described, and their ability to cope with a multi-class discrimination problem is demonstrated. The t{bar t} production cross section is then measured in this framework, and found to be {sigma}{sub t{bar t}} = 7.14 {+-} 0.25 (stat){sub -0.86}{sup +0.61}(sys) pb.

  19. The Manuel Lujan, Jr. Neutron Scattering Center, LANSCE experiment reports: 1990 Run Cycle

    SciTech Connect (OSTI)

    DiStravolo, M.A.

    1991-10-01

    This year was the third in which LANSCE ran a formal user program. A call for proposals was issued before the scheduled run cycles, and experiment proposals were submitted by scientists from universities, industry, and other research facilities around the world. An external program advisory committee, which LANSCE shares with the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory, examined the proposals and made recommendations. At LANSCE, neutrons are produced by spallation when a pulsed, 800-MeV proton beam impinges on a tungsten target. The proton pulses are provided by the Clinton P. Anderson Meson Physics Facility (LAMPF) accelerator and an associated Proton Storage Ring (PSR), which can alter the intensity, time structure, and repetition rate of the pulses. The LAMPF protons of Line D are shared between the LANSCE target and the Weapons Neutron Research facility, which results in LANSCE spectrometers being available to external users for unclassified research about 80% of each six-month LAMPF run cycle. Measurements of interest to the Los Alamos National Laboratory may also be performed and may occupy up to an additional 20% of the available beam time. These experiments are reviewed by an internal program advisory committee. One hundred thirty-four proposals were submitted for unclassified research and twelve proposals for research of a programmatic nature to the Laboratory. Beam is defined as available when the proton current from the PSR exceeds 50% of the planned value. The PSR ran at an average current of 65 {mu}A at 20 Hz for most of 1990. All of the scheduled experiments were performed, and experiments in support of the LANSCE research program were accomplished during the discretionary periods.

  20. The Manuel Lujan, Jr. Neutron Scattering Center (LANSCE) experiment reports 1992 run cycle. Progress report

    SciTech Connect (OSTI)

    DiStravolo, M.A.

    1993-09-01

    This year was the fifth in which LANSCE ran a formal user program. A call for proposals was issued before the scheduled run cycles, and experiment proposals were submitted by scientists from universities, industry, and other research facilities around the world. An external program advisory committee, which LANSCE shares with the Intense Pulsed Neutron Source (IPNS), Argonne National Laboratory, examined the proposals and made recommendations. At LANSCE, neutrons are produced by spallation when a pulsed, 800-MeV proton beam impinges on a tungsten target. The proton pulses are provided by the Clinton P. Anderson Meson Physics Facility (LAMPF) accelerator and an associated Proton Storage Ring (PSR), which can alter the intensity, time structure, and repetition rate of the pulses. The LAMPF protons of Line D are shared between the LANSCE target and the Weapons Neutron Research (WNR) facility, which results in LANSCE spectrometers being available to external users for unclassified research about 80% of each annual LAMPF run cycle. Measurements of interest to the Los Alamos National Laboratory may also be performed and may occupy up to an additional 20% of the available beam time. These experiments are reviewed by an internal program advisory committee. One hundred sixty-seven proposals were submitted for unclassified research and twelve proposals for research of programmatic interest to the Laboratory; six experiments in support of the LANSCE research program were accomplished during the discretionary periods. Oversubscription for instrument beam time by a factor of three was evident, with 839 total days requested and only 371 available for allocation.

  1. Regions in Energy Market Models

    SciTech Connect (OSTI)

    Short, W.

    2007-02-01

    This report explores the different options for spatial resolution of an energy market model--and the advantages and disadvantages of models with fine spatial resolution. It examines different options for capturing spatial variations, considers the tradeoffs between them, and presents a few examples from one particular model that has been run at different levels of spatial resolution.

  2. Regions in Energy Market Models

    SciTech Connect (OSTI)

    2009-01-18

    This report explores the different options for spatial resolution of an energy market model and the advantages and disadvantages of models with fine spatial resolution. It examines different options for capturing spatial variations, considers the tradeoffs between them, and presents a few examples from one particular model that has been run at different levels of spatial resolution.

  3. Modeling particle loss in ventilation ducts

    SciTech Connect (OSTI)

    Sippola, Mark R.; Nazaroff, William W.

    2003-04-01

    Empirical equations were developed and applied to predict losses of 0.01-100 {micro}m airborne particles making a single pass through 120 different ventilation duct runs typical of those found in mid-sized office buildings. For all duct runs, losses were negligible for submicron particles and nearly complete for particles larger than 50 {micro}m. The 50th percentile cut-point diameters were 15 {micro}m in supply runs and 25 {micro}m in return runs. Losses in supply duct runs were higher than in return duct runs, mostly because internal insulation was present in portions of supply duct runs, but absent from return duct runs. Single-pass equations for particle loss in duct runs were combined with models for predicting ventilation system filtration efficiency and particle deposition to indoor surfaces to evaluate the fates of particles of indoor and outdoor origin in an archetypal mechanically ventilated building. Results suggest that duct losses are a minor influence for determining indoor concentrations for most particle sizes. Losses in ducts were of a comparable magnitude to indoor surface losses for most particle sizes. For outdoor air drawn into an unfiltered ventilation system, most particles smaller than 1 {micro}m are exhausted from the building. Large particles deposit within the building, mostly in supply ducts or on indoor surfaces. When filters are present, most particles are either filtered or exhausted. The fates of particles generated indoors follow similar trends as outdoor particles drawn into the building.
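
    As a rough illustration of how single-pass duct losses of the kind described above feed into indoor concentration estimates, the sketch below combines a duct penetration fraction with filter efficiency and indoor surface deposition in a steady-state, well-mixed single-zone mass balance. It is not the paper's empirical model; the function name and all parameter values are illustrative assumptions.

      # Minimal sketch (not the paper's model): steady-state indoor concentration of
      # outdoor particles entering through a ventilation system, combining duct
      # penetration, filter efficiency, and indoor surface deposition.
      # All parameter values below are illustrative placeholders.

      def indoor_concentration(c_outdoor, duct_penetration, filter_efficiency,
                               air_exchange_per_hr, deposition_per_hr):
          """Well-mixed single-zone balance: V dC/dt = Q*P*(1-eta)*C_out - Q*C - k_dep*V*C,
          solved at steady state and normalized by the zone volume."""
          supply = air_exchange_per_hr * duct_penetration * (1.0 - filter_efficiency)
          removal = air_exchange_per_hr + deposition_per_hr
          return supply * c_outdoor / removal

      # Example: a coarse particle size with heavy duct loss and strong indoor deposition.
      c_in = indoor_concentration(c_outdoor=20.0,        # ug/m^3 outdoors
                                  duct_penetration=0.6,  # fraction surviving the duct run
                                  filter_efficiency=0.5, # fraction removed by the filter
                                  air_exchange_per_hr=1.0,
                                  deposition_per_hr=2.0)
      print(f"steady-state indoor concentration: {c_in:.2f} ug/m^3")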

  4. Effects of the running of the QCD coupling on the energy loss in the quark-gluon plasma

    SciTech Connect (OSTI)

    Braun, Jens; Pirner, Hans-Juergen

    2007-03-01

    Finite temperature modifies the running of the QCD coupling {alpha}{sub s}(k,T) with resolution k. After calculating the thermal quark and gluon masses self-consistently, we determine the quark-quark and quark-gluon cross sections in the plasma based on the running coupling. We find that the running coupling enhances these cross sections by factors of two to four, depending on the temperature. We also compute the energy loss (dE/dx) of a high-energy quark in the plasma as a function of temperature. Our study suggests that, besides t-channel processes, inverse Compton scattering is a relevant process for a quantitative understanding of the energy loss of an incident quark in a hot plasma.
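
    For orientation only, the sketch below evaluates the familiar zero-temperature one-loop running coupling; the record above concerns a thermal coupling {alpha}{sub s}(k,T), which this simple formula does not capture. The choices of n_f and Lambda_QCD are illustrative assumptions.

      import math

      def alpha_s_one_loop(q_gev, n_f=3, lambda_qcd_gev=0.25):
          """Vacuum one-loop coupling: alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2/Lambda^2)).
          This is the textbook zero-temperature form, not the thermal coupling
          constructed in the record above."""
          if q_gev <= lambda_qcd_gev:
              raise ValueError("one-loop formula diverges at or below Lambda_QCD")
          return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log((q_gev / lambda_qcd_gev) ** 2))

      for q in (0.5, 1.0, 2.0, 10.0, 91.2):
          print(f"alpha_s({q:>5} GeV) ~ {alpha_s_one_loop(q):.3f}")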

  5. Short run effects of a price on carbon dioxide emissions from U.S. electric generators

    SciTech Connect (OSTI)

    Adam Newcomer; Seth A. Blumsack; Jay Apt; Lester B. Lave; M. Granger Morgan [Carnegie Mellon University, Pittsburgh, PA (United States). Carnegie Mellon Electricity Industry Center

    2008-05-01

    The price of delivered electricity will rise if generators have to pay for carbon dioxide emissions through an implicit or explicit mechanism. There are two main effects that a substantial price on CO{sub 2} emissions would have in the short run (before the generation fleet changes significantly). First, consumers would react to increased price by buying less, described by their price elasticity of demand. Second, a price on CO{sub 2} emissions would change the order in which existing generators are economically dispatched, depending on their carbon dioxide emissions and marginal fuel prices. Both the price increase and dispatch changes depend on the mix of generation technologies and fuels in the region available for dispatch, although the consumer response to higher prices is the dominant effect. We estimate that the instantaneous imposition of a price of $35 per metric ton on CO{sub 2} emissions would lead to a 10% reduction in CO{sub 2} emissions in PJM and MISO at a price elasticity of -0.1. Reductions in ERCOT would be about one-third as large. Thus, a price on CO{sub 2} emissions that has been shown in earlier work to stimulate investment in new generation technology also provides significant CO{sub 2} reductions before new technology is deployed at large scale. 39 refs., 4 figs., 2 tabs.
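
    A minimal back-of-the-envelope sketch of the demand-side effect described above: with a constant price elasticity, the quantity response to a CO2-driven price increase is Q1 = Q0 (P1/P0)^elasticity. The emissions-intensity and price figures in the example are illustrative assumptions, and the sketch omits the dispatch-order effect that the paper also models.

      def demand_after_price_change(q0, p0, p1, elasticity):
          """Constant-elasticity demand: Q1 = Q0 * (P1/P0)**elasticity."""
          return q0 * (p1 / p0) ** elasticity

      # Illustrative only: a $35/tCO2 adder on a region with an assumed average
      # emissions intensity of ~0.6 tCO2/MWh raises a $60/MWh price by ~$21.
      p0, p1 = 60.0, 60.0 + 35.0 * 0.6
      q1 = demand_after_price_change(q0=1.0, p0=p0, p1=p1, elasticity=-0.1)
      print(f"demand falls by {100 * (1 - q1):.1f}% at elasticity -0.1")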

  6. A Run-Time Verification Framework for Smart Grid Applications Implemented on Simulation Frameworks

    SciTech Connect (OSTI)

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2013-05-18

    Smart grid applications are implemented and tested with simulation frameworks because the developers usually do not have access to large sensor networks to be used as a test bed. The developers are forced to map the implementation onto these frameworks, which results in a deviation between the architecture and the code. In turn, this deviation makes it hard to verify behavioral constraints that are described at the architectural level. We have developed the ConArch toolset to support the automated verification of architecture-level behavioral constraints. A key feature of ConArch is programmable mapping from the architecture to the implementation. Here, developers implement queries to identify the points in the target program that correspond to architectural interactions. ConArch generates run-time observers that monitor the flow of execution between these points and verify whether this flow conforms to the behavioral constraints. We illustrate how the programmable mappings can be exploited for verifying behavioral constraints of a smart grid application that is implemented with two simulation frameworks.
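
    The toy sketch below illustrates the general idea of a run-time observer that checks observed interactions against an architecture-level flow constraint; it is not ConArch's API, and all names are hypothetical.

      # Toy illustration of a run-time observer (not ConArch's API): interactions
      # reported by instrumented code are checked against the set of transitions
      # permitted by an architecture-level constraint.

      class FlowObserver:
          def __init__(self, allowed_transitions):
              self.allowed = set(allowed_transitions)
              self.previous = None
              self.violations = []

          def report(self, point):
              """Called at each instrumented architectural interaction point."""
              if self.previous is not None and (self.previous, point) not in self.allowed:
                  self.violations.append((self.previous, point))
              self.previous = point

      # Hypothetical constraint: a meter reading must pass through validation
      # before aggregation.
      observer = FlowObserver({("read_meter", "validate"), ("validate", "aggregate")})
      for step in ("read_meter", "validate", "aggregate"):
          observer.report(step)
      print("violations:", observer.violations)   # [] -> the observed flow conforms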

  7. Improved NLDAS-2 Noah-simulated Hydrometeorological Products with an Interim Run

    SciTech Connect (OSTI)

    Xia, Youlong; Peter-Lidard, Christa; Huang, Maoyi; Wei, Helin; Ek, Michael

    2015-02-28

    In the NLDAS-2 Noah simulation, the NLDAS team introduced an intermediate fix suggested by Slater et al. (2007) and Livneh et al. (2010) to reduce excessive sublimation. The fix constrains the surface exchange coefficient (CH) using CH = CH_original x max(1.0 - RiB/0.5, 0.05) when the atmospheric boundary layer is stable, where RiB is the bulk Richardson number. In the NLDAS-2 Noah version, this fix was applied to all stable cases, including snow-free grid cells. In this study, recognizing that the constraint in NLDAS-2 is too strong, we applied the fix only to grid cells in which a stable atmospheric boundary layer and snow exist simultaneously, excluding snow-free grid cells. We made a 31-year (1979-2009) Noah NLDAS-2 interim (NoahI) run. We use observed streamflow, evapotranspiration, land surface temperature, soil temperature, and ground heat flux to evaluate the results simulated by NoahI and compare them with those simulated by NLDAS-2 Noah (Xia et al., 2012). The results show that NoahI performs the same as Noah for snow water equivalent simulation. However, NoahI significantly improves the simulation of the other hydrometeorological products described above when compared to Noah and the observations. This simple modification is being installed in the next Noah version. The hydrometeorological products simulated by NoahI will be staged on the NCEP public server in the future.
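
    A minimal sketch of the constraint quoted above, applied in the interim (NoahI) style, i.e., only where the boundary layer is stable and snow is present. Treating RiB > 0 as the stability test, and the array names, are assumptions for illustration.

      import numpy as np

      def constrain_ch(ch_original, bulk_richardson, snow_present):
          """Apply CH = CH_original * max(1 - RiB/0.5, 0.05) only where the boundary
          layer is stable (here taken as RiB > 0) AND snow is present, as in the
          interim (NoahI) run; the original NLDAS-2 Noah applied it to all stable cells."""
          factor = np.maximum(1.0 - bulk_richardson / 0.5, 0.05)
          stable_and_snow = (bulk_richardson > 0.0) & snow_present
          return np.where(stable_and_snow, ch_original * factor, ch_original)

      ch   = np.array([2.0e-3, 2.0e-3, 2.0e-3])
      rib  = np.array([0.3,    0.3,   -0.1])     # stable, stable, unstable
      snow = np.array([True,   False,  True])
      print(constrain_ch(ch, rib, snow))          # only the first cell is constrained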

  8. Liquid phase fluid dynamic (methanol) run in the LaPorte alternative fuels development unit

    SciTech Connect (OSTI)

    Bharat L. Bhatt

    1997-05-01

    A fluid dynamic study was successfully completed in a bubble column at DOE's Alternative Fuels Development Unit (AFDU) in LaPorte, Texas. Significant fluid dynamic information was gathered at pilot scale during three weeks of Liquid Phase Methanol (LPMEOH{trademark}) operations in June 1995. In addition to the usual nuclear density and temperature measurements, unique differential pressure data were collected using Sandia's high-speed data acquisition system to gain insight into flow regime characteristics and bubble size distribution. Statistical analysis of the fluctuations in the pressure data suggests that the column was being operated in the churn turbulent regime at most of the velocities considered. Dynamic gas disengagement experiments showed a different behavior than seen in low-pressure, cold-flow work. Operation with a superficial gas velocity of 1.2 ft/sec was achieved during this run, with stable fluid dynamics and catalyst performance. Improvements included for catalyst activation in the design of the Clean Coal III LPMEOH{trademark} plant at Kingsport, Tennessee, were also confirmed. In addition, an alternate catalyst was demonstrated for LPMEOH{trademark}.

  9. Trial Run of a Junction-Box Attachment Test for Use in Photovoltaic Module Qualification (Presentation)

    SciTech Connect (OSTI)

    Miller, D.; Deibert, S.; Wohlgemuth, J.

    2014-06-01

    Engineering robust adhesion of the junction-box (j-box) is a hurdle typically encountered by photovoltaic (PV) module manufacturers during product development and manufacturing process control. There are historical incidences of adverse effects (e.g., fires), caused when the j-box/adhesive/module system has failed in the field. The addition of a weight to the j-box during the 'damp-heat', 'thermal-cycle', or 'creep' tests within the IEC qualification protocol is proposed to verify the basic robustness of the adhesion system. The details of the proposed test are described, in addition to a trial run of the test procedure. The described experiments examine 4 moisture-cured silicones, 4 foam tapes, and a hot-melt adhesive used in conjunction with glass, KPE, THV, and TPE substrates. For the purpose of validating the experiment, j-boxes were adhered to a substrate, loaded with a prescribed weight, and then subjected to aging. The replicate mock-modules were aged in an environmental chamber (at 85 deg C/85% relative humidity for 1000 hours; then 100 degrees C/<10% relative humidity for 200 hours) or fielded in Golden, Miami, and Phoenix for 1 year. Attachment strength tests, including pluck and shear test geometries, were also performed on smaller component specimens.

  10. Communication library for run-time visualization of distributed, asynchronous data

    SciTech Connect (OSTI)

    Rowlan, J.; Wightman, B.T.

    1994-04-01

    In this paper we present a method for collecting and visualizing data generated by a parallel computational simulation during run time. Data distributed across multiple processes are sent across parallel communication lines to a remote workstation, which sorts and queues the data for visualization. We have implemented our method in a set of tools called PORTAL (for Parallel aRchitecture data-TrAnsfer Library). The tools comprise generic routines for sending data from a parallel program (callable from either C or FORTRAN), a semi-parallel communication scheme currently built upon Unix sockets, and a real-time connection to the scientific visualization program AVS. Our method is most valuable when used to examine large datasets that can be efficiently generated and do not need to be stored on disk. The PORTAL source libraries, detailed documentation, and a working example can be obtained by anonymous ftp from info.mcs.anl.gov, file portal.tar.Z in the directory pub/portal.
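
    The sketch below shows the general pattern only, a simulation process streaming tagged arrays over a TCP socket for run-time visualization, written in Python rather than the C/FORTRAN PORTAL routines; the wire format, host, and port are illustrative assumptions, not PORTAL's.

      # Minimal sketch of the general pattern (not the PORTAL API): a simulation
      # process streams tagged arrays over a TCP socket so a remote tool can
      # visualize them while the run is still in progress.
      import socket
      import struct
      import numpy as np

      def send_array(sock, step, field):
          """Send a small header (step number, element count) followed by raw float64 data."""
          payload = np.asarray(field, dtype=np.float64).tobytes()
          sock.sendall(struct.pack("!ii", step, len(payload) // 8) + payload)

      def stream_simulation(host="127.0.0.1", port=5007, steps=10):
          with socket.create_connection((host, port)) as sock:
              state = np.zeros(64)
              for step in range(steps):
                  state += np.random.normal(scale=0.1, size=state.shape)  # stand-in "solver"
                  send_array(sock, step, state)

      # A receiver would read the 8-byte header, then count*8 bytes of data, and
      # queue the array for rendering, mirroring the sort-and-queue role that
      # PORTAL assigns to the remote workstation.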

  11. The Manuel Lujan Jr. Neutron Scattering Center (LANSCE) experiment reports 1993 run cycle. Progress report

    SciTech Connect (OSTI)

    Farrer, R.; Longshore, A.

    1995-06-01

    This year the Manuel Lujan Jr. Neutron Scattering Center (LANSCE) ran an informal user program because the US Department of Energy planned to close LANSCE in FY1994. As a result, an advisory committee recommended that LANSCE scientists and their collaborators complete work in progress. At LANSCE, neutrons are produced by spallation when a pulsed, 800-MeV proton beam impinges on a tungsten target. The proton pulses are provided by the Clinton P. Anderson Meson Physics Facility (LAMPF) accelerator and an associated Proton Storage Ring (PSR), which can alter the intensity, time structure, and repetition rate of the pulses. The LAMPF protons of Line D are shared between the LANSCE target and the Weapons Neutron Research (WNR) facility, which results in LANSCE spectrometers being available to external users for unclassified research about 80% of each annual LAMPF run cycle. Measurements of interest to the Los Alamos National Laboratory (LANL) may also be performed and may occupy up to an additional 20% of the available beam time. These experiments are reviewed by an internal program advisory committee. This year, a total of 127 proposals were submitted. The proposed experiments involved 229 scientists, 57 of whom visited LANSCE to participate in measurements. In addition, 3 (nuclear physics) participating research teams, comprising 44 scientists, carried out experiments at LANSCE. Instrument beam time was again oversubscribed, with 552 total days requested and 473 available for allocation.

  12. Searching for R-parity violation at run-II of the tevatron.

    SciTech Connect (OSTI)

    Allanach, B.; Banerjee, S.; Berger, E. L.; Chertok, M.; Diaz, M. A.; Dreiner, H.; Eboli, O. J. P.; Harris, B. W.; Hewett, J.; Magro, M. B.; Mondal, N. K.; Narasimham, V. S.; Navarro, L.; Parua, N.; Porod, W.; Restrepo, D. A.; Richardson, P.; Rizzo, T.; Seymour, M. H.; Sullivan, Z.; Valle, J. W. F.; de Campos, F.

    1999-06-22

    The authors present an outlook for possible discovery of supersymmetry with broken R-parity at Run II of the Tevatron. They first present a review of the literature and an update of the experimental bounds. They then discuss the following processes: (1) resonant slepton production followed by R{sub P} decay, (a) via LQD{sup c} and (b) via LLE{sup c}; (2) how to distinguish resonant slepton production from Z{prime} or W{prime} production; (3) resonant slepton production followed by the decay to neutralino LSP, which decays via LQD{sup c}; (4) resonant stop production followed by the decay to a chargino, which cascades to the neutralino LSP; (5) gluino pair production followed by the cascade decay to charm squarks which decay directly via L{sub 1}Q{sub 2}D{sub 1}{sup c}; (6) squark pair production followed by the cascade decay to the neutralino LSP which decays via L{sub 1}Q{sub 2}D{sub 1}{sup c}; (7) MSSM pair production followed by the cascade decay to the LSP which decays (a) via LLE{sup c}, (b) via LQD{sup c}, and (c) via U{sup c}D{sup c}D{sup c}, respectively; and (8) top quark and top squark decays in spontaneous R{sub P}.

  13. Effect of CNG start - gasoline run on emissions from a 3/4 ton pick-up truck

    SciTech Connect (OSTI)

    Springer, K.J.; Smith, L.R.; Dickinson, A.G.

    1994-10-01

    This paper describes experiments to determine the effect on exhaust emissions of starting on compressed natural gas (CNG) and then switching to gasoline once the catalyst reaches operating temperature. Carbon monoxide, oxides of nitrogen, and detailed exhaust hydrocarbon speciation data were obtained for dedicated CNG, then unleaded gasoline, and finally CNG start - gasoline run using the Federal Test Procedure at 24{degree}C and at -7{degree}C. The result was a reduction in emissions from the gasoline baseline, especially at -7{degree}C. It was estimated that CNG start - gasoline run resulted in a 71 percent reduction in potential ozone formation per mile. 3 refs., 6 figs., 11 tabs.

  14. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOE Patents [OSTI]

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
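
    A toy sketch of the suspend/save/recreate/restart sequence described in the claim, using hypothetical names and a JSON file as the saved state; it is not the patented implementation.

      # Toy sketch of the sequence described above (hypothetical names, not the
      # patented implementation): save each process's state plus its connection
      # metadata, then recreate connections before resuming the processes.
      import json

      def checkpoint(processes, path="checkpoint.json"):
          """Each entry in `processes` is assumed to be a dict with 'pid', 'state',
          and a 'peers' list naming the processes it is connected to."""
          record = {
              "processes": [{"pid": p["pid"], "state": p["state"]} for p in processes],
              "connections": [{"src": src, "dst": dst} for (src, dst) in
                              {(p["pid"], peer) for p in processes for peer in p["peers"]}],
          }
          with open(path, "w") as fh:
              json.dump(record, fh)

      def restart(path="checkpoint.json"):
          with open(path) as fh:
              record = json.load(fh)
          # Recreate connections first, then resume each process from its saved state.
          for conn in record["connections"]:
              print(f"re-establish link {conn['src']} -> {conn['dst']}")
          for proc in record["processes"]:
              print(f"resume process {proc['pid']} from saved state")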

  15. The PDF4LHC report on PDFs and LHC data: results from Run I and...

    Office of Scientific and Technical Information (OSTI)

    Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterization and precision Standard Model measurements to New Physics...

  16. Non-Kyoto Radiative Forcing in Long-Run Greenhouse Gas Emissions and Climate Change Scenarios

    SciTech Connect (OSTI)

    Rose, Steven K.; Richels, Richard G.; Smith, Steven J.; Riahi, Keywan; Stefler, Jessica; Van Vuuren, Detlef

    2014-04-27

    Climate policies designed to achieve climate change objectives must consider radiative forcing from the Kyoto greenhouse gases as well as other forcing constituents, such as aerosols and tropospheric ozone. Net positive forcing leads to global average temperature increases. Modeling of non-Kyoto forcing is a relatively new component of climate management scenarios. Five of the nineteen models in the EMF-27 Study model both Kyoto and non-Kyoto forcing. This paper describes and assesses current non-Kyoto radiative forcing modeling within these integrated assessment models. The study finds negative forcing from aerosols masking significant positive forcing in reference non-climate-policy projections. There are, however, large differences across models in projected non-Kyoto emissions and forcing, with differences stemming from different relationships between Kyoto and non-Kyoto emissions and fundamental differences in modeling structure and assumptions. Air pollution and non-Kyoto forcing decline in the climate policy scenarios. However, non-Kyoto forcing appears to be influencing mitigation results, including allowable carbon dioxide emissions, and further evaluation is merited. Overall, there is substantial uncertainty related to non-Kyoto forcing that must be considered.

  17. Simple ocean carbon cycle models

    SciTech Connect (OSTI)

    Caldeira, K.; Hoffert, M.I.; Siegenthaler, U.

    1994-02-01

    Simple ocean carbon cycle models can be used to calculate the rate at which the oceans are likely to absorb CO{sub 2} from the atmosphere. For problems involving steady-state ocean circulation, well calibrated ocean models produce results that are very similar to results obtained using general circulation models. Hence, simple ocean carbon cycle models may be appropriate for use in studies in which the time or expense of running large scale general circulation models would be prohibitive. Simple ocean models have the advantage of being based on a small number of explicit assumptions. The simplicity of these ocean models facilitates the understanding of model results.
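
    In the spirit of the simple models discussed above, the sketch below integrates a crude two-box (atmosphere plus ocean mixed layer) carbon exchange. The reservoir sizes and exchange rates are illustrative assumptions, not values from a calibrated model, and real simple models add deep-ocean transport and carbonate chemistry.

      import numpy as np

      def two_box_uptake(emissions_gtc_per_yr, years=200, k_ao=0.2, k_oa=0.2 * 600.0 / 900.0):
          """Crude two-box sketch: an atmosphere (starting at ~600 GtC) exchanging
          carbon with an ocean mixed layer (~900 GtC). k_ao and k_oa are illustrative
          exchange rates (1/yr) chosen so the system starts in equilibrium."""
          atm, ocean = 600.0, 900.0
          history = []
          for _ in range(years):
              flux = k_ao * atm - k_oa * ocean          # net air-to-sea transfer, GtC/yr
              atm += emissions_gtc_per_yr - flux
              ocean += flux
              history.append(atm)
          return np.array(history)

      atm_trajectory = two_box_uptake(emissions_gtc_per_yr=8.0)
      print(f"atmospheric carbon after 200 yr of constant emissions: {atm_trajectory[-1]:.0f} GtC")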

  18. AGR-2 irradiation test final as-run report, Rev. 1

    SciTech Connect (OSTI)

    Collin, Blaise

    2014-08-01

    This document presents the as-run analysis of the AGR-2 irradiation experiment. AGR-2 is the second of the planned irradiations for the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. Funding for this program is provided by the U.S. Department of Energy as part of the Very High Temperature Reactor (VHTR) Technical Development Office (TDO) program. The objectives of the AGR-2 experiment are to: (a) Irradiate UCO (uranium oxycarbide) and UO2 (uranium dioxide) fuel produced in a large coater. Fuel attributes are based on results obtained from the AGR-1 test and other project activities; (b) Provide irradiated fuel samples for post-irradiation examination (PIE) and safety testing; and, (c) Support the development of an understanding of the relationship between fuel fabrication processes, fuel product properties, and irradiation performance. The primary objective of the test was to irradiate both UCO and UO2 TRISO (tri-structural isotropic) fuel produced from prototypic scale equipment to obtain normal operation and accident condition fuel performance data. The UCO compacts were subjected to a range of burnups and temperatures typical of anticipated prismatic reactor service conditions in three capsules. The test train also includes compacts containing UO2 particles produced independently by the United States, South Africa, and France in three separate capsules. The range of burnups and temperatures in these capsules were typical of anticipated pebble bed reactor service conditions. The results discussed in this report pertain only to U.S.-produced fuel. In order to achieve the test objectives, the AGR-2 experiment was irradiated in the B-12 position of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) for a total irradiation duration of 559.2 effective full power days (EFPD). Irradiation began on June 22, 2010, and ended on October 16, 2013, spanning 12 ATR power cycles and approximately three and a half calendar years. The test contained six independently controlled and monitored capsules. Each U.S. capsule contained 12 compacts of either UCO or UO2 AGR coated fuel. No fuel particles failed during the AGR-2 irradiation. Final burnup values on a per compact basis ranged from 7.26 to 13.15% FIMA (fissions per initial heavy-metal atom) for UCO fuel, and 9.01 to 10.69% FIMA for UO2 fuel, while fast fluence values ranged from 1.94 to 3.47 x 10{sup 25} n/m{sup 2} (E >0.18 MeV) for UCO fuel, and from 3.05 to 3.53 x 10{sup 25} n/m{sup 2} (E >0.18 MeV) for UO2 fuel. Time-average volume-average (TAVA) temperatures on a capsule basis at the end of irradiation ranged from 987{degree}C in Capsule 6 to 1296{degree}C in Capsule 2 for UCO, and from 996 to 1062{degree}C in UO2-fueled Capsule 3. By the end of the irradiation, all of the installed thermocouples (TCs) had failed. Fission product release-to-birth (R/B) ratios were quite low. In the UCO capsules, R/B values during the first three cycles were below 10{sup -6}, with the exception of the hotter Capsule 2, in which the R/Bs reached 2 x 10{sup -6}. In the UO2 capsule (Capsule 3), the R/B values during the first three cycles were below 10{sup -7}. R/B values for all following cycles are not reliable due to gas flow and cross-talk issues.

  19. AGR-2 Irradiation Test Final As-Run Report, Rev 2

    SciTech Connect (OSTI)

    Blaise Collin

    2014-08-01

    This document presents the as-run analysis of the AGR-2 irradiation experiment. AGR-2 is the second of the planned irradiations for the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. Funding for this program is provided by the U.S. Department of Energy as part of the Very High Temperature Reactor (VHTR) Technical Development Office (TDO) program. The objectives of the AGR-2 experiment are to: (a) Irradiate UCO (uranium oxycarbide) and UO2 (uranium dioxide) fuel produced in a large coater. Fuel attributes are based on results obtained from the AGR-1 test and other project activities. (b) Provide irradiated fuel samples for post-irradiation examination (PIE) and safety testing. (c) Support the development of an understanding of the relationship between fuel fabrication processes, fuel product properties, and irradiation performance. The primary objective of the test was to irradiate both UCO and UO2 TRISO (tri-structural isotropic) fuel produced from prototypic scale equipment to obtain normal operation and accident condition fuel performance data. The UCO compacts were subjected to a range of burnups and temperatures typical of anticipated prismatic reactor service conditions in three capsules. The test train also includes compacts containing UO2 particles produced independently by the United States, South Africa, and France in three separate capsules. The range of burnups and temperatures in these capsules were typical of anticipated pebble bed reactor service conditions. The results discussed in this report pertain only to U.S.-produced fuel. In order to achieve the test objectives, the AGR-2 experiment was irradiated in the B-12 position of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) for a total irradiation duration of 559.2 effective full power days (EFPD). Irradiation began on June 22, 2010, and ended on October 16, 2013, spanning 12 ATR power cycles and approximately three and a half calendar years. The test contained six independently controlled and monitored capsules. Each U.S. capsule contained 12 compacts of either UCO or UO2 AGR coated fuel. No fuel particles failed during the AGR-2 irradiation. Final burnup values on a per compact basis ranged from 7.26 to 13.15% FIMA (fissions per initial heavy-metal atom) for UCO fuel, and 9.01 to 10.69% FIMA for UO2 fuel, while fast fluence values ranged from 1.94 to 3.47 x 10{sup 25} n/m{sup 2} (E >0.18 MeV) for UCO fuel, and from 3.05 to 3.53 x 10{sup 25} n/m{sup 2} (E >0.18 MeV) for UO2 fuel. Time-average volume-average (TAVA) temperatures on a capsule basis at the end of irradiation ranged from 987{degree}C in Capsule 6 to 1296{degree}C in Capsule 2 for UCO, and from 996 to 1062{degree}C in UO2-fueled Capsule 3. By the end of the irradiation, all of the installed thermocouples (TCs) had failed. Fission product release-to-birth (R/B) ratios were quite low. In the UCO capsules, R/B values during the first three cycles were below 10{sup -6}, with the exception of the hotter Capsule 2, in which the R/Bs reached 2 x 10{sup -6}. In the UO2 capsule (Capsule 3), the R/B values during the first three cycles were below 10{sup -7}. R/B values for all following cycles are not reliable due to gas flow and cross-talk issues.

  20. AGR-1 Irradiation Test Final As-Run Report, Rev. 3

    SciTech Connect (OSTI)

    Collin, Blaise P.

    2015-01-01

    This document presents the as-run analysis of the AGR-1 irradiation experiment. AGR-1 is the first of eight planned irradiations for the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. Funding for this program is provided by the US Department of Energy (DOE) as part of the Next-Generation Nuclear Plant (NGNP) project. The objectives of the AGR-1 experiment are: 1. To gain experience with multi-capsule test train design, fabrication, and operation with the intent to reduce the probability of capsule or test train failure in subsequent irradiation tests. 2. To irradiate fuel produced in conjunction with the AGR fuel process development effort. 3. To provide data that will support the development of an understanding of the relationship between fuel fabrication processes, fuel product properties, and irradiation performance. In order to achieve the test objectives, the AGR-1 experiment was irradiated in the B-10 position of the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL) for a total duration of 620 effective full power days of irradiation. Irradiation began on December 24, 2006 and ended on November 6, 2009, spanning 13 ATR cycles and approximately three calendar years. The test contained six independently controlled and monitored capsules. Each capsule contained 12 compacts of a single type, or variant, of the AGR coated fuel. No fuel particles failed during the AGR-1 irradiation. Final burnup values on a per compact basis ranged from 11.5 to 19.6% FIMA, while fast fluence values ranged from 2.21 to 4.39 x 10{sup 25} n/m{sup 2} (E >0.18 MeV). Thermocouples performed well, failing at a lower rate than expected. At the end of the irradiation, nine of the originally planned 19 TCs were considered functional. Fission product release-to-birth (R/B) ratios were quite low. In most capsules, R/B values at the end of the irradiation were at or below 10{sup -7}, with only one capsule significantly exceeding this value. A maximum R/B of around 2 x 10{sup -7} was reached at the end of the irradiation in Capsule 5. Several shakedown issues were encountered and resolved during the first three cycles. These include the repair of minor gas line leaks; the repair of faulty gas line valves; the need to position moisture monitors in regions of low radiation fields for proper functioning; the enforcement of proper on-line data storage and backup; the need to monitor thermocouple performance; correcting for detector spectral gain shift; and a change in the mass flow rate range of the neon flow controllers.

  1. Search for Supersymmetry in the Dilepton Final State with Taus at CDF Run II

    SciTech Connect (OSTI)

    Forrest, Robert David; /California U., Davis

    2011-09-01

    This thesis presents the results of a search for chargino and neutralino supersymmetric particles yielding same-signed dilepton final states including one hadronically decaying tau lepton, using 6.0 fb{sup -1} of data collected by the CDF II detector. This signature is important in SUSY models where, at high tan {beta}, the branching ratio of charginos and neutralinos to tau leptons becomes dominant. We study event acceptance, lepton identification cuts, and efficiencies. We set limits on the production cross section as a function of SUSY particle mass for certain generic models.

  2. Effect of CNG start-gasoline run on emissions from a 3/4 ton pick-up truck

    SciTech Connect (OSTI)

    Springer, K.J.; Smith, L.R.; Dickinson, A.G.

    1994-10-01

    This paper describes experiments to determine the effect on exhaust emissions of starting on compressed natural gas (CNG) and then switching to gasoline once the catalyst reaches operating temperature. Carbon monoxide, oxides of nitrogen, and detailed exhaust hydrocarbon speciation data were obtained for dedicated CNG, then unleaded gasoline, and finally CNG start-gasoline run using the Federal Test Procedure at 24{degree}C and at -7{degree}C. The result was a reduction in emissions from the gasoline baseline, especially at -7{degree}C. It was estimated that CNG start - gasoline run resulted in a 71 percent reduction in potential ozone formation per mile. 3 refs., 6 figs., 11 tabs.

  3. Tomče Runčevski | Center for Gas Separations Relevant to Clean Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tomče Runčevski, Postdoctoral Researcher, Department of Chemistry, University of California, Berkeley. Email: runcevski@berkeley.edu. Phone: 510-708-2455. PhD in Materials Chemistry, Max Planck Institute for Solid State Research and Stuttgart University, Germany. BS and MS in Applied Chemistry, Ss. Cyril and Methodius University, Macedonia. EFRC Research: Structural characterization of materials is the key step in understanding their

  4. Model Analysis ToolKit

    Energy Science and Technology Software Center (OSTI)

    2015-05-15

    MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes: - define parameters - define observations - define model (python function) - define samplesets (sets of parameter combinations) Currently supported functionality includes: - forward model runs - Latin-Hypercube sampling of parameters - multi-dimensional parameter studies - parallel execution of parameter samples - model calibration using internal Levenberg-Marquardt algorithm - model calibration using lmfit package - model calibration using levmar package - Markov Chain Monte Carlo using pymc package MATK facilitates model analysis using: - scipy - calibration (scipy.optimize) - rpy2 - Python interface to R
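
    The sketch below reproduces two of the listed analysis steps, Latin-Hypercube sampling of parameters and Levenberg-Marquardt calibration, directly with SciPy on a toy exponential model; it does not use MATK's own interface, and the model, bounds, and synthetic data are illustrative assumptions.

      # Generic sketch of two of the analysis steps listed above, written directly
      # against SciPy rather than MATK's own interface (not reproduced here).
      import numpy as np
      from scipy.stats import qmc
      from scipy.optimize import least_squares

      def model(params, t):
          a, b = params
          return a * np.exp(-b * t)

      t_obs = np.linspace(0.0, 5.0, 20)
      y_obs = model([2.0, 0.7], t_obs) + np.random.normal(scale=0.02, size=t_obs.size)

      # Latin-Hypercube sample of the 2-D parameter space (a forward-run ensemble).
      sampler = qmc.LatinHypercube(d=2, seed=0)
      samples = qmc.scale(sampler.random(n=50), l_bounds=[0.1, 0.1], u_bounds=[5.0, 2.0])
      ensemble = np.array([model(p, t_obs) for p in samples])

      # Levenberg-Marquardt calibration against the observations.
      fit = least_squares(lambda p: model(p, t_obs) - y_obs, x0=[1.0, 1.0], method="lm")
      print("calibrated parameters:", fit.x)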

  5. Draft Test Plan for Brine Migration Experimental Studies in Run-of-Mine Salt Backfill

    SciTech Connect (OSTI)

    Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.; Boukhalfa, Hakim; Caporuscio, Florie Andre; Robinson, Bruce Alan

    2015-02-02

    The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.

  6. Community Climate System Model (CCSM) Experiments and Output Data

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    The CCSM website makes the source code of various versions of the model freely available and provides access to experiments that have been run and the resulting output data.

  7. Running Out of and Into Oil: Analyzing Global Oil Depletion and Transition Through 2050

    SciTech Connect (OSTI)

    Greene, D.L.

    2003-11-14

    This report presents a risk analysis of world conventional oil resource production, depletion, expansion, and a possible transition to unconventional oil resources such as oil sands, heavy oil and shale oil over the period 2000 to 2050. Risk analysis uses Monte Carlo simulation methods to produce a probability distribution of outcomes rather than a single value. Probability distributions are produced for the year in which conventional oil production peaks for the world as a whole and the year of peak production from regions outside the Middle East. Recent estimates of world oil resources by the United States Geological Survey (USGS), the International Institute of Applied Systems Analysis (IIASA), the World Energy Council (WEC) and Dr. C. Campbell provide alternative views of the extent of ultimate world oil resources. A model of oil resource depletion and expansion for twelve world regions is combined with a market equilibrium model of conventional and unconventional oil supply and demand to create a World Energy Scenarios Model (WESM). The model does not make use of Hubbert curves but instead relies on target reserve-to-production ratios to determine when regional output will begin to decline. The authors believe that their analysis has a bias toward optimism about oil resource availability because it does not attempt to incorporate political or environmental constraints on production, nor does it explicitly include geologic constraints on production rates. Global energy scenarios created by IIASA and WEC provide the context for the risk analysis. Key variables such as the quantity of undiscovered oil and rates of technological progress are treated as probability distributions, rather than constants. Analyses based on the USGS and IIASA resource assessments indicate that conventional oil production outside the Middle East is likely to peak sometime between 2010 and 2030. The most important determinants of the date are the quantity of undiscovered oil, the rate at which unconventional oil production can be expanded, and the rate of growth of reserves and enhanced recovery. Analysis based on data produced by Campbell indicates that the peak of non-Middle East production will occur before 2010. For total world conventional oil production, the results indicate a peak somewhere between 2020 and 2050. Key determinants of the peak in world oil production are the rate at which the Middle East region expands its output and the minimum reserves-to-production ratios producers will tolerate. Once world conventional oil production peaks, first oil sands and heavy oil from Canada, Venezuela and Russia, and later some other source such as shale oil from the United States must expand if total world oil consumption is to continue to increase. Alternative sources of liquid hydrocarbon fuels, such as coal or natural gas are also possible resources but not considered in this analysis nor is the possibility of transition to a hydrogen economy. These limitations were adopted to simplify the transition analysis. Inspection of the paths of conventional oil production indicates that even if world oil production does not peak before 2020, output of conventional oil is likely to increase at a substantially slower rate after that date. The implication is that there will have to be increased production of unconventional oil after that date if world petroleum consumption is to grow.
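
    A toy Monte Carlo sketch of the peak-timing logic described above: sample the remaining resource and demand growth, grow production until the reserves-to-production ratio falls to a minimum tolerated value, and collect the distribution of peak years. All numbers are illustrative assumptions, not WESM inputs.

      # Toy Monte Carlo of the peak-timing idea described above (all numbers are
      # illustrative placeholders): sample the remaining resource and demand growth,
      # then call the peak when the reserves-to-production ratio drops to a
      # minimum tolerated value.
      import numpy as np

      rng = np.random.default_rng(42)
      N = 10_000
      remaining = rng.triangular(1200.0, 2000.0, 3000.0, N)   # remaining resource, Gb
      growth = rng.normal(0.015, 0.005, N)                     # demand growth rate, 1/yr
      rp_min = 10.0                                            # minimum tolerated R/P, yr

      peak_years = np.full(N, 2050)
      for i in range(N):
          production, resource = 28.0, remaining[i]            # production in Gb/yr circa 2000
          for year in range(2000, 2051):
              resource -= production
              if resource / production <= rp_min:
                  peak_years[i] = year
                  break
              production *= 1.0 + growth[i]

      print("median peak year:", int(np.median(peak_years)))
      print("10th-90th percentile:", np.percentile(peak_years, [10, 90]).astype(int))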

  8. The modeling of RF stacking of protons in the Accumulator (Technical...

    Office of Scientific and Technical Information (OSTI)

    The modeling of RF stacking of protons in the Accumulator Citation Details In-Document Search Title: The modeling of RF stacking of protons in the Accumulator When the Run2...

  9. The fate of long-lived superparticles with hadronic decays after LHC Run 1

    SciTech Connect (OSTI)

    Liu, Zhen; Tweedie, Brock

    2015-06-08

    Supersymmetry searches at the LHC are both highly varied and highly constraining, but the vast majority are focused on cases where the final-stage visible decays are prompt. Scenarios featuring superparticles with detector-scale lifetimes have therefore remained a tantalizing possibility for sub-TeV SUSY, since explicit limits are relatively sparse. Nonetheless, the extremely low backgrounds of the few existing searches for collider-stable and displaced new particles facilitate recastings into powerful long-lived superparticle searches, even for models for which those searches are highly non-optimized. In this paper, we assess the status of such models in the context of baryonic R-parity violation, gauge mediation, and mini-split SUSY. We explore a number of common simplified spectra where hadronic decays can be important, employing recasts of LHC searches that utilize different detector systems and final-state objects. The LSP/NLSP possibilities considered here include generic colored superparticles such as the gluino and light-flavor squarks, as well as the lighter stop and the quasi-degenerate Higgsino multiplet motivated by naturalness. We find that complementary coverage over large swaths of mass and lifetime is achievable by superimposing limits, particularly from CMS's tracker-based displaced dijet search and heavy stable charged particle searches. Adding in prompt searches, we find many cases where a range of sparticle masses is now excluded from zero lifetime to infinite lifetime with no gaps. In other cases, the displaced searches furnish the only extant limits at any lifetime.

  10. The fate of long-lived superparticles with hadronic decays after LHC Run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Liu, Zhen; Tweedie, Brock

    2015-06-08

    Supersymmetry searches at the LHC are both highly varied and highly constraining, but the vast majority are focused on cases where the final-stage visible decays are prompt. Scenarios featuring superparticles with detector-scale lifetimes have therefore remained a tantalizing possibility for sub-TeV SUSY, since explicit limits are relatively sparse. Nonetheless, the extremely low backgrounds of the few existing searches for collider-stable and displaced new particles facilitate recastings into powerful long-lived superparticle searches, even for models for which those searches are highly non-optimized. In this paper, we assess the status of such models in the context of baryonic R-parity violation, gauge mediation, and mini-split SUSY. We then explore a number of common simplified spectra where hadronic decays can be important, employing recasts of LHC searches that utilize different detector systems and final-state objects. The LSP/NLSP possibilities considered here include generic colored superparticles such as the gluino and light-flavor squarks, as well as the lighter stop and the quasi-degenerate Higgsino multiplet motivated by naturalness. We find that complementary coverage over large swaths of mass and lifetime is achievable by superimposing limits, particularly from CMS’s tracker-based displaced dijet search and heavy stable charged particle searches. By adding in prompt searches, we find many cases where a range of sparticle masses is now excluded from zero lifetime to infinite lifetime with no gaps. In other cases, the displaced searches furnish the only extant limits at any lifetime.

  11. The fate of long-lived superparticles with hadronic decays after LHC Run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Liu, Zhen; Tweedie, Brock

    2015-06-08

    Supersymmetry searches at the LHC are both highly varied and highly constraining, but the vast majority are focused on cases where the final-stage visible decays are prompt. Scenarios featuring superparticles with detector-scale lifetimes have therefore remained a tantalizing possibility for sub-TeV SUSY, since explicit limits are relatively sparse. Nonetheless, the extremely low backgrounds of the few existing searches for collider-stable and displaced new particles facilitate recastings into powerful long-lived superparticle searches, even for models for which those searches are highly non-optimized. In this paper, we assess the status of such models in the context of baryonic R-parity violation, gauge mediation, and mini-split SUSY. We explore a number of common simplified spectra where hadronic decays can be important, employing recasts of LHC searches that utilize different detector systems and final-state objects. The LSP/NLSP possibilities considered here include generic colored superparticles such as the gluino and light-flavor squarks, as well as the lighter stop and the quasi-degenerate Higgsino multiplet motivated by naturalness. We find that complementary coverage over large swaths of mass and lifetime is achievable by superimposing limits, particularly from CMS's tracker-based displaced dijet search and heavy stable charged particle searches. Adding in prompt searches, we find many cases where a range of sparticle masses is now excluded from zero lifetime to infinite lifetime with no gaps. In other cases, the displaced searches furnish the only extant limits at any lifetime.

  12. A Precision Measurement of the W Boson Mass with 1 Inverse Femtobarn of DZero Run IIa Data

    SciTech Connect (OSTI)

    Osta, Jyotsna; /Notre Dame U.

    2009-10-01

    This thesis is a detailed presentation of a precision measurement of the mass of the W boson. It has been obtained by analyzing W {yields} e{nu} decays. The data used for this analysis was collected from 2002 to 2006 with the D0 detector, during Run IIa of the Fermilab Tevatron collider. It corresponds to a total integrated luminosity of 1 fb{sup -1}. With a sample of 499,830 W {yields} e{nu} candidate events, we obtain a mass measurement of M{sub W} = 80.401 {+-} 0.043 GeV. This is the most precise measurement from a single experiment to date.

  13. A Flexible Atmospheric Modeling Framework for the CESM

    SciTech Connect (OSTI)

    Randall, David; Heikes, Ross; Konor, Celal

    2014-11-12

    We have created two global dynamical cores based on the unified system of equations and Z-grid staggering on an icosahedral grid, which are collectively called UZIM (Unified Z-grid Icosahedral Model). The z-coordinate version (UZIM-height) can be run in hydrostatic and nonhydrostatic modes. The sigma-coordinate version (UZIM-sigma) runs in only hydrostatic mode. The super-parameterization has been included as a physics option in both models. The UZIM versions with the super-parameterization are called SUZI. With SUZI-height, we have completed aquaplanet runs. With SUZI-sigma, we are making aquaplanet runs and realistic climate simulations. SUZI-sigma includes realistic topography and a SiB3 model to parameterize the land-surface processes.

  14. NREL: Jobs and Economic Development Impacts (JEDI) Models - About JEDI

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Concentrating Solar Power (CSP) Model: The Jobs and Economic Development Impacts (JEDI) Concentrating Solar Power model allows users to estimate economic development impacts from CSP projects. JEDI CSP has default information that can be utilized to run a generic impacts analysis assuming industry averages. Model users are encouraged to enter as much project-specific data as possible. Download the JEDI CSP Model.

  15. Measurement of the top-quark mass in the tt dilepton channel using the full CDF Run II data set

    SciTech Connect (OSTI)

    Aaltonen, T.

    2015-08-06

    We present a measurement of the top-quark mass in events containing two leptons (electrons or muons) with a large transverse momentum, two or more energetic jets, and a transverse-momentum imbalance. We use the full proton-antiproton collision data set collected by the CDF experiment during the Fermilab Tevatron Run II at center-of-mass energy √s = 1.96 TeV, corresponding to an integrated luminosity of 9.1 fb⁻¹. A special observable is exploited for an optimal reduction of the dominant systematic uncertainty, associated with the knowledge of the absolute energy of the hadronic jets. The distribution of this observable in the selected events is compared to simulated distributions of tt dilepton signal and background. We measure a value for the top-quark mass of 171.5 ± 1.9 (stat) ± 2.5 (syst) GeV/c².
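
    When a single total uncertainty is wanted, the statistical and systematic components quoted above are conventionally combined in quadrature, under the assumption that they are uncorrelated:

      \[
        \delta m_t \;=\; \sqrt{1.9^2 + 2.5^2}\ \mathrm{GeV}/c^2 \;\approx\; 3.1\ \mathrm{GeV}/c^2
      \]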

  16. Free-running InGaAs single photon detector with 1 dark count per second at 10% efficiency

    SciTech Connect (OSTI)

    Korzh, B.; Walenta, N.; Lunghi, T.; Gisin, N.; Zbinden, H.

    2014-02-24

    We present a free-running single photon detector for telecom wavelengths based on a negative feedback avalanche photodiode (NFAD). A dark count rate as low as 1 cps was obtained at a detection efficiency of 10%, with an afterpulse probability of 2.2% for 20 µs of deadtime. This was achieved by using an active hold-off circuit and cooling the NFAD with a free-piston Stirling cooler down to temperatures of −110 °C. We integrated two detectors into a practical, 625 MHz clocked quantum key distribution system. Stable, real-time key distribution in the presence of 30 dB channel loss was possible, yielding a secret key rate of 350 bps.
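
    To see why a detector this quiet matters at 30 dB of channel loss, it helps to compare the expected signal detection rate with the dark count rate. The sketch below uses the figures quoted above plus a hypothetical mean photon number per pulse; it is a back-of-the-envelope illustration, not the authors' analysis.

      # Rough signal-versus-dark-count estimate at the quoted operating point.
      clock_hz = 625e6         # system clock rate from the abstract
      mu = 0.5                 # hypothetical mean photon number per pulse
      channel_loss_db = 30.0   # channel loss from the abstract
      efficiency = 0.10        # detector efficiency from the abstract
      dark_counts_per_s = 1.0  # dark count rate from the abstract

      transmittance = 10 ** (-channel_loss_db / 10)  # 30 dB -> 1e-3
      signal_clicks_per_s = clock_hz * mu * transmittance * efficiency

      # ~3e4 expected detections per second versus ~1 dark count per second,
      # which keeps the error rate low enough to distill a positive secret
      # key rate after sifting and post-processing.
      print(f"signal detections: {signal_clicks_per_s:.0f} /s")
      print(f"dark counts:       {dark_counts_per_s:.0f} /s")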

  17. Running Grid Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    support job submission via Grid interfaces. Remote job submission is based on Globus GRAM. Jobs can be submitted either to the fork jobmanager (default) which will fork and...
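
    As a concrete illustration of submitting through the fork jobmanager mentioned above, the sketch below wraps the classic globus-job-run GRAM client from Python. The gatekeeper contact string is a placeholder, and sites may expose different jobmanagers or client tools, so treat this as a pattern rather than site documentation.

      # Submit a trivial command to a GRAM gatekeeper's fork jobmanager.
      # The gatekeeper host below is a hypothetical placeholder.
      import subprocess

      GATEKEEPER = "gatekeeper.example.org/jobmanager-fork"

      def run_on_gatekeeper(command="/bin/hostname"):
          """Run `command` remotely via globus-job-run and return its stdout."""
          result = subprocess.run(
              ["globus-job-run", GATEKEEPER, command],
              capture_output=True, text=True, check=True,
          )
          return result.stdout

      print(run_on_gatekeeper())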

  18. Running Jobs Intermittently Slow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    happen to jobs having input/output on global file systems (project, globalhomes, globalscratch2). It could also happen to applications using shared libraries, or CCM jobs...

  19. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or IO dominates...
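
    A quick way to act on the memory-per-core point above is to divide the node's memory by the footprint of one rank and spread the job over more nodes with fewer ranks per node. The figures in the sketch are hypothetical examples, not the specifications of any particular system.

      # Hypothetical example: choose how many MPI ranks to place per node
      # once the per-rank memory footprint is known.
      node_memory_gb = 32.0     # hypothetical node memory
      cores_per_node = 24       # hypothetical cores per node
      rank_footprint_gb = 2.5   # measured or estimated memory per rank

      max_ranks_by_memory = int(node_memory_gb // rank_footprint_gb)
      ranks_per_node = min(cores_per_node, max_ranks_by_memory)
      print(f"use at most {ranks_per_node} ranks per node "
            f"({node_memory_gb / ranks_per_node:.1f} GB available to each)")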

  20. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options The available compilers on Hopper are PGI, Cray, Intel, GNU,...

  1. Running.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    IO * PROJECT * Sharing between people/systems * By request only File Systems % cc hello.c % ./a.out Hello, world * Login nodes are not intended for computation * No MPI...

  2. Running Jobs.ppt

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  3. Fall Run | Jefferson Lab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    exchanges of several cooling towers. The past several months have also seen the refurbishment of the Hall D refrigerator. This appears to have been a major success, and the...

  4. Running Interactive Batch Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    if the node of choice is not immediately available. Start an interactive session in the debug queue with qsh -l debug1 -now no or qlogin -l debug1 -now no. This is useful when the cluster...

  5. NREL: Jobs and Economic Development Impacts (JEDI) Models - About JEDI

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Photovoltaics Model Photovoltaics Model The Jobs and Economic Development Impacts (JEDI) Photovoltaics (PV) model allows users to estimate economic development impacts from PV projects. JEDI PV has default information that can be utilized to run a generic impacts analysis assuming industry averages. Model users are encouraged to enter as much project-specific data as possible. The PV JEDI model is designed for use on a PC and has very limited functionality on a Mac. However, this model is

  6. Trial-Run of a Junction-Box Attachment Test for Use in Photovoltaic Module Qualification: Preprint

    SciTech Connect (OSTI)

    Miller, D. C.; Deibert, S. L.; Wohlgemuth, J. H.

    2014-06-01

    Engineering robust adhesion of the junction box (j-box) is a hurdle typically encountered by photovoltaic module manufacturers during product development and manufacturing process control. There are historical incidents of adverse effects (e.g., fires) caused when the j-box/adhesive/module system has failed in the field. The addition of a weight to the j-box during the 'damp-heat,' 'thermal-cycle,' or 'creep' tests within the IEC qualification protocol is proposed to verify the basic robustness of the adhesion system. The details of the proposed test are described, in addition to a trial-run of the test procedure. The described experiments examine four moisture-cured silicones, four foam tapes, and a hot-melt adhesive used in conjunction with glass, KPE, THV, and TPE substrates. For the purpose of validating the experiment, j-boxes were adhered to a substrate, loaded with a prescribed weight, and then subjected to aging. The replicate mock-modules were aged in an environmental chamber (at 85 degrees C/85% relative humidity for 1000 hours; then 100 degrees C/<10% relative humidity for 200 hours) or fielded in Golden (CO), Miami (FL), and Phoenix (AZ) for one year. Attachment strength tests, including pluck and shear test geometries, were also performed on smaller component specimens.

  7. Critical Infrastructure Modeling System

    Energy Science and Technology Software Center (OSTI)

    2004-10-01

    The Critical Infrastructure Modeling System (CIMS) is a 3D modeling and simulation environment designed to assist users in the analysis of dependencies within individual infrastructure and also interdependencies between multiple infrastructures. Through visual cuing and textual displays, a user can evaluate the effect of system perturbation and identify the emergent patterns that evolve. These patterns include possible outage areas from a loss of power, denial of service or access, and disruption of operations. Method of Solution: CIMS allows the user to model a system, create an overlay of information, and create 3D representative images to illustrate key infrastructure elements. A geo-referenced scene, satellite, aerial images or technical drawings can be incorporated into the scene. Scenarios of events can be scripted, and the user can also interact during run time to alter system characteristics. CIMS operates as a discrete event simulation engine feeding a 3D visualization.
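
    The "discrete event simulation engine" mentioned above refers to the general technique of advancing the simulation clock from one scheduled event to the next rather than in fixed time steps. The toy scheduler below illustrates only that pattern; it is not CIMS's actual engine or interface, and the event names are invented.

      # Toy discrete-event loop: events sit in a time-ordered queue and the
      # clock jumps from one event to the next.
      import heapq

      def run(events):
          """events: iterable of (time, description) tuples."""
          queue = list(events)
          heapq.heapify(queue)
          while queue:
              clock, what = heapq.heappop(queue)
              print(f"t={clock:6.1f}  {what}")
              # A real engine would update infrastructure state here and
              # schedule follow-on events (e.g., cascading outages).

      run([(10.0, "substation A offline"),
           (12.5, "water pumping station loses power"),
           (30.0, "backup generator starts")])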

  8. Data Plots of Run I - III Results from SLAC E-158: A precision Measurement of the Weak Mixing Angle in Moller Scattering

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Three physics runs were made in 2002 and 2003 by E-158. As a result, the E-158 Collaboration announced that it had made "the first observation of Parity Violation in electron-electron (Moller) scattering." This precise Parity Violation measurement gives the best determination of the electron's weak charge at low energy (low momentum transfer between interacting particles). E158's measurement tests the predicted running (or evolution) of this weak charge with energy, and searches for new phenomena at TeV energy scales (one thousand times the proton-mass energy scale). [Copied from the experiment's public home page at http://www-project.slac.stanford.edu/3158/Default.htm] See also the E158 page for collaborators at http://www.slac.stanford.edu/exp/e158/. Both websites provide data and detailed information.
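
    For orientation, the electron's weak charge probed by E-158 is fixed at tree level by the weak mixing angle; the expression below is the standard tree-level relation, quoted only for context. Because sin²θ_W is numerically close to 1/4, Q_W^e is small and therefore very sensitive to the radiative corrections that drive its running with energy.

      \[
        Q_W^{e} \;=\; -1 + 4\sin^{2}\theta_W
      \]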

  9. Search for Excited or Exotic Electron Production Using the Dielectron + Photon Signature at CDF in Run II

    SciTech Connect (OSTI)

    Gerberich, Heather Kay; /Duke U.

    2004-07-01

    The author presents a search for excited or exotic electrons decaying to an electron and a photon with high transverse momentum. An oppositely charged electron is produced in association with the excited electron, yielding a final state dielectron + photon signature. The discovery of excited electrons would be a first indication of lepton compositeness. They use {approx} 202 pb{sup -1} of data collected in p{bar p} collisions at {radical}s = 1.96 TeV with the Collider Detector at Fermilab during March 2001 through September 2003. The data are consistent with standard model expectations. Upper limits are set on the experimental cross-section {sigma}({bar p}p {yields} ee* {yields} ee{gamma}) at the 95% confidence level in a contact-interaction model and a gauge-mediated interaction model. Limits are also presented as exclusion regions in the parameter space of the excited electron mass (M{sub e*}) and the compositeness energy scale ({Lambda}). In the contact-interaction model, for which there are no previously published limits, they find M{sub e*} < 906 GeV is excluded for M{sub e*} = {Lambda}. In the gauge-mediated model, the exclusion region in the M{sub e*} versus the phenomenological coupling f/{Lambda} parameter space is extended to M{sub e*} < 430 GeV for f/{Lambda} {approx} 10{sup -2} GeV{sup -1}. In comparison, other experiments have excluded M{sub e*} < 280 GeV for f/{Lambda} {approx} 10{sup -2} GeV{sup -1}.

  10. Summary of the ATLAS experiment’s sensitivity to supersymmetry after LHC Run 1 -- interpreted in the phenomenological MSSM

    SciTech Connect (OSTI)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Aben, R.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Basye, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bevan, A. J.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Bieniek, S. P.; Biesuz, N. V.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bjergaard, D. 
M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J. -B.; Blanco, J. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozic, I.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Bruschi, M.; Bruscino, N.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabrera Urbán, S.; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B. C.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Cogan, J. 
G.; Colasurdo, L.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cúth, J.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D’Auria, S.; D’Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell’Acqua, A.; Dell’Asta, L.; Dell’Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Edson, W.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. 
J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Fernandez Perez, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, G.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Flowerdew, M. J.; Formica, A.; Forti, A.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; French, S. T.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; Garberson, F.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Goddard, J. R.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Grahn, K-J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J. -F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Hall, D.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. 
C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, L.; Hejbal, J.; Helary, L.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Hengler, C.; Henkelmann, S.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Hernández Jiménez, Y.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohlfeld, M.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn’ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S. -C.; Hu, D.; Hu, Q.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Iturbe Ponce, J. M.; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanty, L.; Jejelava, J.; Jeng, G. -Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, Y.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jussel, P.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kajomovitz, E.; Kalderon, C. W.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharlamov, A. G.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. 
M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E. -E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lambourne, L.; Lammers, S.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Leroy, C.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, A.; Leyko, A. M.; Leyton, M.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Looper, K. 
A.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Machado Miguens, J.; Macina, D.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mann, A.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mantoani, M.; Mapelli, L.; March, L.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, M.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Mellado Garcia, B. R.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J-P.; Meyer, J.; Meyer Zu Theenhausen, H.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Monden, R.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morton, A.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Murillo Quijada, J. A.; Murray, W. J.; Musheghyan, H.; Musto, E.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. 
M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nuti, F.; O’Brien, B. J.; O’grady, F.; O’Neil, D. C.; O’Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paganis, E.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Pan, Y. B.; St. Panagiotopoulou, E.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passaggio, S.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Penc, O.; Peng, C.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrucci, F.; Pettersson, N. E.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pina, J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Pizio, C.; Plazak, L.; Pleier, M. -A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pralavorio, P.; Pranko, A.; Prasad, S.; Prell, S.; Price, D.; Price, L. 
E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzo, T. G.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Saddique, A.; Sadrozinski, H. F-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Salazar Loyola, J. E.; Saleem, M.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H. -C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciacca, F. 
G.; Scifo, E.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosa, D.; Sosebee, M.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Spearman, W. R.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; St. Denis, R. D.; Stabile, A.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Tannoury, N.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, F. E.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. 
P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thun, R. P.; Tibbetts, M. J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloce, L. M.; Veloso, F.; Velz, T.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Wharton, A. M.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. 
A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamada, M.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yao, W-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zwalinski, L.

    2015-10-21

    A summary of the constraints from the ATLAS experiment on R-parity-conserving supersymmetry is presented. Results from 22 separate ATLAS searches are considered, each based on analysis of up to 20.3 fb⁻¹ of proton-proton collision data at centre-of-mass energies of √s = 7 and 8 TeV at the Large Hadron Collider. The results are interpreted in the context of the 19-parameter phenomenological minimal supersymmetric standard model, in which the lightest supersymmetric particle is a neutralino, taking into account constraints from previous precision electroweak and flavour measurements as well as from dark matter related measurements. The results are presented in terms of constraints on supersymmetric particle masses and are compared to limits from simplified models. The impact of ATLAS searches on parameters such as the dark matter relic density, the couplings of the observed Higgs boson, and the degree of electroweak fine-tuning is also shown. As a result, spectra for surviving supersymmetry model points with low fine-tuning are presented.

  11. Summary of the ATLAS experiment’s sensitivity to supersymmetry after LHC Run 1 -- interpreted in the phenomenological MSSM

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Aben, R.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; et al

    2015-10-21

    A summary of the constraints from the ATLAS experiment on R-parity-conserving supersymmetry is presented. Results from 22 separate ATLAS searches are considered, each based on analysis of up to 20.3 fb⁻¹ of proton-proton collision data at centre-of-mass energies of √s = 7 and 8 TeV at the Large Hadron Collider. The results are interpreted in the context of the 19-parameter phenomenological minimal supersymmetric standard model, in which the lightest supersymmetric particle is a neutralino, taking into account constraints from previous precision electroweak and flavour measurements as well as from dark matter related measurements. The results are presented in terms of constraints on supersymmetric particle masses and are compared to limits from simplified models. The impact of ATLAS searches on parameters such as the dark matter relic density, the couplings of the observed Higgs boson, and the degree of electroweak fine-tuning is also shown. As a result, spectra for surviving supersymmetry model points with low fine-tuning are presented.

  12. Search for Third Generation Squarks in the Missing Transverse Energy plus Jet Sample at CDF Run II

    SciTech Connect (OSTI)

    Vidal Marono, Miguel; /Madrid, CIEMAT /Madrid U.

    2010-03-01

    The twentieth century leaves behind one of the most impressive legacies of human knowledge ever achieved. In particular, the Standard Model (SM) of particle physics has proven to be one of the most accurate descriptions of Nature; the level of accuracy of some of its theoretical predictions has never been attained before. It describes the electromagnetic interaction and the weak and strong forces, with the Lagrangian developed from symmetry principles. In the framework of the Standard Model there are two different types of fundamental constituents of Nature: bosons and fermions. Bosons are the particles responsible for carrying the interactions among the fermions, which constitute matter. Fermions are divided into six quarks and six leptons, forming a three-fold structure. All of these fermions and bosons have antimatter partners. However, several difficulties suggest that the Standard Model is only an effective low-energy theory. These limitations include the difficulty of incorporating gravity and the lack of justification for the fine tuning of some perturbative corrections. Moreover, some aspects of the theory are not understood, such as the mass spectrum of the Standard Model or the mechanism of electroweak symmetry breaking. Supersymmetry is a newer theoretical framework intended to address the problems found in the Standard Model while preserving all of its predictive power. It introduces a new symmetry that relates a new boson to each SM fermion and a new fermion to each SM boson. In this way, for every existing boson in the SM there must exist a fermionic super-partner (named with the suffix -ino), and likewise, for every fermion a bosonic super-partner (named with the prefix s-) must also exist. Moreover, another symmetry called R-parity is introduced to prevent baryon- and lepton-number-violating interactions. If R-parity is conserved, super-particles can only be pair-produced and cannot decay entirely into SM particles. This implies the existence of a lightest SUSY particle (LSP), which would provide a candidate for the cold dark matter that accounts for 23% of the universe's content, as strongly suggested by recent astrophysical data [1]. The Tevatron is a hadron collider operating at Fermilab, USA. This accelerator provides proton-antiproton (p{bar p}) collisions with a center-of-mass energy of {radical}s = 1.96 TeV. CDF and D0 are the detectors built to analyse the products of the collisions provided by the Tevatron. Both experiments have produced a very significant scientific output in the last few years, such as the discovery of the top quark and the measurement of B{sub s} mixing. The Tevatron experiments are also reaching sensitivity to the SM Higgs boson. The scientific program of CDF includes a broad spectrum of searches for physics signatures beyond the Standard Model. The Tevatron is still the energy frontier, which means a unique opportunity to produce a discovery in physics beyond the Standard Model. The analyses presented in this thesis focus on the search for third-generation squarks in the missing transverse energy plus jets final state. The production of sbottom ({tilde b}) and stop ({tilde t}) quarks could be highly enhanced at the Tevatron, offering the possibility of discovering new physics or limiting the parameter space available in the theory. No signal is found over the predicted Standard Model background in either search.
    Instead, 95% confidence level limits are set on the production cross sections and then translated into the mass plane of the hypothetical particles. This thesis sketches the basic theoretical concepts of the Standard Model and the Minimal Supersymmetric Extension in Chapter 2. Chapter 3 describes the Tevatron and CDF. Based on the CDF subsystem information, Chapters 4 and 5 describe the analysis object reconstruction and the heavy-flavor tagging tools. The development of the analyses is shown in Chapters 6 and 7. Finally, Chapter 8 is devoted to discussing the results and conclusions of this work, and future prospects.

  13. Directory of Energy Information Administration Models 1993

    SciTech Connect (OSTI)

    Not Available

    1993-07-06

    This directory contains descriptions of each model, including the title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 35 EIA models active as of May 1, 1993. Models that run on personal computers are identified by "PC" as part of the acronym. EIA is developing a new model, the National Energy Modeling System (NEMS), and is making changes to existing models to include new technologies, environmental issues, conservation, and renewables, as well as to extend the forecast horizon. Other parts of the Department are involved in this modeling effort. A fully operational model is planned which will integrate completed segments of NEMS for its first official application--preparation of EIA's Annual Energy Outlook 1994. Abstracts for the new models will be included in next year's version of this directory.

  14. EMMA: Electromechanical Modeling in ALEGRA

    SciTech Connect (OSTI)

    1996-12-31

    To ensure high levels of deterrent capability in the 21st century, new stockpile stewardship principles are being embraced at Sandia National Laboratories. The Department of Energy Accelerated Strategic Computing Initiative (ASCI) program is providing the computational capacity and capability as well as funding the system and simulation software infrastructure necessary to provide accurate, precise and predictive modeling of important components and devices. An important class of components requires modeling of piezoelectric and ferroceramic materials. The capability to run highly resolved simulations of these types of components on the ASCI parallel computers is being developed at Sandia in the ElectroMechanical Modeling in Alegra (EMMA) code. This is a simulation capability being developed at Sandia National Laboratories for high-fidelity modeling of electromechanical devices, which can produce electrical current arising from material changes due to shock impact or explosive detonation.

  15. Climate Model Output Rewriter

    Energy Science and Technology Software Center (OSTI)

    2004-06-21

    CMOR comprises a set of FORTRAN 90 functions that can be used to produce CF-compliant netCDF files. The structure of the files created by CMOR and the metadata they contain fulfill the requirements of many of the climate community's standard model experiments (which are referred to here as "MIPs", which stands for "model intercomparison project", including, for example, AMIP, CMIP, CFMIP, PMIP, APE, and IPCC scenario runs). CMOR was not designed to serve as an all-purpose writer of CF-compliant netCDF files, but simply to reduce the effort required to prepare and manage MIP data. Although MIPs encourage systematic analysis of results across models, this is only easy to do if the model output is written in a common format with files structured similarly and with sufficient metadata uniformly stored according to a common standard. Individual modeling groups store their data in different ways, but if a group can read its own data with FORTRAN, then it should easily be able to transform the data, using CMOR, into the common format required by the MIPs. The adoption of CMOR as a standard code for exchanging climate data will facilitate participation in MIPs because after learning how to satisfy the output requirements of one MIP, it will be easy to prepare output for the other MIPs.
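    CMOR is distributed as FORTRAN 90 functions, and later releases also ship Python bindings. The following is a minimal, hedged sketch of the typical rewrite workflow using those Python bindings; the table names, input files, and data below are placeholders, and exact argument lists differ between CMOR versions.
```python
# Hedged sketch of a typical CMOR rewrite workflow (Python bindings).
# Table/JSON file names and the data array are placeholders, not from the record.
import numpy as np
import cmor

cmor.setup(inpath="Tables", netcdf_file_action=cmor.CMOR_REPLACE)
cmor.dataset_json("experiment_metadata.json")   # experiment/global attributes
cmor.load_table("CMIP6_Amon.json")              # MIP table defining "tas"

lat = np.arange(-88.75, 90, 2.5)
lon = np.arange(1.25, 360, 2.5)
time_vals = np.array([15.5])                    # mid-month, days since reference

ilat = cmor.axis("latitude", units="degrees_north", coord_vals=lat,
                 cell_bounds=np.arange(-90, 90.1, 2.5))
ilon = cmor.axis("longitude", units="degrees_east", coord_vals=lon,
                 cell_bounds=np.arange(0, 360.1, 2.5))
itim = cmor.axis("time", units="days since 2000-01-01",
                 coord_vals=time_vals, cell_bounds=np.array([0.0, 31.0]))

var_id = cmor.variable("tas", units="K", axis_ids=[itim, ilat, ilon])
data = 288.0 + np.zeros((1, lat.size, lon.size))  # placeholder field
cmor.write(var_id, data)
cmor.close()
```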

  16. Thermal-nutritional regulation of functional groups in running water ecosystems. Technical progress report, October 1, 1978-November 1, 1980

    SciTech Connect (OSTI)

    Cummins, K.W.

    1980-11-01

    The research encompassed three general areas: (1) characterization of stream macroinvertebrate functional feeding groups (shredders, collectors, scrapers, and predators) based on morphological and behavioral adaptations and food-source-specific growth responses of selected species; (2) demonstration of the relative importance of temperature and food quality (in which maximum quality is defined as that producing the most growth) in controlling growth rate and survivorship of stream functional groups; and (3) derivation and refinement of conceptual and quantitative models of stream ecosystem structure and function, with particular emphasis on detrital processing. Verification of the functional group concept as a tool for assessing and predicting is reflected in alterations of the relative dominance of various functional groups. Food quality can strongly influence the growth rates of shredders, collectors and scrapers and override the effects of temperature in a number of cases. Gathering collectors may select food particles by size (or at least be restricted to a limited portion of the total range available) but representative species do not appear to select for quality.

  17. Argonne National Laboratory Smart Grid Technology Interactive Model

    SciTech Connect (OSTI)

    Ted Bohn

    2009-10-13

    As our attention turns to new cars that run partially or completely on electricity, how can we redesign our electric grid to not only handle the new load, but make electricity cheap and efficient for everyone? Argonne engineer Ted Bohn explains a model of a "smart grid" that gives consumers the power to choose their own prices and sources of electricity.

  18. Argonne National Laboratory Smart Grid Technology Interactive Model

    ScienceCinema (OSTI)

    Ted Bohn

    2010-01-08

    As our attention turns to new cars that run partially or completely on electricity, how can we redesign our electric grid to not only handle the new load, but make electricity cheap and efficient for everyone? Argonne engineer Ted Bohn explains a model of a "smart grid" that gives consumers the power to choose their own prices and sources of electricity.

  19. Generalized Environment for Modeling Systems

    Energy Science and Technology Software Center (OSTI)

    2012-02-07

    GEMS is an integrated environment that allows technical analysts, modelers, researchers, etc. to integrate and deploy models and/or decision tools with associated data to the internet for direct use by customers. GEMS does not require that the model developer know how to code or script and therefore delivers this capability to a large group of technical specialists. Customers gain the benefit of being able to execute their own scenarios directly without need for technical support. GEMS is a process that leverages commercial software products with specialized codes that add connectivity and unique functions to support the overall capability. Users integrate pre-existing models with a commercial product and store parameters and input trajectories in a companion commercial database. The model is then exposed into a commercial web environment and a graphical user interface (GUI) is applied by the model developer. Users execute the model through the web-based GUI, and GEMS manages supply of proper inputs, execution of models, routing of data to models, and display of results back to users. GEMS works in layers; the following description is from the bottom up. Modelers create models in the modeling tool of their choice, such as Excel, Matlab, or Fortran. They can also use models from a library of previously wrapped legacy codes (models). Modelers integrate the models (or a single model) by wrapping and connecting the models using the Phoenix Integration tool entitled ModelCenter. Using a ModelCenter/SAS plugin (DOE copyright CW-10-08), the modeler gets data from either an SAS or SQL database and sends results back to SAS or SQL. Once the model is working properly, the ModelCenter file is saved and stored in a folder location to which a SharePoint server tool created at INL is pointed. This enables the ModelCenter model to be run from SharePoint. The modeler then goes into Microsoft SharePoint and creates a graphical user interface (GUI) using the ModelCenter WebPart (CW-12-04) created at INL to work inside SharePoint. The GUI tool links slider bars and drop-downs to specific inputs and outputs of the ModelCenter model that is executable from SharePoint. The modeler also creates in SAS dashboards, graphs, and tables that are exposed by links and SAS and ModelCenter Web Parts into the SharePoint system. The user can then log into SharePoint, move slider bars and select drop-down lists to configure the model parameters, click to run the model, and then view the output results that are based on their particular input choices. The main point is that GEMS eliminates the need for a programmer to connect and create the web artifacts necessary to implement and deliver an executable model or decision aid to customers.

  20. Detailed Physical Trough Model for NREL's Solar Advisor Model: Preprint

    SciTech Connect (OSTI)

    Wagner, M. J.; Blair, N.; Dobos, A.

    2010-10-01

    Solar Advisor Model (SAM) is a free software package made available by the National Renewable Energy Laboratory (NREL), Sandia National Laboratories, and the US Department of Energy. SAM contains hourly system performance and economic models for concentrating solar power (CSP) systems, photovoltaic, solar hot-water, and generic fuel-use technologies. Versions of SAM prior to 2010 included only the parabolic trough model based on Excelergy. This model uses top-level empirical performance curves to characterize plant behavior, and thus is limited in predictive capability for new technologies or component configurations. To address this and other functionality challenges, a new trough model, derived from physical first principles, was commissioned to supplement the Excelergy-based empirical model. This new 'physical model' approaches the task of characterizing the performance of the whole parabolic trough plant by replacing empirical curve-fit relationships with more detailed calculations where practical. The resulting model matches the annual performance of the SAM empirical model (which has been previously verified with plant data) while maintaining run-times compatible with parametric analysis, adding additional flexibility in modeled system configurations, and providing more detailed performance calculations in the solar field, power block, piping, and storage subsystems.

  1. ISTUM PC: industrial sector technology use model for the IBM-PC

    SciTech Connect (OSTI)

    Roop, J.M.; Kaplan, D.T.

    1984-09-01

    A project to improve and enhance the Industrial Sector Technology Use Model (ISTUM) was originated in the summer of 1983. The project had six identifiable objectives: update the data base; improve run-time efficiency; revise the reference base case; conduct case studies; provide technical and promotional seminars; and organize a service bureau. This interim report describes which of these objectives have been met and which tasks remain to be completed. The most dramatic achievement has been in the area of run-time efficiency. From a model that required a large proportion of the total resources of a mainframe computer and a great deal of effort to operate, the current version of the model (ISTUM-PC) runs on an IBM Personal Computer. The reorganization required for the model to run on a PC has additional advantages: the modular programs are somewhat easier to understand and the data base is more accessible and easier to use. A simple description of the logic of the model is given in this report. To generate the necessary funds for completion of the model, a multiclient project is proposed. This project will extend the industry coverage to all the industrial sectors, including the construction of process flow models for chemicals and petroleum refining. The project will also calibrate this model to historical data and construct a base case and alternative scenarios. The model will be delivered to clients and training provided. 2 references, 4 figures, 3 tables.

  2. Toolkit Model for SN-03 Final Proposal (ratecases/sn03)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Model SN CRAC ToolKit Model Variable, Flat SN CRAC, 80% TPP (TK187SN-03FS3BPA-PropVariableFlatSNN24-Jun-03.xls, 3.1 MB) Data Input Files (required to run the above...

  3. Wave Tank Testing and Model Validation … An Integrated Approach

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wave Tank Testing and Model Validation - Lessons Learned, Mirko Previsic, 7-7-12. [Slide content: Representing the Full-Scale System; Appropriate Modeling of Physics] Run-time is important to make a model useful as an engineering and/or optimization tool. Have to be selective about how the physics is represented in the model; different physical phenomena are important to different WEC devices. Subscale modeling helps us understand and

  4. Mathematical Models Shed New Light on Cancer Mutations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematical Models Shed New Light on Cancer Mutations: Calculations Run at NERSC Pinpoint Rare Mutants More Quickly. November 3, 2014. Contact: David Cameron, 617.432.0441, david_cameron@hms.harvard.edu. [Figure: Heat map of the average magnitude of interaction energies projected onto a structural representation of SH2 domains (white) in complex with phosphopeptide (green).] SH2 (Src Homology 2) is a protein domain found in many

  5. Experimental and Modelling Study of the Effect of Diffusional Limitations

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    on the NH3 SCR Activity | Department of Energy. Simulations of different feed conditions and temperature variations were compared to experimental data collected in validation runs at the monolith scale; a direct comparison was made of experimental data on the same catalyst in the two configurations (powder vs. monolith).

  6. Systems Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Model Nambe Pueblo Water Budget Model Hydrogen Futures Simulations Model Barton Springs ... & Analysis Project Algae Biofuels Techno-Economic Modeling and Analysis Project Climate ...

  7. Borehole Fluid Conductivity Model

    Energy Science and Technology Software Center (OSTI)

    2004-03-15

    Dynamic wellbore electrical conductivity logs provide a valuable means to determine the flow characteristics of fractures intersecting a wellbore, in order to study the hydrologic behavior of fractured rocks. To expedite the analysis of log data, a computer program called BORE II has been developed that considers multiple inflow or outflow points along the wellbore, including the case of horizontal flow across the wellbore. BORE II calculates the evolution of fluid electrical conductivity (FEC) profiles in a wellbore or wellbore section, which may be pumped at a low rate, and compares model results to log data in a variety of ways. FEC variations may arise from inflow under natural-state conditions or due to tracer injected in a neighboring well (interference tests). BORE II has an interactive, graphical user interface and runs on a personal computer under the Windows operating system. BORE II is a modification and extension of older codes called BORE and BOREXT, which considered inflow points only. The numerical method is a finite difference solution of the one-dimensional advection-diffusion equation with explicit time stepping; feed points are treated as prescribed-mass sources or sinks, and a quadratic relationship between fluid electrical conductivity and ion concentration is assumed. The graphical user interface allows interactive modification of model parameters and graphical display of model results and field data in a variety of ways. The program can examine horizontal flow or an arbitrarily complicated combination of upflow, downflow, and horizontal flow. Feed point flow rate and/or concentration may vary in time.
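    The numerical approach described above, an explicit finite difference solution of the one-dimensional advection-diffusion equation with feed points treated as sources, can be illustrated with a generic sketch. This is not the BORE II implementation; the grid, parameters, single feed point, and quadratic FEC(c) coefficients below are illustrative assumptions.
```python
# Generic explicit finite-difference sketch of 1D advection-diffusion with a
# source term at a feed point -- illustrative only, not the BORE II code.
import numpy as np

nz, L = 200, 100.0          # grid points, wellbore section length (m)
dz = L / (nz - 1)
v, D = 1.0e-3, 1.0e-4       # upflow velocity (m/s), dispersion coefficient (m^2/s)
dt = 0.25 * min(dz / abs(v), dz**2 / (2 * D))   # conservative explicit time step

c = np.zeros(nz)            # ion concentration profile
feed_idx, feed_rate = 50, 1.0e-4                # feed point location, source strength

for step in range(20000):
    dcdz = (c[1:-1] - c[:-2]) / dz              # upwind advection (v > 0)
    d2cdz2 = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    c[1:-1] += dt * (-v * dcdz + D * d2cdz2)
    c[feed_idx] += dt * feed_rate               # prescribed-mass source at feed point
    c[0], c[-1] = c[1], c[-2]                   # simple zero-gradient boundaries

# Assumed quadratic relationship between FEC and ion concentration (coefficients
# are placeholders, not calibrated values).
fec = 0.2 + 1.8e3 * c + 5.0e4 * c**2
```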

  8. 06 Run R1.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  9. NREL: Jobs and Economic Development Impacts (JEDI) Models - About JEDI Wind

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Jobs and Economic Development Impacts (JEDI) Wind model allows the user to estimate economic development impacts from wind power generation projects. JEDI Wind has default information that can be used to run a generic impacts analysis assuming wind industry averages. Model users are encouraged to enter as much project-specific data as possible. User inputs specific to JEDI Wind include: construction materials and labor costs; turbine, tower, and blade costs; and local content

  10. Technical Review of the CENWP Computational Fluid Dynamics Model of the John Day Dam Forebay

    SciTech Connect (OSTI)

    Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

    2010-12-01

    The US Army Corps of Engineers Portland District (CENWP) has developed a computational fluid dynamics (CFD) model of the John Day forebay on the Columbia River to aid in the development and design of alternatives to improve juvenile salmon passage at the John Day Project. At the request of CENWP, Pacific Northwest National Laboratory (PNNL) Hydrology Group has conducted a technical review of CENWP's CFD model run in CFD solver software, STAR-CD. PNNL has extensive experience developing and applying 3D CFD models run in STAR-CD for Columbia River hydroelectric projects. The John Day forebay model developed by CENWP is adequately configured and validated. The model is ready for use simulating forebay hydraulics for structural and operational alternatives. The approach and method are sound, however CENWP has identified some improvements that need to be made for future models and for modifications to this existing model.

  11. Documentation and Instructions for Running Two Python Scripts that Aid in Setting up 3D Measurements using the Polytec 3D Scanning Laser Doppler Vibrometer.

    SciTech Connect (OSTI)

    Rohe, Daniel Peter

    2015-08-24

    Sandia National Laboratories has recently purchased a Polytec 3D Scanning Laser Doppler Vibrometer for vibration measurement. This device has proven to be a very nice tool for making vibration measurements, and has a number of advantages over traditional sensors such as accelerometers. The non-contact nature of the laser vibrometer means there is no mass loading due to measuring the response. Additionally, the laser scanning heads can position the laser spot much more quickly and accurately than placing an accelerometer or performing a roving hammer impact. The disadvantage of the system is that a significant amount of time must be invested to align the lasers with each other and the part so that the laser spots can be accurately positioned. The Polytec software includes a number of nice tools to aid in this procedure; however, certain portions are still tedious. Luckily, the Polytec software is readily extensible by programming macros for the system, so tedious portions of the procedure can be made easier by automating the process. The Polytec software includes a WinWrap (similar to Visual Basic) editor and interface to run macros written in that programming language. The author, however, is much more proficient in Python, and the latter also has a much larger set of libraries that can be used to create very complex macros, while taking advantage of Python's inherent readability and maintainability.

  12. Software enhancements to the IVSEM model of the CTBTO IMS.

    SciTech Connect (OSTI)

    Damico, Joseph P.

    2011-03-01

    Sandia National Laboratories (SNL) developed the Integrated Verification System Evaluation Model (IVSEM) to estimate the performance of the International Monitoring System (IMS) operated by the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO). IVSEM was developed in several phases between 1995 and 2000. The model was developed in FORTRAN with an IDL-based user interface and was compiled for Windows and UNIX operating systems. Continuing interest in this analysis capability, coupled with numerous advances in desktop computer hardware and software since IVSEM was written, enabled significant improvements to IVSEM run-time performance and data analysis capabilities. These improvements were implemented externally without modifying the FORTRAN executables, which had been previously verified. This paper describes the parallelization approach developed to significantly reduce IVSEM run-times and the new test setup and analysis tools developed to facilitate better IVSEM operation.
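    The paper's approach of reducing run-times externally, without modifying the previously verified FORTRAN executables, can be sketched as launching many independent runs in parallel from a wrapper script. The executable name, input-deck naming, and invocation below are hypothetical placeholders, not the actual IVSEM tooling.
```python
# Hedged sketch: run an unmodified executable over many input decks in parallel.
# "ivsem.exe" and the input/output naming convention are hypothetical.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def run_case(input_deck: Path) -> int:
    """Launch one run of the verified executable on a single input deck."""
    out_dir = input_deck.with_suffix("")          # e.g. case_01.inp -> case_01/
    out_dir.mkdir(exist_ok=True)
    result = subprocess.run(
        ["ivsem.exe", str(input_deck.resolve())],  # hypothetical invocation
        cwd=out_dir, capture_output=True, text=True
    )
    (out_dir / "run.log").write_text(result.stdout + result.stderr)
    return result.returncode

if __name__ == "__main__":
    decks = sorted(Path("cases").glob("*.inp"))
    with ProcessPoolExecutor(max_workers=8) as pool:
        codes = list(pool.map(run_case, decks))
    print(f"{codes.count(0)}/{len(codes)} runs completed successfully")
```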

  13. A Bayesian Measurement Error Model for Misaligned Radiographic Data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lennox, Kristin P.; Glascoe, Lee G.

    2013-09-06

    An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
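    One common way to write a regression model with both additive and multiplicative measurement error and heteroscedastic noise is sketched below; this is a generic form for illustration, not necessarily the authors' exact specification.
```latex
\[
  y_{ij} = \bigl(1 + \delta_{ij}\bigr)\, f(x_{ij}) + \varepsilon_{ij},
  \qquad
  \delta_{ij} \sim \mathcal{N}\!\bigl(0, \sigma_\delta^2\bigr),
  \qquad
  \varepsilon_{ij} \sim \mathcal{N}\!\bigl(0, \sigma^2(x_{ij})\bigr),
\]
where $f$ is the unknown regression function (given a nonparametric prior),
$\delta_{ij}$ is the multiplicative error for repeat $j$ of sample $i$, and
$\sigma^2(\cdot)$ lets the additive noise variance change with the predictor.
```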

  14. Review of dWindDS Model Initial Results; NREL (National Renewable Energy Laboratory)

    SciTech Connect (OSTI)

    Baring-Gould, Ian; Gleason, Michael; Preus, Robert; Sigrin, Ben

    2015-06-17

    The dWindDS model analyses the market diffusion of distributed wind generation for behind-the-meter applications. It is consumer-decision based and uses a variety of data sets, including a high-resolution wind data set. It projects market development through 2050 based on inputs specified by the user. This presentation covers some initial runs with draft base case assumptions.

  15. Ice Sheet Model Reveals Most Comprehensive Projections for West

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Antarctica's Future: BISICLES Simulations Run at NERSC Help Estimate Ice Loss, Sea Level Rise. August 18, 2015. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. [Figure: Ice sheet retreat in the Amundsen Sea Embayment in 2154 (Credit: Cornford et al., The Cryosphere, 2015)] A new international study is the first to use a high-resolution, large-scale

  16. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  17. Programming models

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Task-based models and abstractions (such as offered by CHARM++, Legion and HPX, for example) offer many attractive features for mapping computations onto...

  18. THE LOS ALAMOS NATIONAL LABORATORY ATMOSPHERIC TRANSPORT AND DIFFUSION MODELS

    SciTech Connect (OSTI)

    M. WILLIAMS

    1999-08-01

    The LANL atmospheric transport and diffusion models are composed of two state-of-the-art computer codes. The first is an atmospheric wind model called HOTMAC, Higher Order Turbulence Model for Atmospheric Circulations. HOTMAC generates wind and turbulence fields by solving a set of atmospheric dynamic equations. The second is an atmospheric diffusion model called RAPTAD, Random Particle Transport And Diffusion. RAPTAD uses the wind and turbulence output from HOTMAC to compute particle trajectories and concentration at any location downwind from a source. Both of these models, originally developed as research codes on supercomputers, have been modified to run on microcomputers. Because the capability of microcomputers is advancing so rapidly, the expectation is that they will eventually become as good as today's supercomputers. Now both models are run on desktop or deskside computers, such as an IBM PC/AT with an Opus Pm 350-32 bit coprocessor board and a SUN workstation. Codes have also been modified so that high level graphics, NCAR Graphics, of the output from both models are displayed on the desktop computer monitors and plotted on a laser printer. Two programs, HOTPLT and RAPLOT, produce wind vector plots of the output from HOTMAC and particle trajectory plots of the output from RAPTAD, respectively. A third, CONPLT, provides concentration contour plots. Section II describes step-by-step operational procedures, specifically for a SUN-4 deskside computer, on how to run the main programs HOTMAC and RAPTAD, and the graphics programs to display the results. Governing equations, boundary conditions and initial values of HOTMAC and RAPTAD are discussed in Section III. Finite-difference representations of the governing equations, numerical solution procedures, and a grid system are given in Section IV.

  19. Lifecycle Model

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    1997-05-21

    This chapter describes the lifecycle model used for the Departmental software engineering methodology.

  20. Systematic approach to verification and validation: High explosive burn models

    SciTech Connect (OSTI)

    Menikoff, Ralph; Scovel, Christina A.

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code, run a simulation, and generate a comparison plot showing simulated and experimental velocity gauge data. These scripts are then applied to several series of experiments and to several HE burn models. The same systematic approach is applicable to other types of material models; for example, equations of state models and material strength models.
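    The automated workflow described above, using header meta-data from a database file to set up a simulation, run it, and plot the comparison, can be sketched as below. The header key names, the "hydro_code" command, and the file layouts are hypothetical placeholders, not the LANL scripts or the HED format.
```python
# Hedged sketch of an automated compare-to-experiment workflow.
# The header key names, "hydro_code" command, and file layouts are hypothetical.
import subprocess
import numpy as np
import matplotlib.pyplot as plt

def read_hed_file(path):
    """Read a data file whose '# key = value' header lines carry experiment meta-data."""
    meta, rows = {}, []
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                key, _, value = line[1:].partition("=")
                meta[key.strip()] = value.strip()
            elif line.strip():
                rows.append([float(tok) for tok in line.split()])
    return meta, np.array(rows)       # columns assumed: time, gauge velocity

meta, gauge = read_hed_file("shot_1234.dat")          # hypothetical experiment file

# Generate a hydro-code input deck from the meta-data and run the simulation.
with open("run.in", "w") as deck:
    deck.write(f"explosive = {meta['explosive']}\n")
    deck.write(f"impact_velocity = {meta['impact_velocity']}\n")
subprocess.run(["hydro_code", "run.in"], check=True)  # hypothetical solver command

sim = np.loadtxt("velocity_gauge.out")                # hypothetical solver output
plt.plot(gauge[:, 0], gauge[:, 1], "k.", label="experiment")
plt.plot(sim[:, 0], sim[:, 1], "r-", label="simulation")
plt.xlabel("time"); plt.ylabel("gauge velocity"); plt.legend()
plt.savefig("comparison.png")
```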

  1. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    SciTech Connect (OSTI)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
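    As an illustration of how the levelized cost of energy ties together the cost and performance parameters listed above, a simplified LCOE calculation might look like the sketch below. The discount rate and the example plant numbers are assumptions for illustration only, not values from the report.
```python
# Simplified LCOE sketch tying together the parameters discussed above.
# The discount rate and the example inputs are illustrative assumptions.
def lcoe(capital_cost, fixed_om, variable_om, heat_rate, fuel_price,
         capacity_factor, lifetime_yr, discount_rate=0.07):
    """Levelized cost of energy in $/MWh.

    capital_cost  : installed cost, $/kW
    fixed_om      : fixed O&M, $/kW-yr
    variable_om   : variable O&M, $/MWh
    heat_rate     : Btu/kWh (0 for non-fuel technologies)
    fuel_price    : $/MMBtu
    """
    # Capital recovery factor spreads the overnight cost over the plant lifetime.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))
    annual_mwh_per_kw = 8760 * capacity_factor / 1000.0
    fixed = (crf * capital_cost + fixed_om) / annual_mwh_per_kw   # $/MWh
    fuel = heat_rate * fuel_price / 1000.0                        # $/MWh
    return fixed + variable_om + fuel

# Example with illustrative numbers only (roughly gas combined-cycle-like inputs).
print(round(lcoe(1000, 15, 3.5, 7000, 5.0, 0.60, 30), 1), "$/MWh")
```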

  2. GROUT HOPPER MODELING STUDY

    SciTech Connect (OSTI)

    Lee, S.

    2011-08-30

    The Saltstone facility has a grout hopper tank to provide agitator stirring of the Saltstone feed materials. The tank has about 300 gallon capacity to provide a larger working volume for the grout slurry to be held in case of a process upset, and it is equipped with a mechanical agitator, which is intended to keep the grout in motion and agitated so that it won't start to set up. The dry feeds and the salt solution are already mixed in the mixer prior to being transferred to the hopper tank. The hopper modeling study through this work will focus on fluid stirring and agitation, instead of traditional mixing in the literature, in order to keep the tank contents in motion during their residence time so that they will not be upset or solidified prior to transferring the grout to the Saltstone disposal facility. The primary objective of the work is to evaluate the flow performance for mechanical agitators to prevent vortex pull-through for an adequate stirring of the feed materials and to estimate an agitator speed which provides acceptable flow performance with a 45{sup o} pitched four-blade agitator. In addition, the power consumption required for the agitator operation was estimated. The modeling calculations were performed by taking two steps of the Computational Fluid Dynamics (CFD) modeling approach. As a first step, a simple single-stage agitator model with 45{sup o} pitched propeller blades was developed for the initial scoping analysis of the flow pattern behaviors for a range of different operating conditions. Based on the initial phase-1 results, the phase-2 model with a two-stage agitator was developed for the final performance evaluations. A series of sensitivity calculations for different designs of agitators and operating conditions have been performed to investigate the impact of key parameters on the grout hydraulic performance in a 300-gallon hopper tank. For the analysis, viscous shear was modeled by using the Bingham plastic approximation. Steady state analyses with a two-equation turbulence model were performed with the FLUENT{trademark} codes. All analyses were based on three-dimensional results. Recommended operational guidance was developed by using the basic concept that local shear rate profiles and flow patterns can be used as a measure of hydraulic performance and spatial stirring. Flow patterns were estimated by a Lagrangian integration technique along the flow paths from the material feed inlet. The modeling results show that when the two-stage agitator consisting of a 45{sup o} pitched propeller and radial flat-plate blades is run at 140 rpm speed with 28 in diameter, the agitator provides an adequate stirring of the feed materials for a wide range of yield stresses (1 to 21 Pa) and the vortex system is shed into the remote region of the tank boundary by the blade passage in an efficient way. The results of this modeling study were used to develop the design guidelines for the agitator stirring and dispersion of the Saltstone feed materials in a hopper tank.
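    For reference, the Bingham plastic approximation used above for the viscous shear has the standard constitutive form (generic symbols; only the yield stress range is taken from this study):
```latex
\[
  \tau \;=\; \tau_0 \;+\; \mu_p \,\dot{\gamma} \quad \text{for } |\tau| > \tau_0,
  \qquad
  \dot{\gamma} = 0 \quad \text{for } |\tau| \le \tau_0 ,
\]
where $\tau$ is the shear stress, $\tau_0$ the yield stress (1 to 21 Pa in the cases
considered above), $\mu_p$ the plastic viscosity, and $\dot{\gamma}$ the shear rate.
```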

  3. Pumping Optimization Model for Pump and Treat Systems - 15091

    SciTech Connect (OSTI)

    Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.

    2015-01-15

    Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provides sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model predictions, allowing it to be used for comparative remedy analyses. Any potential system modifications identified by using the 2D version are verified for use by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround by utilizing a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours and multiple simulations can be compared side-by-side. The POM utilizes standard office computing equipment and established groundwater modeling software.

  4. Adversary Sequence Interruption Model

    Energy Science and Technology Software Center (OSTI)

    1985-11-15

    PC EASI is an IBM personal computer or PC-compatible version of an analytical technique for measuring the effectiveness of physical protection systems. PC EASI utilizes a methodology called Estimate of Adversary Sequence Interruption (EASI) which evaluates the probability of interruption (PI) for a given sequence of adversary tasks. Probability of interruption is defined as the probability that the response force will arrive before the adversary force has completed its task. The EASI methodology is a probabilistic approach that analytically evaluates basic functions of the physical security system (detection, assessment, communications, and delay) with respect to response time along a single adversary path. It is important that the most critical scenarios for each target be identified to ensure that vulnerabilities have not been overlooked. If the facility is not overly complex, this can be accomplished by examining all paths. If the facility is complex, a global model such as Safeguards Automated Facility Evaluation (SAFE) may be used to identify the most vulnerable paths. PC EASI is menu-driven with screen forms for entering and editing the basic scenarios. In addition to evaluating PI for the basic scenario, the sensitivities of many of the parameters chosen in the scenario can be analyzed. These sensitivities provide information to aid the analyst in determining the tradeoffs for reducing the probability of interruption. PC EASI runs under the Micro Data Base Systems' proprietary database management system Knowledgeman. KMAN provides the user environment and file management for the specified basic scenarios, and KGRAPH the graphical output of the sensitivity calculations. This software is not included. Due to errors in release 2 of KMAN, PC EASI will not execute properly; release 1.07 of KMAN is required.
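    A generic sketch of an EASI-style probability-of-interruption calculation along a single adversary path is given below. It is illustrative only: the inputs, the normal-distribution treatment of response versus remaining delay time, and the communication probability are generic modeling assumptions, not the actual PC EASI tool or its input format.
```python
# Illustrative EASI-style probability-of-interruption calculation for one path.
# Inputs and the normal-distribution assumption are generic, not PC EASI itself.
from statistics import NormalDist

def prob_interruption(detect_probs, remaining_delay, delay_sd,
                      response_mean, response_sd, p_comm=0.95):
    """detect_probs[i]   : detection probability at path element i
       remaining_delay[i]: mean adversary task time remaining after element i (s)
       delay_sd[i]       : std. dev. of that remaining delay (s)
       response_mean/sd  : response force arrival time statistics (s)"""
    p_not_detected_yet = 1.0
    p_interrupt = 0.0
    for p_d, mean_rem, sd_rem in zip(detect_probs, remaining_delay, delay_sd):
        # P(response arrives before the adversary finishes), given detection here.
        diff_mean = mean_rem - response_mean
        diff_sd = (sd_rem**2 + response_sd**2) ** 0.5
        p_timely = 1.0 - NormalDist(diff_mean, diff_sd).cdf(0.0)
        p_first_detect_here = p_not_detected_yet * p_d * p_comm
        p_interrupt += p_first_detect_here * p_timely
        p_not_detected_yet *= (1.0 - p_d)
    return p_interrupt

# Example: three detection layers along a hypothetical path.
print(prob_interruption([0.9, 0.5, 0.3], [400, 250, 90], [30, 25, 15], 300, 60))
```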

  5. SPR Hydrostatic Column Model Verification and Validation.

    SciTech Connect (OSTI)

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for extended periods of time. This report describes the HCM model, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
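    The core of a hydrostatic column calculation, summing the pressure contributions of the stacked fluid columns between the wellhead and a reference depth, can be sketched generically as below. The fluid densities, interface depths, and reference pressure are illustrative assumptions, not SPR well data, and the real HCM handles effects (such as nitrogen compressibility with depth) that this toy omits.
```python
# Generic hydrostatic column sketch: wellhead pressure from stacked fluid columns.
# Densities, depths, and the reference pressure below are illustrative only.
G = 9.80665  # gravitational acceleration, m/s^2

def wellhead_pressure(p_at_depth_pa, segments):
    """p_at_depth_pa : known pressure at the bottom reference depth (Pa)
       segments      : list of (fluid_density_kg_m3, top_depth_m, bottom_depth_m)
                       describing the fluids stacked from wellhead to that depth."""
    hydrostatic = sum(rho * G * (bottom - top) for rho, top, bottom in segments)
    return p_at_depth_pa - hydrostatic

# Nitrogen over crude oil over brine down to a 600 m reference depth (illustrative).
segments = [
    (180.0, 0.0, 150.0),     # compressed nitrogen column (average density assumed)
    (850.0, 150.0, 450.0),   # crude oil
    (1200.0, 450.0, 600.0),  # saturated brine
]
print(wellhead_pressure(9.0e6, segments) / 1e6, "MPa at the wellhead")
```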

  6. Interpretation of simulated global warming using a simple model

    SciTech Connect (OSTI)

    Watterson, I.G.

    2000-01-01

    A simple energy balance model with two parameters, an effective heat capacity and an effective climate sensitivity, is used to interpret six GCM simulations of greenhouse gas-induced global warming. By allowing the parameters to vary in time, the model can be accurately calibrated for each run. It is found that the sensitivity can be approximated as a constant in each case. However, the effective heat capacity clearly varies, and it is important that the energy equation be formulated appropriately, unlike in many such models. For simulations with linear forcing and from a cold start, the capacity is in each case close to that of a homogeneous ocean with depth initially 200 m, but increasing some 4.3 m each year, irrespective of the sensitivity and forcing growth rate. Analytic solutions for this linear capacity function are derived, and these reproduce the GCM runs well, even for cases where the forcing is stabilized after a century or so. The formation of a subsurface maximum in the mean ocean temperature anomaly is a significant feature of such cases. A simple model for a GCM run with a realistic forcing scenario starting from 1880 is constructed using component results for forcing segments. Given this, an estimate of the cold start error of a simulation of the warming due to forcing after the present would be given by the negative of the temperature drift of the anomaly due to the past forcing. The simple model can evidently be used to give an indication of likely warming curves, at least for this range of scenarios and GCM sensitivities.
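    A minimal numerical sketch of the two-parameter energy balance model described above is given below. The effective mixed-layer depth of 200 m growing by 4.3 m per year follows the abstract; the sensitivity value, the forcing ramp, and the seawater heat-capacity constants are illustrative assumptions rather than the paper's calibrated parameters.
```python
# Forward-Euler sketch of the two-parameter energy balance model
#   C(t) dT/dt = F(t) - T / S
# The 200 m + 4.3 m/yr effective mixed-layer depth follows the abstract; the
# sensitivity S, the forcing ramp, and the seawater properties are assumptions.
SECONDS_PER_YEAR = 3.15576e7
RHO_CP = 1025.0 * 3990.0        # seawater volumetric heat capacity, J/(m^3 K)
S = 0.8                         # effective climate sensitivity, K per (W/m^2)

def forcing(t_yr, ramp_rate=0.04, stabilize_yr=100.0):
    """Linear forcing ramp (W/m^2), held constant after stabilize_yr."""
    return ramp_rate * min(t_yr, stabilize_yr)

T, dt_yr, n_years = 0.0, 0.05, 200
for step in range(int(n_years / dt_yr)):
    t_yr = step * dt_yr
    depth = 200.0 + 4.3 * t_yr                      # effective mixed-layer depth (m)
    heat_capacity = RHO_CP * depth                  # J/(m^2 K)
    dTdt = (forcing(t_yr) - T / S) / heat_capacity  # warming rate, K/s
    T += dTdt * dt_yr * SECONDS_PER_YEAR

print(f"global-mean warming after {n_years} years: {T:.2f} K")
```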

  7. Modeling the Benefits of Storage Technologies to Wind Power

    SciTech Connect (OSTI)

    Sullivan, P.; Short, W.; Blair, N.

    2008-06-01

    Rapid expansion of wind power in the electricity sector is raising questions about how wind resource variability might affect the capacity value of wind farms at high levels of penetration. Electricity storage, with the capability to shift wind energy from periods of low demand to peak times and to smooth fluctuations in output, may have a role in bolstering the value of wind power at levels of penetration envisioned by a new Department of Energy report ('20% Wind by 2030, Increasing Wind Energy's Contribution to U.S. Electricity Supply'). This paper quantifies the value storage can add to wind. The analysis was done employing the Regional Energy Deployment System (ReEDS) model, formerly known as the Wind Deployment System (WinDS) model. ReEDS was used to estimate the cost and development path associated with 20% penetration of wind in the report. ReEDS differs from the WinDS model primarily in that the model has been modified to include the capability to build and use three storage technologies: pumped-hydroelectric storage (PHS), compressed-air energy storage (CAES), and batteries. To assess the value of these storage technologies, two pairs of scenarios were run: business-as-usual, with and without storage; 20% wind energy by 2030, with and without storage. This paper presents the results from those model runs.

  8. A Network Contention Model for the Extreme-scale Simulator

    SciTech Connect (OSTI)

    Engelmann, Christian [ORNL; Naughton, III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous, yet sufficiently accurate, model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
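    A toy illustration of bandwidth-capacity contention modeling, in which concurrent messages on a shared link split the available bandwidth, is sketched below; it is not xSim's actual algorithm, and the link parameters are placeholders.
```python
# Toy link-contention sketch: concurrent messages share a link's bandwidth equally.
# This illustrates the general idea only; it is not the xSim network model.
def transfer_times(message_bytes, link_bandwidth=1.0e9, latency=1.0e-6):
    """Estimate completion times for messages started simultaneously on one link."""
    remaining = sorted(message_bytes)
    now, finished = 0.0, []
    while remaining:
        share = link_bandwidth / len(remaining)   # equal share while contending
        dt = remaining[0] / share                 # time until smallest message finishes
        now += dt
        finished.append(now + latency)
        remaining = [b - share * dt for b in remaining[1:]]
    return finished

# Three messages of different sizes contending for a 1 GB/s link.
print(transfer_times([1e6, 4e6, 8e6]))
```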

  9. An Interactive Multi-Model for Consensus on Climate Change

    SciTech Connect (OSTI)

    Kocarev, Ljupco

    2014-07-02

    This project aims to develop a new scheme for forming consensus among alternative climate models, which give widely divergent projections as to the details of climate change; the scheme is intended to be more intelligent than simply averaging the model outputs, or averaging with ex post facto weighting factors. The method under development effectively allows models to assimilate data from one another at run time with weights that are chosen in an adaptive training phase using 20th century data, so that the models synchronize with one another as well as with reality. An alternate approach that is being explored in parallel is the automated combination of equations from different models in an expert-system-like framework.

  10. CoMD Implementation Suite in Emerging Programming Models

    Energy Science and Technology Software Center (OSTI)

    2014-09-23

    CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with some meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones method (LJ) or the embedded atom method (EAM) for primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations, such as shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.
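    As a reminder of the primary interaction model mentioned above, a minimal Lennard-Jones pair energy and force kernel is sketched below in plain Python; it is illustrative only and is not the CoMD-Em implementation in any of the studied programming models (which use cell lists, periodic boundaries, and parallel abstractions omitted here).
```python
# Minimal Lennard-Jones pair kernel -- illustrative, not the CoMD-Em code.
import numpy as np

def lj_energy_forces(positions, epsilon=1.0, sigma=1.0, cutoff=2.5):
    """Total LJ energy and per-particle forces for an open (non-periodic) box."""
    n = len(positions)
    forces = np.zeros_like(positions)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r2 = float(np.dot(rij, rij))
            if r2 > cutoff * cutoff:
                continue
            sr6 = (sigma * sigma / r2) ** 3
            energy += 4.0 * epsilon * (sr6 * sr6 - sr6)
            # Force magnitude divided by r, written via r^2 to avoid a square root.
            f_over_r = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2
            forces[i] += f_over_r * rij
            forces[j] -= f_over_r * rij
    return energy, forces

# Two atoms near the LJ minimum separation (~2^(1/6) sigma).
energy, forces = lj_energy_forces(np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0]]))
print(energy, forces[0])
```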

  11. CASL - Initial Validation and Benchmark Study of new 3D CRUD Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A new 3D CRUD model, known as "MAMBA" (for "MPO Advanced Model for Boron Analysis"), is being developed by the Crud Group within the MPO focus area of CASL. The 3D MAMBA v2.0 computer code was released to CASL on Feb. 28, 2012 and is capable of being run in "stand-alone" mode or in coupled mode with a thermal hydraulics computational fluid dynamics model (e.g., STAR-CCM+) and/or a neutron transport

  12. The Potosi Reservoir Model 2013c, Property Modeling Update

    SciTech Connect (OSTI)

    Adushita, Yasmin; Smith, Valerie; Leetaru, Hannes

    2014-09-30

    As part of a larger project co-funded by the United States Department of Energy (US DOE) to evaluate the potential of formations within the Cambro-Ordovician strata above the Mt. Simon as potential targets for carbon sequestration in the Illinois and Michigan Basins, the Illinois Clean Coal Institute (ICCI) requested Schlumberger to evaluate the potential injectivity and carbon dioxide (CO2) plume size of the Cambrian Potosi Formation. The evaluation of this formation was accomplished using wireline data, core data, pressure data, and seismic data from this project as well as two other separately funded projects: the US DOE-funded Illinois Basin–Decatur Project (IBDP) being conducted by the Midwest Geological Sequestration Consortium (MGSC) in Macon County, Illinois, and the Illinois Industrial Carbon Capture and Sequestration (ICCS) project funded through the American Recovery and Reinvestment Act. In 2010, technical performance evaluations on the Cambrian Potosi Formation were performed through reservoir modeling. The data included formation tops from mud logs, well logs from the Verification Well #1 (VW1) and the Injection Well (CCS1), structural and stratigraphic information from three dimensional (3D) seismic data, and field data from several waste water injection wells for the Potosi Formation. The intention was for 2.2 million tons per annum (2 million tonnes per annum [MTPA]) of CO2 to be injected for 20 years. In an earlier task, the 2010 Potosi heterogeneous model (referred to as the "Potosi Dynamic Model 2010") was re-run using a new injection scenario of 3.5 million tons per annum (3.2 MTPA) for 30 years. The extent of the Potosi Dynamic Model 2010, however, appeared too small for the new injection target. The model's size was insufficient to accommodate the evolution of the plume. The new model, Potosi Dynamic Model 2013a, was built by extending the Potosi Dynamic Model 2010 grid to 30 by 30 mi (48 by 48 km), while preserving all property modeling workflows and layering. This model was retained as the base case. In the preceding Task [1], the Potosi reservoir model was updated to take into account the new data from the Verification Well #2 (VW2), which was drilled in 2012. The porosity and permeability modeling was revised to take into account the log data from the new well. Revisions of the 2010 modeling assumptions were also done on relative permeability, capillary pressures, formation water salinity, and the maximum allowable well bottomhole pressure. Dynamic simulations were run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This dynamic model was named Potosi Dynamic Model 2013b. In this Task, a new property modeling workflow was applied, where seismic inversion data guided the porosity mapping and geobody extraction. The static reservoir model was fully guided by PorosityCube interpretations and derivations coupled with petrophysical logs from three wells. The two main assumptions are that porosity features in the PorosityCube that correlate with lost circulation zones represent vugular zones, and that these vugular zones are laterally continuous. Extrapolation was done carefully to populate the vugular facies and their corresponding properties outside the seismic footprint up to the boundary of the 30 by 30 mi (48 by 48 km) model. Dynamic simulations were also run using the injection target of 3.5 million tons per annum (3.2 MTPA) for 30 years. This new dynamic model was named Potosi Dynamic Model 2013c.
Reservoir simulation with the latest model gives a cumulative injection of 43 million tons (39 MT) in 30 years with a single well, which corresponds to 40% of the injection target. The injection rate is approximately 3.2 MTPA in the first six months as the well is injecting into the surrounding vugs, and declines rapidly to 1.8 million tons per annum (1.6 MTPA) in year 3 once the surrounding vugs are full and the CO2 starts to reach the matrix. Afterward, the injection rate declines gradually to 1.2 million tons per annum (1.1 MTPA) in year 18 and stays constant. This implies that a minimum of three (3) wells could be required in the Potosi to reach the injection target. The injectivity evaluated in this Task was higher compared to the preceding Task, since the current facies modeling (guided by the porosity map from the seismic inversion) indicated a higher density of vugs within the vugular zones. As the CO2 follows the paths where vug interconnections exist, a reasonably large and irregular plume extent was created. After 30 years of injection, the plume extends 13.7 mi (22 km) in the E-W and 9.7 mi (16 km) in the N-S directions. After injection finishes, the plume continues to migrate laterally, mainly driven by the remaining pressure gradient. After 60 years post-injection, the plume extends 14.2 mi (22.8 km) in the E-W and 10 mi (16 km) in the N-S directions, and remains constant as the remaining pressure gradient has become very low. Should the targeted cumulative injection of 106 million tons (96 MT) be achieved, a much larger plume extent could be expected. The increase of reservoir pressure at the end of injection is approximately 1,200 psia (8,274 kPa) around the injector and gradually decreases away from the well. The reservoir pressure increase is less than 10 psia (69 kPa) beyond 14 mi (23 km) away from the injector. Should the targeted cumulative injection of 106 million tons (96 MT) be achieved, a much larger areal pressure increase could be expected. The reservoir pressure declines rapidly during the first 30 years post-injection and the initial reservoir pressure is nearly restored after 100 years post-injection. The present evaluation is mainly associated with uncertainty on the vug permeability and interconnectivity. The use of porosity mapping from seismic inversion might have reduced the uncertainty on the lateral vug body distributions. However, major uncertainties on the Potosi vug permeability remain. Therefore, injection tests and pressure interference tests among the wells could be considered to evaluate the local vug permeability, extent, and interconnectivity. Facies modeling within the Potosi has yet to be thoroughly addressed. The carbonates during the time of deposition are believed to be regionally extensive. However, it may be worth delineating the reservoir with other regional wells or modern day analogues to understand the extent of the Potosi. More specifically, the model could incorporate lateral changes or trends if deemed necessary to represent facies transition. Data acquisitions to characterize the fracture pressure gradient, the formation water properties, the relative permeability, and the capillary pressure could also be considered in order to allow a more rigorous evaluation of the Potosi storage performance. A simulation using several injectors could also be considered to determine the required number of wells and appropriate spacing to achieve the injection target while taking into account the pressure interference.

  13. Ventilation Model

    SciTech Connect (OSTI)

    V. Chipman

    2002-10-05

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To further satisfy KTI agreements RDTME 3.01 and 3.14 (Reamer and Williams 2001a) by providing the source documentation referred to in the KTI Letter Report, ''Effect of Forced Ventilation on Thermal-Hydrologic Conditions in the Engineered Barrier System and Near Field Environment'' (Williams 2002). Specifically to provide the results of the MULTIFLUX model which simulates the coupled processes of heat and mass transfer in and around waste emplacement drifts during periods of forced ventilation. This portion of the model report is presented as an Alternative Conceptual Model with a numerical application, and also provides corroborative results used for model validation purposes (Section 6.3 and 6.4).
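    In generic notation (the symbols are introduced here for illustration, but the definitions restate those given in this abstract), the ventilation heat-removal efficiency and the wall heat fraction passed to the downstream thermal models can be written as:
```latex
\[
  \eta_{\mathrm{vent}}(x,t) \;=\; \frac{Q_{\mathrm{air}}(x,t)}{Q_{\mathrm{decay}}(x,t)},
  \qquad
  f_{\mathrm{wall}}(x,t) \;=\; 1 - \eta_{\mathrm{vent}}(x,t),
\]
where $Q_{\mathrm{air}}$ is the heat carried away by the ventilation air,
$Q_{\mathrm{decay}}$ is the heat produced by radionuclide decay, and
$f_{\mathrm{wall}}$ is the fraction conducted into the surrounding rock mass.
```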

  14. Spectral analysis of the efficiency of vertical mixing in the deep ocean due to interaction of tidal currents with a ridge running down a continental slope

    SciTech Connect (OSTI)

    Ibragimov, Ranis N.; Tartakovsky, Alexandre M.

    2014-10-29

    Efficiency of mixing, resulting from the reflection of an internal wave field imposed on the oscillatory background flow with a three-dimensional bottom topography, is investigated using a linear approximation. The radiating wave field is associated with the spectrum of the linear model, which consists of those mode numbers n and slope values α, for which the solution represents the internal waves of frequencies ω = nω0 radiating upwrad of the topography, where ω0 is the fundamental frequency at which internal waves are generated at the topography. The effects of the bottom topography and the earth’s rotation on the spectrum is analyzed analytically and numerically in the vicinity of the critical slope, which is a slope with the same angle to the horizontal as the internal wave characteristic. In this notation, θ is latitude, f is the Coriolis parameter and N is the buoyancy frequency, which is assumed to be a constant, which corresponds to the uniform stratification.

  15. OSPREY Model

    SciTech Connect (OSTI)

    Veronica J. Rutledge

    2013-01-01

    The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to OSPREY to used and evaluate the model.

  16. Model documentation natural gas transmission and distribution model (NGTDM) of the national energy modeling system. Volume II: Model developer`s report

    SciTech Connect (OSTI)

    Not Available

    1995-01-03

    To partially fulfill the requirements for {open_quotes}Model Acceptance{close_quotes} as stipulated in EIA Standard 91-01-01 (effective February 3, 1991), the Office of Integrated Analysis and Forecasting has conducted tests of the Natural Gas Transmission and Distribution Model (NGTDM) for the specific purpose of validating the forecasting model. This volume of the model documentation presents the results of {open_quotes}one-at-a-time{close_quotes} sensitivity tests conducted in support of this validation effort. The test results are presented in the following forms: (1) Tables of important model outputs for the years 2000 and 2010 are presented with respect to change in each input from the reference case; (2) Tables of percent changes from base case results for the years 2000 and 2010 are presented for important model outputs; (3) Tables of conditional sensitivities (percent change in output/percent change in input) for the years 2000 and 2010 are presented for important model outputs; (4) Finally, graphs presenting the percent change from base case results for each year of the forecast period are presented for selected key outputs. To conduct the sensitivity tests, two main assumptions are made in order to test the performance characteristics of the model itself and facilitate the understanding of the effects of the changes in the key input variables to the model on the selected key output variables: (1) responses to the amount demanded do not occur since there are no feedbacks of inputs from other NEMS models in the stand-alone NGTDM run. (2) All the export and import quantities from and to Canada and Mexico, and liquefied natural gas (LNG) imports and exports are held fixed (i.e., there are no changes in imports and exports between the reference case and the sensitivity cases) throughout the forecast period.

  17. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect (OSTI)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.

  18. Building Simulation Modelers are we big-data ready?

    SciTech Connect (OSTI)

    Sanyal, Jibonananda; New, Joshua Ryan

    2014-01-01

    Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generating over 200 TBs of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data-entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big-data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than re-running a set of simulations.

  19. Weather Research and Forecasting Model with Vertical Nesting Capability

    Energy Science and Technology Software Center (OSTI)

    2014-08-01

    The Weather Research and Forecasting (WRF) model with vertical nesting capability is an extension of the WRF model, which is available in the public domain, from www.wrf-model.org. The new code modifies the nesting procedure, which passes lateral boundary conditions between computational domains in the WRF model. Previously, the same vertical grid was required on all domains, while the new code allows different vertical grids to be used on concurrently run domains. This new functionality improvesmore » WRF's ability to produce high-resolution simulations of the atmosphere by allowing a wider range of scales to be efficiently resolved and more accurate lateral boundary conditions to be provided through the nesting procedure.« less

  20. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    SciTech Connect (OSTI)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  1. Developing an Integrated Model Framework for the Assessment of Sustainable Agricultural Residue Removal Limits for Bioenergy Systems

    SciTech Connect (OSTI)

    David Muth, Jr.; Jared Abodeely; Richard Nelson; Douglas McCorkle; Joshua Koch; Kenneth Bryden

    2011-08-01

    Agricultural residues have significant potential as a feedstock for bioenergy production, but removing these residues can have negative impacts on soil health. Models and datasets that can support decisions about sustainable agricultural residue removal are available; however, no tools currently exist capable of simultaneously addressing all environmental factors that can limit availability of residue. The VE-Suite model integration framework has been used to couple a set of environmental process models to support agricultural residue removal decisions. The RUSLE2, WEPS, and Soil Conditioning Index models have been integrated. A disparate set of databases providing the soils, climate, and management practice data required to run these models have also been integrated. The integrated system has been demonstrated for two example cases. First, an assessment using high spatial fidelity crop yield data has been run for a single farm. This analysis shows the significant variance in sustainably accessible residue across a single farm and crop year. A second example is an aggregate assessment of agricultural residues available in the state of Iowa. This implementation of the integrated systems model demonstrates the capability to run a vast range of scenarios required to represent a large geographic region.

  2. An enumerative technique for modeling wind power variations in production costing

    SciTech Connect (OSTI)

    Milligan, M.R.; Graham, M.S.

    1997-04-01

    Production cost, generation expansion, and reliability models are used extensively by utilities in the planning process. Most models do not provide adequate means for representing the full range of potential variation in wind power plants. In order to properly account for expected variation in wind-generated electricity with these models, the authors describe an enumerated probabilistic approach that is performed outside the production cost model, compare it with a reduced enumerated approach, and present some selected utility results. The technique can be applied to any model, and can considerably reduce the number of model runs as compared to the full enumerated approach. They use both a load duration curve model and a chronological model to measure wind plant capacity credit, and also present some other selected results.

  3. Autonomie Model

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Autonomie Model (Argonne National Laboratory) Objectives Perform simulations to assess the energy consumption and performance of advanced component and powertrain technologies in a vehicle system context. Key Attributes & Strengths Developed over the past 15 years, Autonomie has been validated using component and vehicle test data, providing confidence in the results. Thus, the tool is widely accepted by the industry and has been licensed to more than 150 organizations worldwide. The model

  4. Data Transfer Software-SAS MetaData Server & Phoenix Integration Model Center

    Energy Science and Technology Software Center (OSTI)

    2010-04-15

    This software is a plug-in that interfaces between the Phoenix Integration's Model Center and the Base SAS 9.2 applications. The end use of the plug-in is to link input and output data that resides in SAS tables or MS SQL to and from "legacy" software programs without recoding. The potential end users are users who need to run legacy code and want data stored in a SQL database.

  5. Users guide for SAMM: A prototype southeast Alaska multiresource model. Forest Service general technical report

    SciTech Connect (OSTI)

    Weyermann, D.L.; Fight, R.D.; Garrett, F.D.

    1991-08-01

    This paper instructs resource analysts on using the southeast Alaska multiresource model (SAMM). SAMM is an interactive microcomputer program that allows users to explore relations among several resources in southeast Alaska (timber, anadromous fish, deer, and hydrology) and the effects of timber management activities (logging, thinning, and road building) on those relations and resources. This guide assists users in installing SAMM on a microcomputer, developing input data files, making simulation runs, and strong output data for external analysis and graphic display.

  6. EQ3/6 A Software Package for Geochemical Modeling

    Energy Science and Technology Software Center (OSTI)

    2010-12-13

    EQ3/6 is a software package for modeling geochemical interactions between aqueous solution, solids, and gases, following principles of chemical thermodynamics and chemical kinetics. It is useful for interpreting aqueiou solution chemical compositions and for calculating the consequences of reaction of such solutions with minerals, other solids, and gases. It is designed to run in a command line environment. EQPT is a thermodynamic data file preprocessor. EQ3NR is a speciation-solubility code. EQ6 is a reaction pathmore » code.« less

  7. Upper Rio Grande Simulation Model (URGSIM)

    Energy Science and Technology Software Center (OSTI)

    2010-08-05

    URGSIM estimates the location of surface water and groundwater resources in the upper Rio Grande Basin between the Colorado-New Mexico state line, and Caballo Reservoir from 1975 - 2045. It is a mass balance hydrology model of the Upper Rio Grande surface water, groundwater, and water demand systems which runs at a monthly timestep from 1975-1999 in calibration mode, 2000 – 2004 in validation mode, and 2005 – 2045 in scenario analysis mode.

  8. Modelling the microstructure of thermal barrier coatings

    SciTech Connect (OSTI)

    Cirolini, S.; Marchese, M.; Jacucci, G.; Harding, J.H.; Mulheran, P.A.

    1994-12-31

    Thermal barrier coatings produced by plasma spraying have a characteristic microstructure of lamellae, pores and cracks. The lamellae are produced by the splashing of particles onto the substrate. As the coating grows, the lamellae pile on top of each other, producing an interlocking structure. In most cases the growth is rapid and chaotic. The result is a microstructure characterized by pores and cracks. The authors present an improved model for the deposition process of thermal barrier coatings. The task of modeling the coating growth is split into two parts: first the authors consider a description of the particle on arrival at the film, based on the available theoretical, numerical and experimental findings. Second they define and discuss a set of physically-based rules for combining these events to obtain the film. The splats run along the surface and are permitted to curl up (producing pores) or interlock. The computer model uses a mesh to combine these processes and build the coating. They discuss the use of the proposed model in predicting microstructures and hence in correlating the properties of these coatings with the parameters of the process used to make them.

  9. Simulating coarse-scale vegetation dynamics using the Columbia River Basin succession model-crbsum. Forest Service general technical report

    SciTech Connect (OSTI)

    Keane, R.E.; Long, D.G.; Menakis, J.P.; Hann, W.J.; Bevins, C.D.

    1996-10-01

    The paper details the landscape succession model developed for the coarse-scale assessment called CRBSUM (Columbia River Basin SUccession Model) and presents some general results of the application of this model to the entire basin. CRBSUM was used to predict future landscape characteristics to evaluate management alternatives for both mid-and coarse-scale efforts. A test and sensitivity analysis of CRBSUM is also presented. This paper was written as a users guide for those who wish to run the model and interprete results, and its was also written as documentation for some results of the Interior Columbia River Basin simulation effort.

  10. Phenomenological Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Phenomenological Modeling - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs

  11. Criticality Model

    SciTech Connect (OSTI)

    A. Alsaed

    2004-09-14

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to layout the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits in the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a).

  12. Analytic models of supercomputer performance in multiprogramming environments

    SciTech Connect (OSTI)

    Menasce, D.A. ); Almeida, V.A.F. )

    1989-01-01

    Supercomputers run multiprogrammed time-sharing operating systems, so their facilities can be shared by many local and remote users. Therefore, it is important to be able to assess the performance of supercomputers and multiprogrammed environments. Analytic models based on Queueing Networks (QNs) and Stochastic Petri Nets (SPNs) are used in this paper with two purposes: to evaluate the performance of supercomputers in multiprogrammed environments, and to compare, performance-wise, conventional supercomputer architectures with a novel architecture proposed here. It is shown, with the aid of the analytic models, that the proposed architecture is preferable performance-wise over the existing conventional supercomputer architectures. A three-level workload characterization model for supercomputers is presented. Input data for the numerical examples discussed here are extracted from the well-known Los Alamos benchmark, and the results are validated by simulation.

  13. Running Dry at the Power Plant

    Broader source: Energy.gov [DOE]

    Securing sufficient supplies of fresh water for societal, industrial, and agricultural uses while protecting the natural environment is becoming increasingly difficult in many parts of the United...

  14. Run 1147 Event 0. August 6

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    34 cm 4

  15. Run 1147 Event 0. August 6

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    200 cm 20

  16. Run 1148 Event 1016. August 6

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    1016. August 6 th 2015 17:15 20 cm 20

  17. Run 1148 Event 778. August 6

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    778. August 6 th 2015 17:16 40 cm 26

  18. Run 1153 Event 40. August 6

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    53 Event 40. August 6 th 2015 21:07 40 cm 24

  19. Run Spear Down Low-alpha Shutdown

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Spear Down Low-alpha Shutdown Maintenance / AP University Holidays Sep Oct S 1 Mar Apr May Jun Jul Aug 4/6/2012 SPEAR OPERATING SCHEDULE 2011-2012 2011 2012 Sep Oct Nov Dec Jan Feb 1 2 M 3 1 1 1 S 2 4 1 3 1 AP 2 2 2 W 5 2 4 2 AP AP 3 3 AP 1 3 T 3 5 3 4 1 4 2 4 1 6 4 F 2 7 5 2 1 5 3 1 5 2 T 1 6 4 S 3 8 5 3 7 5 4 1 6 3 2 6 4 2 6 8 5 S 4 9 6 8 6 low 2 7 4 3 7 5 3 7 5 9 MA MA 9 7 M 5 10 3 8 5 4 alpha 8 6 4 T 6 11 8 6 10 8 7 4 9 MA 6 5 9 MA AP 7 13 11 9 W 7 12 9 5 10 AP 7 6 AP 10 8 6 AP 10 AP 7 AP 13

  20. SunRun Inc | Open Energy Information

    Open Energy Info (EERE)

    Place: San Francisco, California Zip: 94103 Region: Bay Area Sector: Solar Product: Solar installer Website: www.sunrunhome.com Coordinates: 37.7871306, -122.4041075...

  1. Supersymmetric Dark Matter after LHC Run 1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bagnaschi, E. A.; Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flacher, H.; Heinemeyer, S.; Isidori, G.; et al

    2015-10-23

    Different mechanisms operate in various regions of the MSSM parameter space to bring the relic density of the lightest neutralino, χ~01, assumed here to be the lightest SUSY particle (LSP) and thus the dark matter (DM) particle, into the range allowed by astrophysics and cosmology. These mechanisms include coannihilation with some nearly degenerate next-to-lightest supersymmetric particle such as the lighter stau τ~1, stop t~1 or chargino χ~±1, resonant annihilation via direct-channel heavy Higgs bosons H / A, the light Higgs boson h or the Z boson, and enhanced annihilation via a larger Higgsino component of the LSP in the focus-pointmore » region. These mechanisms typically select lower-dimensional subspaces in MSSM scenarios such as the CMSSM, NUHM1, NUHM2, and pMSSM10. We analyze how future LHC and direct DM searches can complement each other in the exploration of the different DM mechanisms within these scenarios. We find that the τ~1 coannihilation regions of the CMSSM, NUHM1, NUHM2 can largely be explored at the LHC via searches for /ET events and long-lived charged particles, whereas theirH / A funnel, focus-point and χ~±1 coannihilation regions can largely be explored by the LZ and Darwin DM direct detection experiments. Furthermore, we find that the dominant DM mechanism in our pMSSM10 analysis is χ~±1 coannihilation: parts of its parameter space can be explored by the LHC, and a larger portion by future direct DM searches.« less

  2. Running River PLC | Open Energy Information

    Open Energy Info (EERE)

    6HQ Sector: Hydro Product: UK-based private equity investor that has interests in mini-hydro power generation assets. Coordinates: 51.506325, -0.127144 Show Map Loading...

  3. Running Jobs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    script job environment: COBALTPARTNAME - physical partition assigned by cobalt (e.g. MIR-XXXXX-YYYYY-512) COBALTPARTSIZE - size of the partition assigned by cobalt (e.g. 512)...

  4. Blue running of the primordial tensor spectrum

    SciTech Connect (OSTI)

    Gong, Jinn-Ouk

    2014-07-01

    We examine the possibility of positive spectral index of the power spectrum of the primordial tensor perturbation produced during inflation in the light of the detection of the B-mode polarization by the BICEP2 collaboration. We find a blue tilt is in general possible when the slow-roll parameter decays rapidly. We present two known examples in which a positive spectral index for the tensor power spectrum can be obtained. We also briefly discuss other consistency tests for further studies on inflationary dynamics.

  5. FY2000 Run Schedule v6

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    FY2000 SPEAR Operation Cycle Oct-99 Nov-99 Dec-99 Jan-00 Feb-00 Mar-00 Apr-00 May-00 Jun-00 Jul-00 Aug-00 Sep-00 1 1 1 1 1 1 1 1 M/A 1 1 1 1 2 2 ñý½’ü ¢ :Kd*Ð ‚¢I%` èÀ›]€ƒc0öén›D5‚Ë7-)%ˆÀsÞãS÷‡Ù˜—r*¸YÆþõ/⸡åû]šòCbšÀ°ì™Šo«²/+ìF¬éç—C÷¶Z¸”

  6. Supersymmetric Dark Matter after LHC Run 1

    SciTech Connect (OSTI)

    Bagnaschi, E. A.; Buchmueller, O.; Cavanaugh, R.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flacher, H.; Heinemeyer, S.; Isidori, G.; Malik, S.; Santos, D. Martinez; Olive, K. A.; Sakurai, K.; de Vries, K. J.; Weiglein, G.

    2015-10-23

    Different mechanisms operate in various regions of the MSSM parameter space to bring the relic density of the lightest neutralino, ?~01, assumed here to be the lightest SUSY particle (LSP) and thus the dark matter (DM) particle, into the range allowed by astrophysics and cosmology. These mechanisms include coannihilation with some nearly degenerate next-to-lightest supersymmetric particle such as the lighter stau ?~1, stop t~1 or chargino ?~1, resonant annihilation via direct-channel heavy Higgs bosons H / A, the light Higgs boson h or the Z boson, and enhanced annihilation via a larger Higgsino component of the LSP in the focus-point region. These mechanisms typically select lower-dimensional subspaces in MSSM scenarios such as the CMSSM, NUHM1, NUHM2, and pMSSM10. We analyze how future LHC and direct DM searches can complement each other in the exploration of the different DM mechanisms within these scenarios. We find that the ?~1 coannihilation regions of the CMSSM, NUHM1, NUHM2 can largely be explored at the LHC via searches for /ET events and long-lived charged particles, whereas theirH / A funnel, focus-point and ?~1 coannihilation regions can largely be explored by the LZ and Darwin DM direct detection experiments. Furthermore, we find that the dominant DM mechanism in our pMSSM10 analysis is ?~1 coannihilation: parts of its parameter space can be explored by the LHC, and a larger portion by future direct DM searches.

  7. Running Jobs under SLURM on Babbage

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    bash-4.1 srun mpirun.mic -n 2 -hostfile micfile.SLURMJOBID -ppn 1 .xthi.mic Hello from rank 0, thread 0, on bc0908-mic0. (core affinity 1) Hello from rank 0, thread...

  8. Running Greener: E-Mobility at SAP

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    CO 2 neutral Reduce consumption of fossil fuels and noise Environmental Mobility Unique battery subsidy as benefit Enjoy free charging exclusively at SAP's charging spots...

  9. NERSC_Capability_Run_Rules.docx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    w ill u se t he M PI+X e xecution m odel a nd a bide by t he r un r ules f or t he s ame a s d escribed i n t he R FP r un r ules d ocument. b. Each b enchmark's p roblem s ize...

  10. 05-RunningJobs-Turner.pdf

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    o r eserve f or y our j ob * How l ong t o r eserve t hose n odes * OpBonal: w hat t o n ame S TDOUT fi les, w hat a ccount t o c harge, whether t o n oBfy y ou b y e mail w hen y...

  11. Hitting a Home Run for Clean Energy

    Broader source: Energy.gov [DOE]

    Spring. With gentle breezes, blooming flowers, and warm sunshine, the season marks the beginning of fun outdoor activities—picnics, camping, hikes, and the classic American pastime—baseball. In the...

  12. 06 Run 6-16-05.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    4 MA MA 15 11 AP 7 14 12 5 26 16 16 18 20 22 21 Conf 17 18 19 20 21 13 14 User 16 15 21 22 23 1 1 5 AP 2 14 5 6 13 9 18 14 17 16 15 16 23 24 27 30 27 28 31 Oct Sep 26 BL May Aug...

  13. FY2003 Run Sched.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Robleto, B. Scott 31 29 2002 2003 1 2 3 N 13 4 2002 2003 1 2 3 29 30 31 10 5 6 5 6 7 8 9 22 23 MAAP AP A E 5 17 18 19 10 11 12 9 MAAP 18 Startup 24 23 22 21 16 17 15 1 2 3 15 10...

  14. 2005_Run 3-29-05.xls

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    4 2 N 9 11 10 6 8 AP 10 MA 15 17 AP MA 5 2 4 4 3 7 4 13 11 15 12 24 16 23 19 16 20 22 15 22 23 24 17 27 26 24 AP 23 22 28 25 29 30 27 26 25 24 30 28 27 2005 2006 28 29 30 31 26 19...

  15. Discussion: the design and analysis of the Gaussian process model

    SciTech Connect (OSTI)

    Williams, Brian J; Loeppky, Jason L

    2008-01-01

    The investigation of complex physical systems utilizing sophisticated computer models has become commonplace with the advent of modern computational facilities. In many applications, experimental data on the physical systems of interest is extremely expensive to obtain and hence is available in limited quantities. The mathematical systems implemented by the computer models often include parameters having uncertain values. This article provides an overview of statistical methodology for calibrating uncertain parameters to experimental data. This approach assumes that prior knowledge about such parameters is represented as a probability distribution, and the experimental data is used to refine our knowledge about these parameters, expressed as a posterior distribution. Uncertainty quantification for computer model predictions of the physical system are based fundamentally on this posterior distribution. Computer models are generally not perfect representations of reality for a variety of reasons, such as inadequacies in the physical modeling of some processes in the dynamic system. The statistical model includes components that identify and adjust for such discrepancies. A standard approach to statistical modeling of computer model output for unsampled inputs is introduced for the common situation where limited computer model runs are available. Extensions of the statistical methods to functional outputs are available and discussed briefly.

  16. The Potosi Reservoir Model 2013

    SciTech Connect (OSTI)

    Adushita, Yasmin; Smith, Valerie; Leetaru, Hannes

    2014-09-30

    As a part of a larger project co-funded by the United States Department of Energy (US DOE) to evaluate the potential of formations within the Cambro-Ordovician strata above the Mt. Simon as potential targets for carbon sequestration in the Illinois and Michigan Basins, the Illinois Clean Coal Institute (ICCI) requested Schlumberger to evaluate the potential injectivity and carbon dioxide (CO2) plume size of the Cambrian Potosi Formation. The evaluation of this formation was accomplished using wireline data, core data, pressure data, and seismic data from the US DOE-funded Illinois Basin–Decatur Project (IBDP) being conducted by the Midwest Geological Sequestration Consortium in Macon County, Illinois. In 2010, technical performance evaluations on the Cambrian Potosi Formation were performed through reservoir modeling. The data included formation tops from mud logs, well logs from the VW1 and the CCS1 wells, structural and stratigraphic formation from three dimensional (3D) seismic data, and field data from several waste water injection wells for Potosi Formation. Intention was for two million tons per annum (MTPA) of CO2 to be injected for 20 years. In the preceding, the 2010 Potosi heterogeneous model (referred to as the "Potosi Dynamic Model 2010" in this topical report) was re-run using a new injection scenario; 3.2 MTPA for 30 years. The extent of the Potosi Dynamic Model 2010, however, appeared too small for the new injection target. It was not sufficiently large enough to accommodate the evolution of the plume. The new model, Potosi Dynamic Model 2013a, was built by extending the Potosi Dynamic Model 2010 grid to 30 miles x 30 miles (48.3km x48.3km), while preserving all property modeling workflows and layering. This model was retained as the base case of Potosi Dynamic Model 2013a. The Potosi reservoir model was updated to take into account the new data from the verification well VW2 which was drilled in 2012. The new porosity and permeability modeling was performed to take into account the log data from the new well. Revisions of the 2010 modeling assumptions were also done on relative permeability, capillary pressures, formation water salinity, and the maximum allowable well bottomhole pressure. Dynamic simulations were run using the injection target of 3.2 MTPA for 30 years. This new dynamic model was named Potosi Dynamic Model 2013b. Due to the major uncertainties on the vugs permeability, two models were built; the Pessimistic and Optimistic Cases. The Optimistic Case assumes vugs permeability of 9,000 mD, which is analog to the vugs permeability identified in the pressure fall off test of a waste water injector in the Tuscola site, approx. 40 miles (64.4km) away from the IBDP area. The Pessimistic Case assumes that the vugs permeability is equal to the log data, which does not take into account the permeability from secondary porosity. The probability of such case is deemed low and could be treated as the worst case scenario, since the contribution of secondary porosity to the permeability is neglected and the loss circulation events might correspond to a much higher permeability. It is considered important, however, to identify the range of possible reservoir performance since there are no rigorous data available for the vugs permeability. The Optimistic Case gives an average CO2 injection rate of 0.8 MTPA and cumulative injection of 26 MT in 30 years, which corresponds to 27% of the injection target. The injection rate is approx. 
3.2 MTPA in the first year as the well is injecting into the surrounding vugs, and declines rapidly to 0.8 MTPA in year 4 once the surrounding vugs are full and the CO2 start to reach the matrix. This implies that according to this preliminary model, a minimum of four (4) wells could be required to achieve the injection target. This result is lower than the injectivity estimated in the Potosi Dynamic Model 2013a (43 MT in 30 years), since the permeability model applied in the Potosi Dynamic Model 2013b is more conservative. This revision was deemed necessary to treat the uncertainty in a more appropriate manner. As the CO2 follows the paths where vugs interconnection exists, a reasonably large and irregular plume extent was created. For the Optimistic Case, the plume extends 17 miles (27.4km) in E-W and 14 miles (22.5km) in N-S directions after 30 years. After injection is completed, the plume continues to migrate laterally, mainly driven by the remaining pressure gradient. After 100 years post injection, the plume extends 20 miles (32.2km) in E-W and 15.5 miles (24.9km) in N-S directions. Should the targeted cumulative injection of 96 MT be achieved; a much larger plume extent could be expected. For the Optimistic Case, the increase of reservoir pressure at the end of injection is approximately 1200 psia (8,274 kPa) around the injector and gradually decreases away from the well. The reservoir pressure increase is less than 30 psia (206.8 kPa) beyond 14 miles (22.5km) away from injector. Should the targeted cumulative injection of 96 MT be achieved; a much larger areal pressure increase could be expected. The initial reservoir pressure is nearly restored after approximately 100 years post injection. The presence of matrix slows down the pressure dissipations. The Pessimistic Case gives an average CO2 injection rate of 0.2 MTPA and cumulative injection of 7 MT in 30 years, which corresponds to 7% of the injection target. This implies that in the worst case scenario, a minimum of sixteen (16) wells could be required to achieve the injection target. The present evaluation is mainly associated with uncertainty on the vugs permeability, distribution, and interconnectivity. The different results indicated by the Optimistic and Pessimistic Cases signify the importance of vugs permeability characterization. Therefore, injection test and pressure interference test among the wells could be considered to evaluate the local vugs permeability, extent, and interconnectivity. Porosity mapping derived from the seismic inversion could also be used in the succeeding task to characterize the lateral porosity distribution within the reservoir. With or without seismic inversion porosity mapping, it is worth exploring whether increased lateral heterogeneity plays a significant role in Potosi injectivity. Investigations on vugular, dolomitic outcrops suggest that there may be significantly greater lateral heterogeneity than what has been modeled here. Facies modeling within the Potosi has yet to be thoroughly addressed. The carbonates during the time of deposition are believed to be regionally extensive. However, it may be worth delineating the reservoir with other regional wells or modern day analogues to understand the extent of the Potosi. More specifically, the model could incorporate lateral changes or trends if deemed necessary to represent facies transition. 
Data acquisitions to characterize the fracture pressure gradient, the formation water properties, the relative permeability, and the capillary pressure could also be considered in order to allow a more rigorous evaluation of the Potosi storage performance. A simulation using several injectors could also be considered to determine the required number of wells to achieve the injection target while taking into account the pressure interference.

  17. Molten Salt Power Tower Cost Model for the System Advisor Model (SAM)

    SciTech Connect (OSTI)

    Turchi, C. S.; Heath, G. A.

    2013-02-01

    This report describes a component-based cost model developed for molten-salt power tower solar power plants. The cost model was developed by the National Renewable Energy Laboratory (NREL), using data from several prior studies, including a contracted analysis from WorleyParsons Group, which is included herein as an Appendix. The WorleyParsons' analysis also estimated material composition and mass for the plant to facilitate a life cycle analysis of the molten salt power tower technology. Details of the life cycle assessment have been published elsewhere. The cost model provides a reference plant that interfaces with NREL's System Advisor Model or SAM. The reference plant assumes a nominal 100-MWe (net) power tower running with a nitrate salt heat transfer fluid (HTF). Thermal energy storage is provided by direct storage of the HTF in a two-tank system. The design assumes dry-cooling. The model includes a spreadsheet that interfaces with SAM via the Excel Exchange option in SAM. The spreadsheet allows users to estimate the costs of different-size plants and to take into account changes in commodity prices. This report and the accompanying Excel spreadsheet can be downloaded at https://sam.nrel.gov/cost.

  18. On linking an Earth system model to the equilibrium carbon representation of an economically optimizing land use model

    SciTech Connect (OSTI)

    Bond-Lamberty, Benjamin; Calvin, Katherine V.; Jones, Andrew D.; Mao, Jiafu; Patel, Pralit L.; Shi, Xiaoying; Thomson, Allison M.; Thornton, Peter E.; Zhou, Yuyu

    2014-01-01

    Human activities are significantly altering biogeochemical cycles at the global scale, posing a significant problem for earth system models (ESMs), which may incorporate static land-use change inputs but do not actively simulate policy or economic forces. One option to address this problem is a to couple an ESM with an economically oriented integrated assessment model. Here we have implemented and tested a coupling mechanism between the carbon cycles of an ESM (CLM) and an integrated assessment (GCAM) model, examining the best proxy variables to share between the models, and quantifying our ability to distinguish climate- and land-use-driven flux changes. CLMs net primary production and heterotrophic respiration outputs were found to be the most robust proxy variables by which to manipulate GCAMs assumptions of long-term ecosystem steady state carbon, with short-term forest production strongly correlated with long-term biomass changes in climate-change model runs. By leveraging the fact that carbon-cycle effects of anthropogenic land-use change are short-term and spatially limited relative to widely distributed climate effects, we were able to distinguish these effects successfully in the model coupling, passing only the latter to GCAM. By allowing climate effects from a full earth system model to dynamically modulate the economic and policy decisions of an integrated assessment model, this work provides a foundation for linking these models in a robust and flexible framework capable of examining two-way interactions between human and earth system processes.

  19. Systems Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear Energy

  20. Nuclear Models

    SciTech Connect (OSTI)

    Fossion, Ruben [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Apartado Postal 70-543, Mexico D. F., C.P. 04510 (Mexico)

    2010-09-10

    The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular for the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction).Because of the above reasons, specific nuclear many-body models have been devised of which each one sheds light on some selected aspects of nuclear structure. Only combining the viewpoints of different models, a global insight of the atomic nucleus can be gained. In this chapter, we revise the the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, basing on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize of Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitingly large number of degrees of freedom of the full many-body problem to a smaller manageable number of effective degrees of freedom.

  1. Competency Models

    Broader source: Energy.gov [DOE]

    An industry-validated competency model is an excellent tool for identifying the skills needed to succeed in a particular job, developing curricula to teach them, and benchmarking their attainment. Particularly valuable in dynamic industries like solar energy, a competency framework is critical to any training program attempting to advance lower-skilled workers into navigable career pathways, or transition higher skilled workers into new industry sectors.

  2. VISION Model

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    VISION Model (Argonne National Laboratory) Objectives To provide estimates of the potential energy use, oil use, and carbon emission impacts of advanced light- and heavy-duty highway vehicle technologies and alternative fuels, up to the year 2100. Key Attributes & Strengths Uses vehicle survival and age-dependent usage characteristics to project total light- and heavy-vehicle stock, total vehicle miles of travel, and total energy use by technology and fuel type by year, given market

  3. Comparing Simulation Results with Traditional PRA Model on a Boiling Water Reactor Station Blackout Case Study

    SciTech Connect (OSTI)

    Zhegang Ma; Diego Mandelli; Curtis Smith

    2011-07-01

    A previous study used RELAP and RAVEN to conduct a boiling water reactor station black-out (SBO) case study in a simulation based environment to show the capabilities of the risk-informed safety margin characterization methodology. This report compares the RELAP/RAVEN simulation results with traditional PRA model results. The RELAP/RAVEN simulation run results were reviewed for their input parameters and output results. The input parameters for each simulation run include various timing information such as diesel generator or offsite power recovery time, Safety Relief Valve stuck open time, High Pressure Core Injection or Reactor Core Isolation Cooling fail to run time, extended core cooling operation time, depressurization delay time, and firewater injection time. The output results include the maximum fuel clad temperature, the outcome, and the simulation end time. A traditional SBO PRA model in this report contains four event trees that are linked together with the transferring feature in SAPHIRE software. Unlike the usual Level 1 PRA quantification process in which only core damage sequences are quantified, this report quantifies all SBO sequences, whether they are core damage sequences or success (i.e., non core damage) sequences, in order to provide a full comparison with the simulation results. Three different approaches were used to solve event tree top events and quantify the SBO sequences: W process flag, default process flag without proper adjustment, and default process flag with adjustment to account for the success branch probabilities. Without post-processing, the first two approaches yield incorrect results with a total conditional probability greater than 1.0. The last approach accounts for the success branch probabilities and provides correct conditional sequence probabilities that are to be used for comparison. To better compare the results from the PRA model and the simulation runs, a simplified SBO event tree was developed with only four top events and eighteen SBO sequences (versus fifty-four SBO sequences in the original SBO model). The estimated SBO sequence conditional probabilities from the original SBO model were integrated to the corresponding sequences in the simplified SBO event tree. These results were then compared with the simulation run results.

  4. Macro System Model (MSM)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Objectives Perform rapid cross-cutting analysis that utilizes and links other models. ... MSM is a static, cross-cutting model which links models from various modeling platforms. ...

  5. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; Hargrove, Paul; Jin, Haoqiang; Fuerlinger, Karl; Koniges, Alice; Wright, Nicholas J.

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the threemore » programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.« less

  6. Modeling needs for very large systems.

    SciTech Connect (OSTI)

    Stein, Joshua S.

    2010-10-01

    Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.

  7. Anaerobic digestion analysis model: User`s manual

    SciTech Connect (OSTI)

    Ruth, M.; Landucci, R.

    1994-08-01

    The Anaerobic Digestion Analysis Model (ADAM) has been developed to assist investigators in performing preliminary economic analyses of anaerobic digestion processes. The model, which runs under Microsoft Excel{trademark}, is capable of estimating the economic performance of several different waste digestion process configurations that are defined by the user through a series of option selections. The model can be used to predict required feedstock tipping fees, product selling prices, utility rates, and raw material unit costs. The model is intended to be used as a tool to perform preliminary economic estimates that could be used to carry out simple screening analyses. The model`s current parameters are based on engineering judgments and are not reflective of any existing process; therefore, they should be carefully evaluated and modified if necessary to reflect the process under consideration. The accuracy and level of uncertainty of the estimated capital investment and operating costs are dependent on the accuracy and level of uncertainty of the model`s input parameters. The underlying methodology is capable of producing results accurate to within {+-} 30% of actual costs.

  8. Building 235-F Goldsim Fate And Transport Model

    SciTech Connect (OSTI)

    Taylor, G. A.; Phifer, M. A.

    2012-09-14

    Savannah River National Laboratory (SRNL) personnel, at the request of Area Completion Projects (ACP), evaluated In-Situ Disposal (ISD) alternatives that are under consideration for deactivation and decommissioning (D&D) of Building 235-F and the Building 294-2F Sand Filter. SRNL personnel developed and used a GoldSim fate and transport model, which is consistent with Musall 2012, to evaluate, relative to groundwater protection, ISD alternatives that involve either source removal and/or the grouting of portions or all of 235-F. This evaluation was conducted through the development and use of a Building 235-F GoldSim fate and transport model. The model simulates contaminant release from four 235-F process areas and the 294-2F Sand Filter. In addition, it simulates the fate and transport through the vadose zone, the Upper Three Runs (UTR) aquifer, and the Upper Three Runs (UTR) creek. The model is designed as a stochastic model, and as such it can provide both deterministic and stochastic (probabilistic) results. The results show that the median radium activity concentrations exceed the 5 pCi/L radium MCL at the edge of the building for all ISD alternatives after 10,000 years, except those with a sufficient amount of inventory removed. Notably, grouting was shown to have minimal effect on the radium activity concentration. During the first 1,000 years grouting may have some small positive benefit relative to radium; however, after that it may have a slightly deleterious effect. The Pb-210 results, relative to its 0.06 pCi/L PRG, are essentially identical to the radium results, but the Pb-210 results exhibit a lesser degree of exceedance. In summary, some level of inventory removal will be required to ensure that groundwater standards are met.

  9. Community Land Model Version 3.0 (CLM3.0) Developer's Guide

    SciTech Connect (OSTI)

    Hoffman, FM

    2004-12-21

    This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit. Vertical heterogeneity is represented by a single vegetation layer, 10 layers for soil, and up to five layers for snow, depending on the snow depth. For computational efficiency, gridcells are grouped into ''clumps'' which are divided in cyclic fashion among distributed memory processors. Additional parallel performance is obtained by distributing clumps of gridcells across shared memory processors on computer platforms that support hybrid Message Passing Interface (MPI)/OpenMP operation. Significant modifications to the source code have been made over the last year to support efficient operation on newer vector architectures, specifically the Earth Simulator in Japan and the Cray X1 at Oak Ridge National Laboratory (Homan et al., 2004). These code modifications resulted in performance improvements even on the scalar architectures widely used for running CLM presently. To better support vectorized processing in the code, subgrid units (columns and PFTs) are grouped into ''filters'' based on their process-specific categorization. For example, filters (vectors of integers) referring to all snow, non-snow, lake, non-lake, and soil covered columns and PFTs within each clump are built and maintained when the model is run. Many loops within the scientific subroutines use these filters to indirectly address the process-appropriate subgrid units.
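
    The filter concept can be pictured with a short sketch: integer index vectors group subgrid columns by category, and each process loop addresses only the units in its filter. CLM itself is written in Fortran 90; the NumPy version below only mirrors the idea and uses made-up states.

      # Illustrative sketch of CLM-style "filters": integer index vectors that group
      # subgrid columns by category so a process loop touches only the relevant ones.
      # (CLM itself is Fortran 90; this NumPy version only mirrors the idea.)
      import numpy as np

      n_columns = 12
      rng = np.random.default_rng(0)
      snow_depth = rng.uniform(0.0, 0.3, n_columns)       # hypothetical state
      is_lake = rng.random(n_columns) < 0.25              # hypothetical category flag

      # Build filters: vectors of column indices per category.
      filter_snow = np.flatnonzero(snow_depth > 0.05)
      filter_nosnow = np.flatnonzero(snow_depth <= 0.05)
      filter_lake = np.flatnonzero(is_lake)

      temperature = np.full(n_columns, 270.0)

      # A process subroutine loops only over its filter (indirect addressing).
      temperature[filter_snow] -= 1.0      # e.g., snow-covered columns cool
      temperature[filter_nosnow] += 0.5    # e.g., bare columns warm

      print("snow filter:", filter_snow, " lake filter:", filter_lake)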

  10. Junior Solar Sprint - An Introduction to Building a Model Solar Car

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Revised 8/23/01 An Introduction to Building a Model Solar Car Student Guide for the Junior Solar Sprint Competition Produced by: Krisztina Holly and Akhil Madhani Introduction Welcome to Junior Solar Sprint! By competing in Junior Solar Sprint, you will learn how to make your own model solar car that will run entirely from the power of the sun. Design You will experience first-hand the process of design. When you design your car, you will start with some ideas in your head and turn them into

  11. Diffractively produced Z bosons in the muon decay channel in p-pbar collisions at s**(1/2) = 1.96 TeV, and the measurement of the efficiency of the D0 Run II luminosity monitor

    SciTech Connect (OSTI)

    Edwards, Tamsin L

    2006-04-01

    The first analysis of diffractively produced Z bosons in the muon decay channel is presented, using data taken by the D0 detector at the Tevatron at {radical}s = 1.96 TeV. The data sample corresponds to an integrated luminosity of 109 pb{sup -1}. The diffractive sample is defined using the fractional momentum loss {zeta} of the intact proton or antiproton measured using the calorimeter and muon detector systems. In a sample of 10791 (Z/{gamma})* {yields} {mu}{sup +}{mu}{sup -} events, 24 diffractive candidate events are found with {zeta} < 0.02. The first work towards measuring the cross section times branching ratio for diffractive production of (Z/{gamma})* {yields} {mu}{sup +}{mu}{sup -} is presented for the kinematic region {zeta} < 0.02. The systematic uncertainties are not yet sufficiently understood to present the cross section result. In addition, the first measurement of the efficiency of the Run II D0 Luminosity Monitor is presented, which is used in all cross section measurements. The efficiency is: {var_epsilon}{sub LM} = (90.9 {+-} 1.8)%.

  12. InMAP: a new model for air pollution interventions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Tessum, C. W.; Hill, J. D.; Marshall, J. D.

    2015-10-29

    Mechanistic air pollution models are essential tools in air quality management. Widespread use of such models is hindered, however, by the extensive expertise or computational resources needed to run most models. Here, we present InMAP (Intervention Model for Air Pollution), which offers an alternative to comprehensive air quality models for estimating the air pollution health impacts of emission reductions and other potential interventions. InMAP estimates annual-average changes in primary and secondary fine particle (PM2.5) concentrations (the air pollution outcome generally causing the largest monetized health damages) attributable to annual changes in precursor emissions. InMAP leverages pre-processed physical and chemical information from the output of a state-of-the-science chemical transport model (WRF-Chem) within an Eulerian modeling framework, to perform simulations that are several orders of magnitude less computationally intensive than comprehensive model simulations. InMAP uses a variable resolution grid that focuses on human exposures by employing higher spatial resolution in urban areas and lower spatial resolution in rural and remote locations and in the upper atmosphere; and by directly calculating steady-state, annual average concentrations. In comparisons run here, InMAP recreates WRF-Chem predictions of changes in total PM2.5 concentrations with population-weighted R2 ~ 0.99; performance is also quantified by population-weighted mean fractional error (MFE) and bias (MFB). Among individual PM2.5 species, the best predictive performance is for primary PM2.5 (MFE: 16 %; MFB: 13 %) and the worst predictive performance is for particulate nitrate (MFE: 119 %; MFB: 106 %). Potential uses of InMAP include studying exposure, health, and environmental justice impacts of potential shifts in emissions for annual-average PM2.5. Features planned for future model releases include a larger spatial domain, more temporal information, and the ability to predict ground-level ozone (O3) concentrations. The InMAP model source code and input data are freely available online.
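
    For reference, the mean fractional bias and error metrics mentioned above are commonly defined as MFB = 2 sum_i w_i (M_i - O_i)/(M_i + O_i) and MFE = 2 sum_i w_i |M_i - O_i|/(M_i + O_i), with weights w_i summing to one (equal weights recover the usual 1/N average; population weights emphasize exposure). The exact weighting used in the paper may differ; the helper below is only an illustration with made-up values.

      # Commonly used mean fractional bias/error definitions for model evaluation,
      # optionally population-weighted. Illustrative helper; details may differ
      # from the paper's exact metric.
      import numpy as np

      def fractional_bias_error(model, obs, weights=None):
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          if weights is None:
              weights = np.ones_like(model)
          w = np.asarray(weights, float) / np.sum(weights)   # normalize weights to 1
          frac = (model - obs) / (model + obs)
          mfb = 2.0 * np.sum(w * frac)            # mean fractional bias
          mfe = 2.0 * np.sum(w * np.abs(frac))    # mean fractional error
          return mfb, mfe

      mfb, mfe = fractional_bias_error([11.0, 9.5, 14.0], [10.0, 10.0, 12.0],
                                       weights=[5000, 200, 800])   # e.g., population
      print(f"MFB = {mfb:+.2%}, MFE = {mfe:.2%}")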

  13. Model Components of the Certification Framework for Geologic Carbon Sequestration Risk Assessment

    SciTech Connect (OSTI)

    Oldenburg, Curtis M.; Bryant, Steven L.; Nicot, Jean-Philippe; Kumar, Navanit; Zhang, Yingqi; Jordan, Preston; Pan, Lehua; Granvold, Patrick; Chow, Fotini K.

    2009-06-01

    We have developed a framework for assessing the leakage risk of geologic carbon sequestration sites. This framework, known as the Certification Framework (CF), emphasizes wells and faults as the primary potential leakage conduits. Vulnerable resources are grouped into compartments, and impacts due to leakage are quantified by the leakage flux or concentrations that could potentially occur in compartments under various scenarios. The CF utilizes several model components to simulate leakage scenarios. One model component is a catalog of results of reservoir simulations that can be queried to estimate plume travel distances and times, rather than requiring CF users to run new reservoir simulations for each case. Other model components developed for the CF and described here include fault characterization using fault-population statistics; fault connection probability using fuzzy rules; well-flow modeling with a drift-flux model implemented in TOUGH2; and atmospheric dense-gas dispersion using a mesoscale weather prediction code.
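
    The catalog component can be pictured as an interpolation over pre-computed simulation results: given site parameters, the plume travel distance is looked up rather than re-simulated. The sketch below is illustrative only; the axes, values, and units are hypothetical and do not reflect the actual CF catalog.

      # Minimal sketch of a simulation-results catalog lookup: interpolate plume travel
      # distance over a grid of pre-computed cases instead of running a new simulation.
      # Axes, values, and units are hypothetical, not the actual CF catalog.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      injection_rate = np.array([0.5, 1.0, 2.0, 4.0])      # Mt CO2/yr (hypothetical)
      permeability = np.array([10.0, 50.0, 100.0, 500.0])  # mD (hypothetical)

      # Pre-computed plume travel distance (km) after 50 years, one value per case.
      plume_km = np.array([[1.2, 0.9, 0.8, 0.6],
                           [2.0, 1.6, 1.4, 1.1],
                           [3.5, 2.8, 2.4, 1.9],
                           [6.0, 4.9, 4.2, 3.3]])

      catalog = RegularGridInterpolator((injection_rate, permeability), plume_km)

      # Query the catalog for a site that falls between pre-computed cases.
      print("estimated plume distance:", float(catalog([[1.5, 75.0]])[0]), "km")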

  14. Predictive models of circulating fluidized bed combustors

    SciTech Connect (OSTI)

    Gidaspow, D.

    1992-07-01

    Steady flows influenced by walls cannot be described by inviscid models. Flows in circulating fluidized beds have significant wall effects. Particles in the form of clusters or layers can be seen to run down the walls. Hence modeling of circulating fluidized beds (CFB) without a viscosity is not possible. However, in interpreting Equations (8-1) and (8-2) it must be kept in mind that CFBs and most other two-phase flows are never in a true steady state. Then the viscosity in Equations (8-1) and (8-2) may not be the true fluid viscosity to be discussed next, but an eddy-type viscosity caused by two-phase flow oscillations, usually referred to as turbulence. In view of the transient nature of two-phase flow, the drag and the boundary layer thickness may not be proportional to the square root of the intrinsic viscosity but depend upon it to a much smaller extent. As another example, in liquid-solid flow and settling of colloidal particles in a lamella electrosettler, the settling process is only moderately affected by viscosity. Inviscid flow with settling is a good first approximation to this electric-field-driven process. The physical meaning of the particulate phase viscosity is described in detail in the chapter on kinetic theory. Here the conventional derivation presented in single-phase fluid mechanics is generalized to multiphase flow.

  15. ARM - Datastreams - ecmwfflx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Datastreamsecmwfflx Documentation XDC documentation Data Quality Plots ARM Data Discovery Browse Data Comments? We would love to hear from you! Send us a note below or call us at 1-888-ARM-DATA. Send Datastream : ECMWFFLX ECMWF: radiative fluxes at altitude, 1-hr avg, entire coverage Active Dates 1995.04.17 - 2016.02.29 Measurement Categories Radiometric Originating Instrument European Centre for Medium Range Weather Forecasts Diagnostic Analyses (ECMWFDIAG) Description These data can only be

  16. Sandia Energy - Phenomenological Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Phenomenological Modeling Home Stationary Power Nuclear Fuel Cycle Nuclear Energy Safety Technologies Phenomenological Modeling Phenomenological ModelingTara Camacho-Lopez2015-05-1...

  17. Nonlinear structure formation in the cubic Galileon gravity model

    SciTech Connect (OSTI)

    Barreira, Alexandre; Li, Baojiu; Hellwing, Wojciech A.; Baugh, Carlton M.; Pascoli, Silvia

    2013-10-01

    We model the linear and nonlinear growth of large scale structure in the Cubic Galileon gravity model, by running a suite of N-body cosmological simulations using the ECOSMOG code. Our simulations include the Vainshtein screening effect, which reconciles the Cubic Galileon model with local tests of gravity. In the linear regime, the amplitude of the matter power spectrum increases by ~20% with respect to the standard ΛCDM model today. The modified expansion rate accounts for ~15% of this enhancement, while the fifth force is responsible for only ~5%. This is because the effective unscreened gravitational strength deviates from standard gravity only at late times, even though it can be twice as large today. In the nonlinear regime (k ≳ 0.1h Mpc{sup -1}), the fifth force leads to only a modest increase (≲8%) in the clustering power on all scales due to the very efficient operation of the Vainshtein mechanism. Such a strong effect is typically not seen in other models with the same screening mechanism. The screening also results in the fifth force increasing the number density of halos by less than 10%, on all mass scales. Our results show that the screening does not ruin the validity of linear theory on large scales which anticipates very strong constraints from galaxy clustering data. We also show that, whilst the model gives an excellent match to CMB data on small angular scales (l ≳ 50), the predicted integrated Sachs-Wolfe effect is in tension with Planck/WMAP results.

  18. Reducing uncertainty in high-resolution sea ice models.

    SciTech Connect (OSTI)

    Peterson, Kara J.; Bochev, Pavel Blagoveston

    2013-07-01

    Arctic sea ice is an important component of the global climate system, reflecting a significant amount of solar radiation, insulating the ocean from the atmosphere and influencing ocean circulation by modifying the salinity of the upper ocean. The thickness and extent of Arctic sea ice have shown a significant decline in recent decades with implications for global climate as well as regional geopolitics. Increasing interest in exploration as well as climate feedback effects make predictive mathematical modeling of sea ice a task of tremendous practical import. Satellite data obtained over the last few decades have provided a wealth of information on sea ice motion and deformation. The data clearly show that ice deformation is focused along narrow linear features and this type of deformation is not well-represented in existing models. To improve sea ice dynamics we have incorporated an anisotropic rheology into the Los Alamos National Laboratory global sea ice model, CICE. Sensitivity analyses were performed using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) to determine the impact of material parameters on sea ice response functions. Two material strength parameters that exhibited the most significant impact on responses were further analyzed to evaluate their influence on quantitative comparisons between model output and data. The sensitivity analysis along with ten year model runs indicate that while the anisotropic rheology provides some benefit in velocity predictions, additional improvements are required to make this material model a viable alternative for global sea ice simulations.

  19. Global and regional modeling of clouds and aerosols in the marine boundary layer during VOCALS: the VOCA intercomparison

    SciTech Connect (OSTI)

    Wyant, M. C.; Bretherton, Christopher S.; Wood, Robert; Carmichael, Gregory; Clarke, A. D.; Fast, Jerome D.; George, R.; Gustafson, William I.; Hannay, Cecile; Lauer, Axel; Lin, Yanluan; Morcrette, J. -J.; Mulcahay, Jane; Saide, Pablo; Spak, S. N.; Yang, Qing

    2015-01-09

    A diverse collection of models are used to simulate the marine boundary layer in the southeast Pacific region during the period of the October–November 2008 VOCALS REx (VAMOS Ocean Cloud Atmosphere Land Study Regional Experiment) field campaign. Regional models simulate the period continuously in boundary-forced free-running mode, while global forecast models and GCMs (general circulation models) are run in forecast mode. The models are compared to extensive observations along a line at 20° S extending westward from the South American coast. Most of the models simulate cloud and aerosol characteristics and gradients across the region that are recognizably similar to observations, despite the complex interaction of processes involved in the problem, many of which are parameterized or poorly resolved. Some models simulate the regional low cloud cover well, though many models underestimate MBL (marine boundary layer) depth near the coast. Most models qualitatively simulate the observed offshore gradients of SO2, sulfate aerosol, CCN (cloud condensation nuclei) concentration in the MBL as well as differences in concentration between the MBL and the free troposphere. Most models also qualitatively capture the decrease in cloud droplet number away from the coast. However, there are large quantitative intermodel differences in both means and gradients of these quantities. Many models are able to represent episodic offshore increases in cloud droplet number and aerosol concentrations associated with periods of offshore flow. Most models underestimate CCN (at 0.1% supersaturation) in the MBL and free troposphere. The GCMs also have difficulty simulating coastal gradients in CCN and cloud droplet number concentration near the coast. The overall performance of the models demonstrates their potential utility in simulating aerosol–cloud interactions in the MBL, though quantitative estimation of aerosol–cloud interactions and aerosol indirect effects of MBL clouds with these models remains uncertain.

  20. Global and regional modeling of clouds and aerosols in the marine boundary layer during VOCALS: the VOCA intercomparison

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wyant, M. C.; Bretherton, Christopher S.; Wood, Robert; Carmichael, Gregory; Clarke, A. D.; Fast, Jerome D.; George, R.; Gustafson, William I.; Hannay, Cecile; Lauer, Axel; et al

    2015-01-09

    A diverse collection of models are used to simulate the marine boundary layer in the southeast Pacific region during the period of the October–November 2008 VOCALS REx (VAMOS Ocean Cloud Atmosphere Land Study Regional Experiment) field campaign. Regional models simulate the period continuously in boundary-forced free-running mode, while global forecast models and GCMs (general circulation models) are run in forecast mode. The models are compared to extensive observations along a line at 20° S extending westward from the South American coast. Most of the models simulate cloud and aerosol characteristics and gradients across the region that are recognizably similar to observations, despite the complex interaction of processes involved in the problem, many of which are parameterized or poorly resolved. Some models simulate the regional low cloud cover well, though many models underestimate MBL (marine boundary layer) depth near the coast. Most models qualitatively simulate the observed offshore gradients of SO2, sulfate aerosol, CCN (cloud condensation nuclei) concentration in the MBL as well as differences in concentration between the MBL and the free troposphere. Most models also qualitatively capture the decrease in cloud droplet number away from the coast. However, there are large quantitative intermodel differences in both means and gradients of these quantities. Many models are able to represent episodic offshore increases in cloud droplet number and aerosol concentrations associated with periods of offshore flow. Most models underestimate CCN (at 0.1% supersaturation) in the MBL and free troposphere. The GCMs also have difficulty simulating coastal gradients in CCN and cloud droplet number concentration near the coast. The overall performance of the models demonstrates their potential utility in simulating aerosol–cloud interactions in the MBL, though quantitative estimation of aerosol–cloud interactions and aerosol indirect effects of MBL clouds with these models remains uncertain.

  1. Monitoring and Evaluation of Smolt Migration in the Columbia Basin : Volume XV : Evaluation of the 2007 Predictions of the Run-Timing of Wild and Hatchery-Reared Salmon and Steelhead Smolts to Rock Island, Lower Granite, McNary, John Day, and Bonneville Dams using Program RealTime.

    SciTech Connect (OSTI)

    Griswold, Jim; Townsend, Richard L.; Skalski, John R.

    2008-12-01

    Program RealTime provided monitoring and forecasting of the 2007 inseason outmigrations via the internet for 26 PIT-tagged stocks of wild ESU Chinook salmon and steelhead to Lower Granite and/or McNary dams, one PIT-tagged hatchery-reared ESU of sockeye salmon to Lower Granite Dam, one PIT-tagged wild stock of sockeye salmon to McNary Dam, and 20 passage-indexed runs-at-large, five each to Rock Island, McNary, John Day, and Bonneville dams. Nineteen stocks are of wild yearling Chinook salmon which were captured, PIT-tagged, and released at sites above Lower Granite Dam in 2007 and have at least one year's historical migration data previous to the 2007 migration. These stocks originate in 19 tributaries of the Salmon, Grande Ronde and Clearwater Rivers, all tributaries to the Snake River, and are subsequently detected through tag identification and monitored at Lower Granite Dam. Seven wild PIT-tagged runs-at-large of Snake or Upper Columbia River ESU salmon and steelhead were monitored at McNary Dam. Three wild PIT-tagged runs-at-large were monitored at Lower Granite Dam, consisting of the yearling and subyearling Chinook salmon and the steelhead runs. The hatchery-reared PIT-tagged sockeye salmon stock from Redfish Lake was monitored outmigrating through Lower Granite Dam. Passage-indexed stocks (stocks monitored by FPC passage indices) included combined wild and hatchery runs-at-large of subyearling and yearling Chinook, coho, and sockeye salmon, and steelhead forecasted to Rock Island, McNary, John Day, and Bonneville dams.

  2. Monitoring and Evaluation of Smolt Migration in the Columbia Basin, Volume XIV; Evaluation of 2006 Prediction of the Run-Timing of Wild and Hatchery-Reared Salmon and Steelhead at Rock Island, Lower Granite, McNary, John Day and Bonneville Dams using Program Real Time, Technical Report 2006.

    SciTech Connect (OSTI)

    Griswold, Jim

    2007-01-01

    Program RealTime provided monitoring and forecasting of the 2006 inseason outmigrations via the internet for 32 PIT-tagged stocks of wild ESU chinook salmon and steelhead to Lower Granite and/or McNary dams, one PIT-tagged hatchery-reared ESU of sockeye salmon to Lower Granite Dam, and 20 passage-indexed runs-at-large, five each to Rock Island, McNary, John Day, and Bonneville Dams. Twenty-four stocks are of wild yearling chinook salmon which were captured, PIT-tagged, and released at sites above Lower Granite Dam in 2006, and have at least one year's historical migration data previous to the 2006 migration. These stocks originate in drainages of the Salmon, Grande Ronde and Clearwater Rivers, all tributaries to the Snake River, and are subsequently detected through the tag identification and monitored at Lower Granite Dam. In addition, seven wild PIT-tagged runs-at-large of Snake or Upper Columbia River ESU salmon and steelhead were monitored at McNary Dam. Three wild PIT-tagged runs-at-large were monitored at Lower Granite Dam, consisting of the yearling and subyearling chinook salmon and the steelhead trout runs. The hatchery-reared PIT-tagged sockeye salmon stock from Redfish Lake was monitored outmigrating through Lower Granite Dam. Passage-indexed stocks (stocks monitored by FPC passage indices) included combined wild and hatchery runs-at-large of subyearling and yearling chinook, coho, and sockeye salmon, and steelhead trout forecasted to Rock Island, McNary, John Day, and Bonneville Dams.

  3. SIMPLIFIED PREDICTIVE MODELS FOR CO2 SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #4 REDUCED-ORDER METHOD (ROM) BASED MODELS

    SciTech Connect (OSTI)

    Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
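
    For orientation, the generic POD-TPWL relations used in the trajectory piecewise linearization literature can be written as follows; the notation is illustrative and may differ in detail from the formulation implemented with AD-GPRS in this work.

      % Generic POD-TPWL relations (illustrative; details may differ from the report).
      % Residual of the fully implicit test problem at time step n+1:
      %   g(x_{n+1}, x_n, u_{n+1}) = 0
      \begin{align}
        x_{n+1} &\approx x^{\mathrm{tr}}_{i+1}
          - J_{i+1}^{-1}\left[ A_{i+1}\,(x_n - x^{\mathrm{tr}}_{i})
          + B_{i+1}\,(u_{n+1} - u^{\mathrm{tr}}_{i+1}) \right], \\
        J_{i+1} &= \frac{\partial g}{\partial x_{n+1}}, \qquad
        A_{i+1} = \frac{\partial g}{\partial x_{n}}, \qquad
        B_{i+1} = \frac{\partial g}{\partial u_{n+1}}
        \quad \text{(evaluated at the saved training states).}
      \end{align}
      % With POD, states are represented in a low-dimensional basis built from training
      % snapshots, x \approx \Phi \xi, and the linearized system is projected by \Phi^T:
      \begin{equation}
        \xi_{n+1} \approx \xi^{\mathrm{tr}}_{i+1}
          - \left(\Phi^{T} J_{i+1} \Phi\right)^{-1}
            \left[ \Phi^{T} A_{i+1} \Phi\,(\xi_n - \xi^{\mathrm{tr}}_{i})
            + \Phi^{T} B_{i+1}\,(u_{n+1} - u^{\mathrm{tr}}_{i+1}) \right].
      \end{equation}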

  4. I&C Modeling in SPAR Models

    SciTech Connect (OSTI)

    John A. Schroeder

    2012-06-01

    The Standardized Plant Analysis Risk (SPAR) models for the U.S. commercial nuclear power plants currently have very limited instrumentation and control (I&C) modeling [1]. Most of the I&C components in the operating plant SPAR models are related to the reactor protection system. This was identified as a finding during the industry peer review of SPAR models. While the Emergency Safeguard Features (ESF) actuation and control system was incorporated into the Peach Bottom Unit 2 SPAR model in a recent effort [2], various approaches to expending resources on detailed I&C modeling in the other SPAR models are investigated.

  5. Development and pilot testing of modular dynamic thermomechanical pulp mill model to develop energy reduction strategies. Final report

    SciTech Connect (OSTI)

    Coffin, D.W.

    1996-10-01

    With the development of on-line and real-time process simulations, one obtains the ability to predict and control the process; thus, the opportunity exists to improve energy efficiency, decrease material waste, and maintain product quality. Developing this capability was the objective of this research program. A thermomechanical pulp mill was simulated using both a first principles model and a neural network. The models made use of actual process data, and a model that calculated the mass and energy balance of the mill was successfully implemented and run at the mill on an hourly basis. The attempt to develop a model that accurately predicted the quality of the pulp was not successful. It was concluded that the key for a successful implementation of a real-time control model, such as a neural net model, is the availability of on-line sensors that sufficiently characterize the pulp.

  6. The Lagrangian particle dispersion model FLEXPART-WRF VERSION 3.1

    SciTech Connect (OSTI)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, Don; Seibert, P.; Angevine, W. M.; Evan, S.; Dingwell, A.; Fast, Jerome D.; Easter, Richard C.; Pisso, I.; Bukhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need from the modeler community has encouraged new developments in FLEXPART. In this document, we present a version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. Simple procedures on how to run FLEXPART-WRF are presented along with special options and features that differ from its predecessor versions. In addition, test case data, the source code and visualization tools are provided to the reader as supplementary material.

  7. TRANSIENT HEAT TRANSFER MODEL FOR SRS WASTE TANK OPERATIONS

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R

    2007-03-27

    A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on waste temperature during the process of waste mixing and removal for the Type-I Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace existing long-shaft mixer pumps used during waste removal. The model will also be used to provide input to operation planning, which in turn will inform pump run durations needed to maintain temperature requirements within the tank during SMP operation. The analysis model took a parametric approach. A series of modeling analyses was performed to examine how submersible mixer pumps affect tank temperature during waste removal operations in the Type-I tank. The model domain included radioactive decay heat load, two SMPs, and one Submersible Transfer Pump (STP) as heat source terms. The present model was benchmarked against test data obtained from tank measurements to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the measured waste temperatures to within about 10%. Transient modeling calculations for two potential scenarios of sludge mixing and removal operations have been made to estimate transient waste temperatures within a Type-I waste tank. When two 200-HP submersible mixers and 12 active cooling coils are operated continuously at a 100-in. tank level and a 40 C initial temperature for 40 days after the initiation of mixing operations, the waste temperature rises by at most about 9 C in 48 hours. Sensitivity studies for the key operating variables were performed. The sensitivity results showed that the chromate cooling coil system provided the primary cooling mechanism to remove process heat from the tank during operation.
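
    The structure of such a transient heat balance can be illustrated with a lumped-parameter sketch in which decay heat and pump heat compete with cooling-coil removal. All parameter values below are hypothetical placeholders, not SRS tank data.

      # Minimal lumped-parameter transient heat balance in the spirit of the tank model
      # described above:  M*c*dT/dt = Q_decay + Q_pumps - U*A*(T - T_coil).
      # All parameter values are hypothetical placeholders.
      import numpy as np

      M_c = 2.0e9          # waste thermal mass, J/K (hypothetical)
      Q_decay = 3.0e4      # decay heat, W (hypothetical)
      Q_pumps = 2 * 1.5e5  # two mixer pumps, W (hypothetical)
      UA = 2.0e4           # coil heat-transfer conductance, W/K (hypothetical)
      T_coil = 25.0        # coolant temperature, C
      T = 40.0             # initial waste temperature, C

      dt = 600.0                              # 10-minute time step, s
      n_steps = int(48 * 3600 / dt)           # simulate 48 hours
      history = []
      for _ in range(n_steps):
          dTdt = (Q_decay + Q_pumps - UA * (T - T_coil)) / M_c
          T += dTdt * dt                      # explicit Euler update
          history.append(T)

      print(f"temperature after 48 h of mixing: {history[-1]:.1f} C "
            f"(rise of {history[-1] - 40.0:.1f} C)")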

  8. Reformulated Gasoline Complex Model

    Gasoline and Diesel Fuel Update (EIA)

    Refiners Switch to Reformulated Gasoline Complex Model Contents * Summary * Introduction o Table 1. Comparison of Simple Model and Complex Model RFG Per Gallon Requirements * Statutory, Individual Refinery, and Compliance Baselines o Table 2. Statutory Baseline Fuel Compositions * Simple Model * Complex Model o Table 3. Complex Model Variables * Endnotes Related EIA Short-Term Forecast Analysis Products * RFG Simple and Complex Model Spreadsheets * Areas Particpating in the Reformulated Gasoline

  9. A summary of recent refinements to the WAKE dispersion model, a component of the HGSYSTEM/UF{sub 6} model suite

    SciTech Connect (OSTI)

    Yambert, M.W.; Lombardi, D.A.; Goode, W.D. Jr.; Bloom, S.G.

    1998-08-01

    The original WAKE dispersion model, a component of the HGSYSTEM/UF{sub 6} model suite, is based on Shell Research Ltd.`s HGSYSTEM Version 3.0 and was developed by the US Department of Energy for use in estimating downwind dispersion of materials due to accidental releases from gaseous diffusion plant (GDP) process buildings. The model is applicable to scenarios involving both ground-level and elevated releases into building wake cavities of non-reactive plumes that are either neutrally or positively buoyant. Over the 2-year period since its creation, the WAKE model has been used to perform consequence analyses for Safety Analysis Reports (SARs) associated with gaseous diffusion plants in Portsmouth (PORTS), Paducah (PGDP), and Oak Ridge. These applications have identified the need for additional model capabilities (such as the treatment of complex terrain and time-variant releases) not present in the original utilities, which, in turn, has resulted in numerous modifications to these codes as well as the development of additional, stand-alone postprocessing utilities. Consequently, application of the model has become increasingly complex as the number of executable, input, and output files associated with a single model run has steadily grown. In response to these problems, a streamlined version of the WAKE model has been developed that integrates all calculations currently performed by the existing WAKE and the various post-processing utilities. This report summarizes the efforts involved in developing this revised version of the WAKE model.

  10. Stochastic Energy Deployment System (SEDS) World Oil Model (WOM)

    Energy Science and Technology Software Center (OSTI)

    2009-08-07

    The function of the World Oil Market Model (WOMM) is to calculate a world oil price. SEDS will set start and end dates for the forecast period, and a time increment (assumed to be 1 year in the initial version). The WOMM will then randomly select an Annual Energy Outlook (AEO) oil price case and calibrate itself to that case. As it steps through each year, the WOMM will generate a stochastic supply shock to OPEC output and accept a new estimate of U.S. petroleum demand from SEDS. The WOMM will then calculate a new oil market equilibrium for the current year. The world oil price at the new equilibrium will be sent back to SEDS. When the end year is reached, the process will begin again with the selection of a new AEO forecast. Iterations over forecasts will continue until SEDS has completed all its simulation runs.
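
    The calibrate/step/shock/equilibrate loop described above can be sketched as follows; all function names and data structures are hypothetical placeholders rather than actual SEDS or WOMM interfaces.

      # Sketch of the WOMM loop described above. All names and the toy "equilibrium"
      # are hypothetical placeholders, not actual SEDS/WOMM code.
      import random

      def run_world_oil_model(start_year, end_year, aeo_cases, seds_demand, n_runs=3):
          prices = {}
          for run in range(n_runs):
              case = random.choice(aeo_cases)          # randomly select an AEO price case
              for year in range(start_year, end_year + 1):
                  shock = random.gauss(0.0, 0.05)      # stochastic OPEC supply shock
                  demand = seds_demand(year)           # U.S. demand estimate from SEDS
                  # Toy "equilibrium": case baseline adjusted for shock and demand.
                  price = case["price"][year] * (1.0 + shock) * (demand / case["demand"][year])
                  prices[(run, year)] = price          # returned to SEDS each year
          return prices

      years = range(2010, 2013)
      aeo = [{"price": {y: 80.0 + 2 * (y - 2010) for y in years},
              "demand": {y: 19.0 for y in years}}]
      result = run_world_oil_model(2010, 2012, aeo,
                                   seds_demand=lambda y: 18.5 + 0.1 * (y - 2010))
      print(result)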

  11. Multi-Dimensional Modeling of Nova with Realistic Nuclear Physics

    SciTech Connect (OSTI)

    Zingale, M; Hoffman, R D

    2011-01-27

    This contract covered the period from 03/09/2010 to 09/30/2010. Over this period, we adapted the low Mach number hydrodynamics code MAESTRO to perform simulations of novae. A nova is the thermonuclear runaway of an accreted hydrogen layer on the surface of a white dwarf. As the accreted layer grows in mass, the temperature and density at the base increase to the point where hydrogen fusion can begin by the CNO cycle - a burning process that uses carbon, nitrogen, and oxygen to complete the fusion of four hydrogen nuclei into one helium-4 nucleus. At this point, we are running initial models of nova, exploring the details of the convection. In the follow-on contract to this one, we will continue this investigation.

  12. Unsaturated Groundwater and Heat Transport Model

    Energy Science and Technology Software Center (OSTI)

    2008-05-15

    TOUGH2-MP is a massively parallel version of the TOUGH2 Code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. The code runs on computers with parallel architecture or clusters and can be used for applications such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The parallel simulator has achieved orders-of-magnitude improvements in computational time and/or modeling problem size. The parallel simulator uses fully implicit time differencing and solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance. A domain decomposition approach is adopted for multiphase flow simulations with coarse-granularity parallel computation. The current version of TOUGH2-MP includes the following modules: EOS1, EOS2, EOS3, EOS4, EOS5, EOS7, EOS7R, EOS8, EOS9, ECO2N, EWASG, and T2R3D.

  13. Comparison of the Accuracy and Speed of Transient Mobile A/C System Simulation Models: Preprint

    SciTech Connect (OSTI)

    Kiss, T.; Lustbader, J.

    2014-03-01

    The operation of air conditioning (A/C) systems is a significant contributor to the total amount of fuel used by light- and heavy-duty vehicles. Therefore, continued improvement of the efficiency of these mobile A/C systems is important. Numerical simulation has been used to reduce the system development time and to improve the electronic controls, but numerical models that include highly detailed physics run slower than desired for carrying out vehicle-focused drive cycle-based system optimization. Therefore, faster models are needed even if some accuracy is sacrificed. In this study, a validated model with highly detailed physics, the 'Fully-Detailed' model, and two models with different levels of simplification, the 'Quasi-Transient' and the 'Mapped-Component' models, are compared. The Quasi-Transient model applies some simplifications compared to the Fully-Detailed model to allow faster model execution speeds. The Mapped-Component model is similar to the Quasi-Transient model except instead of detailed flow and heat transfer calculations in the heat exchangers, it uses lookup tables created with the Quasi-Transient model. All three models are set up to represent the same physical A/C system and the same electronic controls. Speed and results of the three model versions are compared for steady state and transient operation. Steady state simulated data are also compared to measured data. The results show that the Quasi-Transient and Mapped-Component models ran much faster than the Fully-Detailed model, on the order of 10- and 100-fold, respectively. They also adequately approach the results of the Fully-Detailed model for steady-state operation, and for drive cycle-based efficiency predictions.
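
    The Mapped-Component idea, replacing a detailed component calculation with interpolation over a table precomputed by the more detailed model, can be sketched as follows. The axes and values are hypothetical and not taken from the models described above.

      # Sketch of the Mapped-Component idea: replace a detailed heat-exchanger solver
      # with interpolation over a table precomputed by the more detailed model.
      # Axes and values are hypothetical, not the actual A/C system maps.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      air_flow = np.array([0.05, 0.10, 0.15, 0.20])      # kg/s (hypothetical)
      refrig_flow = np.array([0.01, 0.02, 0.03])         # kg/s (hypothetical)

      # Precomputed evaporator heat transfer rate (kW) from the more detailed model.
      q_map = np.array([[1.1, 1.6, 1.9],
                        [1.8, 2.7, 3.2],
                        [2.3, 3.5, 4.2],
                        [2.6, 4.0, 4.9]])

      evaporator_q = RegularGridInterpolator((air_flow, refrig_flow), q_map)

      # Inside the fast model, one interpolation call replaces the detailed calculation.
      print("Q_evap =", float(evaporator_q([[0.12, 0.025]])[0]), "kW")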

  14. Evaluation of the 1998 Predictions of the Run-Timing of Wild Migrant Yearling Chinook and Water Quality at Multiple Locations on the Snake and Columbia Rivers using CRiSP/RealTime, 1998 Technical Report.

    SciTech Connect (OSTI)

    Beer, W. Nicholas; Hayes, Joshua A.; Shaw, Pamela

    1999-07-21

    Since 1988, wild salmon have been PIT-tagged through monitoring and research programs conducted by the Columbia River fisheries agencies and Tribes. Workers at the University of Washington have used detection data at Lower Granite Dam to generate predictions of arrival distributions for various stocks at the dam. The prediction tool is known as RealTime. In 1996, RealTime predictions were linked to a downstream migration model, CRiSP.1. The composite model, known as CRiSP/RealTime, predicts the arrival distribution and fraction transported at downriver locations.

  15. Restoration of the Potosi Dynamic Model 2010

    SciTech Connect (OSTI)

    Adushita, Yasmin; Leetaru, Hannes

    2014-09-30

    In topical Report DOE/FE0002068-1 [2] technical performance evaluations on the Cambrian Potosi Formation were performed through reservoir modeling. The data included formation tops from mud logs, well logs from the VW1 and the CCS1 wells, structural and stratigraphic information from three dimensional (3D) seismic data, and field data from several waste water injection wells for the Potosi Formation. The intention was for two million tons per annum (MTPA) of CO2 to be injected for 20 years. In this Task the 2010 Potosi heterogeneous model (referred to as the "Potosi Dynamic Model 2010" in this report) was re-run using a new injection scenario: 3.2 MTPA for 30 years. The extent of the Potosi Dynamic Model 2010, however, appeared too small for the new injection target. It was not sufficiently large to accommodate the evolution of the plume. Also, it might have overestimated the injection capacity by overstating the pressure relief afforded by the relatively close proximity between the injector and the infinite-acting boundaries. The new model, Potosi Dynamic Model 2013a, was built by extending the Potosi Dynamic Model 2010 grid to 30 miles x 30 miles (48 km by 48 km), while preserving all property modeling workflows and layering. This model was retained as the base case. Potosi Dynamic Model 2013.a gives an average CO2 injection rate of 1.4 MTPA and a cumulative injection of 43 Mt in 30 years, which corresponds to 45% of the injection target. This implies that, according to this preliminary model, a minimum of three (3) wells could be required to achieve the injection target. The injectivity evaluation of the Potosi formation will be revisited in topical Report 15, during which more data will be integrated in the modeling exercise. A vertical flow performance evaluation could be considered for the succeeding task to determine the appropriate tubing size and the required injection tubing head pressure (THP) and to investigate whether the corresponding well injection rate falls within the tubing erosional velocity limit. After 30 years, the plume extends 15 miles (24 km) in the E-W and 14 miles (22 km) in the N-S directions. After injection is completed, the plume continues to migrate laterally, mainly driven by the remaining pressure gradient. After 100 years post-injection, the plume extends 17 miles (27 km) in the E-W and 15 miles (24 km) in the N-S directions. The increase of reservoir pressure at the end of injection is approximately 370 psia around the injector and gradually decreases away from the well. The reservoir pressure increase is less than 30 psia beyond 14 miles (22 km) away from the injector. The initial reservoir pressure is restored after approximately 20 years post-injection. This result, however, is associated with uncertainties in the boundary conditions, and a sensitivity analysis could be considered for the succeeding tasks. It is important to remember that the respective plume extent and areal pressure increase correspond to an injection of 43 Mt CO2. Should the targeted cumulative injection of 96 Mt be achieved, a much larger plume extent and areal pressure increase could be expected. Re-evaluating the permeability modeling, vugs and heterogeneity distributions, and relative permeability input could be considered for the succeeding Potosi formation evaluations. A simulation using several injectors could also be considered to determine the required number of wells to achieve the injection target while taking into account the pressure interference.

  16. Sandia Modeling Tool Webinar

    Broader source: Energy.gov [DOE]

    Webinar attendees will learn what collaborative, stakeholder-driven modeling is, how the models developed have been and could be used, and how specifically this process and resulting models might...

  17. Instruction-level performance modeling and characterization of multimedia applications

    SciTech Connect (OSTI)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction-level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor`s architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight of future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI{sub 0}, the CPI without memory effects, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization, and empirical/analytical modeling.
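
    A textbook-style version of such a decomposition separates the measured CPI into a memory-free component and a memory-stall component estimated from counter values; the sketch below is only an illustration and not necessarily the formulas derived by the authors.

      # Textbook-style CPI decomposition from hardware-counter values (illustrative;
      # not necessarily the formulas derived in the paper). Counter values are made up.
      counters = {
          "cycles":        9.2e9,
          "instructions":  5.0e9,
          "l2_misses":     4.0e7,   # misses that go to main memory (hypothetical)
      }
      MISS_PENALTY_CYCLES = 60      # assumed average memory-access penalty

      cpi_measured = counters["cycles"] / counters["instructions"]
      memory_stall_cpi = counters["l2_misses"] * MISS_PENALTY_CYCLES / counters["instructions"]
      cpi0 = cpi_measured - memory_stall_cpi   # CPI with the memory effect removed

      print(f"CPI = {cpi_measured:.2f}, memory-stall CPI = {memory_stall_cpi:.2f}, "
            f"CPI_0 = {cpi0:.2f}")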

  18. District-heating strategy model: computer programmer's manual

    SciTech Connect (OSTI)

    Kuzanek, J.F.

    1982-05-01

    The US Department of Housing and Urban Development (HUD) and the US Department of Energy (DOE) cosponsor a program aimed at increasing the number of district heating and cooling (DHC) systems. Such systems can reduce the amount and costs of fuels used to heat and cool buildings in a district. Twenty-eight communities have agreed to aid HUD in a national feasibility assessment of DHC systems. The HUD/DOE program entails technical assistance by Argonne National Laboratory and Oak Ridge National Laboratory. The assistance includes a computer program, called the district heating strategy model (DHSM), that performs preliminary calculations to analyze potential DHC systems. This report describes the general capabilities of the DHSM, provides historical background on its development, and explains the computer installation and operation of the model - including the data file structures and the options. Sample problems illustrate the structure of the various input data files and the interactive computer-output listings. The report is written primarily for computer programmers responsible for installing the model on their computer systems, entering data, running the model, and implementing local modifications to the code.

  19. CAMPUS ENERGY MODEL

    Energy Science and Technology Software Center (OSTI)

    003655IBMPC00 Campus Energy Model for Control and Performance Validation https://github.com/NREL/CampusEnergyModeling/releases/tag/v0.2.1

  20. Fuel Model | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Fuels Model This model informs analyses of the availability of transportation fuel in the event the fuel supply chain is disrupted. The portion of the fuel supply system...

  1. Multiscale Subsurface Biogeochemical Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Biogeochemical Modeling Multiscale Subsurface Biogeochemical Modeling Simulation of flow inside an experimental packed bed, performed on Franklin Key...

  2. Building Energy Modeling Library

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building Energy Modeling (BEM) Library * Define and develop a best-practices BEM knowledge ... Links within modeling process for informing design Terms Methods Project Phase Key ...

  3. Single-Column Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of the Atmospheric Radiation Measurement (ARM) Program. The model contains a full set of modern GCM parameterizations of subgrid physical processes. To force the model, the...

  4. Modeling and Analysis

    Broader source: Energy.gov [DOE]

    DOE modeling and analysis activities focus on reducing uncertainties and improving transparency in photovoltaics (PV) and concentrating solar power (CSP) performance modeling. The overall goal of...

  5. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (OSTI)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite-size sources in addition to dipolar source fields, and a low-induction-number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated with the software. The GUI also allows for plotting of the output.

  6. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    SciTech Connect (OSTI)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  7. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    SciTech Connect (OSTI)

    Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model that describes the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operating range, while nonlinear models lead to nonlinear control implementations that are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2, and H2O, as well as the disturbance input I (current load), are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we conclude that the nonlinear cancellation technique can be used to represent a nonlinear fuel cell model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operating range.
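
    The input-transformation idea can be shown on a toy model that is nonlinear in the physical input u but linear in a virtual input v = x u; this is a minimal sketch under made-up dynamics, not the fuel cell stack model itself.

      # Toy illustration of nonlinear cancellation: a model that is nonlinear in the
      # physical input u becomes linear in a virtual input v = x * u, and u can be
      # recovered from v afterwards. This is not the fuel cell stack model itself.
      import numpy as np

      a, b = 0.8, 0.5
      dt, steps = 0.01, 500
      x_nl = x_lin = 1.0

      for k in range(steps):
          u = 0.2 * np.sin(0.05 * k)                # physical input
          v = x_lin * u                             # virtual input (nonlinear relation to u)
          u_recovered = v / x_lin                   # invert the mapping (valid while x != 0)
          x_nl += dt * (-a * x_nl + b * x_nl * u)   # original nonlinear model
          x_lin += dt * (-a * x_lin + b * v)        # linear-in-v form of the same model

      print(f"state difference after {steps} steps: {abs(x_nl - x_lin):.2e}; "
            f"last recovered-input error: {abs(u_recovered - u):.2e}")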

  8. modeling-sediment-html

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling of Sediment Transport and Porous Medium Response Under Current and Waves

  9. Biomass Scenario Model

    SciTech Connect (OSTI)

    2015-09-01

    The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.

  10. Dynamic cable analysis models

    SciTech Connect (OSTI)

    Palo, P.A.; Meggitt, D.J.; Nordell, W.J.

    1983-05-01

    This paper presents a summary of the development and validation of undersea cable dynamics computer models by the Naval Civil Engineering Laboratory (NCEL) under the sponsorship of the Naval Facilities Engineering Command. These models allow for the analysis of both small displacement (strumming) and large displacement (static and dynamic) deformations of arbitrarily configured cable structures. All of the large displacement models described in this paper are available to the public. This paper does not emphasize the theoretical development of the models (this information is available in other references) but emphasizes the various features of the models, the comparisons between model output and experimental data, and applications for which the models have been used.

  11. Hydrologic Modeling Capabilities

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Understanding complex hydrologic systems requires the ability to develop, utilize, and interpret both numerical and analytical models. The Defense Waste Management Programs organization has both the experience and the technical knowledge to use and develop Earth systems models. Hydrological Modeling: Models are simplified representations of reality and, by design, do not capture every detail of it. Mathematical and numerical models can be used to rigorously test geologic and hydrologic assumptions, determine

  12. Microsoft Word - Modeling Summary

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Estimated Onsite worker and offsite public dose Modeling has been done to estimate onsite worker and offsite public dose that may have resulted from the February 14, 2014, event. The results of the modeling indicate that all potential doses were well below the applicable regulatory limits (see results below). The modeling results are consistent with actual worker bioassay results. For modeling data see: (http://www.wipp.energy.gov/Special/Modeling Results.pdf) Estimated Dose Applicable

  13. Evaluation of Forecasted Southeast Pacific Stratocumulus in the NCAR, GFDL and ECMWF Models

    SciTech Connect (OSTI)

    Hannay, C; Williamson, D L; Hack, J J; Kiehl, J T; Olson, J G; Klein, S A; Bretherton, C S; Köhler, M

    2008-01-24

    We examine forecasts of Southeast Pacific stratocumulus at 20S and 85W during the East Pacific Investigation of Climate (EPIC) cruise of October 2001 with the ECMWF model, the Atmospheric Model (AM) from GFDL, the Community Atmosphere Model (CAM) from NCAR, and the CAM with a revised atmospheric boundary layer formulation from the University of Washington (CAM-UW). The forecasts are initialized from ECMWF analyses and each model is run for 3 days to determine the differences with the EPIC field data. Observations during the EPIC cruise show a stable and well-mixed boundary layer under a sharp inversion. The inversion height and the cloud layer have a strong and regular diurnal cycle. A key problem common to the four models is that the forecasted planetary boundary layer (PBL) height is too low when compared to EPIC observations. All the models produce a strong diurnal cycle in the Liquid Water Path (LWP) but there are large differences in the amplitude and the phase compared to the EPIC observations. This, in turn, affects the radiative fluxes at the surface. There is a large spread in the surface energy budget terms amongst the models and large discrepancies with observational estimates. Single Column Model (SCM) experiments with the CAM show that the vertical pressure velocity has a large impact on the PBL height and LWP. Both the amplitude of the vertical pressure velocity field and its vertical structure play a significant role in the collapse or the maintenance of the PBL.

  14. High Resolution Atmospheric Modeling for Wind Energy Applications

    SciTech Connect (OSTI)

    Simpson, M; Bulaevskaya, V; Glascoe, L; Singer, M

    2010-03-18

    The ability of the WRF atmospheric model to forecast wind speed over the Nysted wind park was investigated as a function of time. It was found that in the time period we considered (August 1-19, 2008), the model is able to predict wind speeds reasonably accurately for 48 hours ahead, but that its forecast skill deteriorates rapidly after 48 hours. In addition, a preliminary analysis was carried out to investigate the impact of vertical grid resolution on the forecast skill. Our preliminary finding is that increasing vertical grid resolution does not have a significant impact on the forecast skill of the WRF model over Nysted wind park during the period we considered. Additional simulations during this period, as well as during other time periods, will be run in order to validate the results presented here. Wind speed is a difficult parameter to forecast due to the interaction of large- and small-scale forcing. To accurately forecast the wind speed at a given location, the model must correctly forecast the movement and strength of synoptic systems, as well as the local influence of topography and land use on the wind speed. For example, small deviations in the forecast track or strength of a large-scale low pressure system can result in significant forecast errors for local wind speeds. The purpose of this study is to provide a preliminary baseline of high-resolution limited-area model forecast performance against observations from the Nysted wind park. Validating the numerical weather prediction model performance for past forecasts will give a reasonable measure of expected forecast skill over the Nysted wind park. Also, since the Nysted Wind Park is over water and some distance from the influence of terrain, the impact of fine vertical grid spacing on wind speed forecast skill will also be investigated.
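
    A sketch of the verification step implied above, assuming matched forecast/observation pairs: compute wind-speed RMSE as a function of forecast lead time. The data here are synthetic; the 48-hour degradation is the study's finding, not something this toy example reproduces.

    ```python
    # RMSE-by-lead-time verification sketch with synthetic wind data (not WRF or Nysted output).
    import numpy as np

    rng = np.random.default_rng(2)
    n_forecasts, lead_hours = 30, np.arange(6, 73, 6)
    obs = 8.0 + rng.normal(0.0, 1.0, size=(n_forecasts, lead_hours.size))   # "observed" wind, m/s
    err_growth = 0.03 * lead_hours                                          # error std grows with lead time
    fcst = obs + rng.normal(0.0, 1.0, size=obs.shape) * err_growth

    rmse = np.sqrt(np.mean((fcst - obs) ** 2, axis=0))   # one RMSE value per lead time
    for lh, e in zip(lead_hours, rmse):
        print(f"+{lh:2d} h: RMSE = {e:.2f} m/s")
    ```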

  15. Ensemble Atmospheric Dispersion Modeling

    SciTech Connect (OSTI)

    Addis, R.P.

    2002-06-24

    Prognostic atmospheric dispersion models are used to generate consequence assessments, which assist decision-makers in the event of a release from a nuclear facility. Differences in the forecast wind fields generated by various meteorological agencies, differences in the transport and diffusion models, as well as differences in the way these models treat the release source term, result in differences in the resulting plumes. Even dispersion models using the same wind fields may produce substantially different plumes. This talk will address how ensemble techniques may be used to enable atmospheric modelers to provide decision-makers with a more realistic understanding of how both the atmosphere and the models behave.
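
    A minimal sketch of one common ensemble product of the kind described above: given concentration fields from several dispersion model/wind field combinations, report the ensemble mean and the fraction of members exceeding a threshold at each grid cell. The arrays are toy data, not output from any of the models discussed.

    ```python
    # Ensemble mean and exceedance-agreement fields from toy member concentration grids.
    import numpy as np

    rng = np.random.default_rng(1)
    members = rng.lognormal(mean=0.0, sigma=1.0, size=(5, 4, 4))   # 5 members on a 4x4 grid

    ensemble_mean = members.mean(axis=0)
    threshold = 1.5
    agreement = (members > threshold).mean(axis=0)   # fraction of members above threshold

    print("ensemble-mean concentration:\n", np.round(ensemble_mean, 2))
    print("fraction of members exceeding threshold:\n", np.round(agreement, 2))
    ```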

  16. Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up

    SciTech Connect (OSTI)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2009-12-01

    PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the location of the fuel elements in the reactor is not deterministically known. Instead, when determining operating parameters, the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant for other simulations of pebble bed reactors, such as the positions of the pebbles in the reactor, packing fraction change in an earthquake, and velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted both in speeding up the single-processor version and in the creation of a new parallel version of PEBBLES. Both the single-processor version and the parallel running capability of the PEBBLES code have improved since the start of the fiscal year. The hybrid MPI/OpenMP PEBBLES version was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors that share memory and a cluster of nodes that are networked together. The OpenMP portions use the Open Multi-Processing shared memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall-clock speed-ups for simulating an NGNP-600 sized reactor. The single-processor version runs 1.5 times faster compared to the single-processor version at the beginning of the fiscal year. This speedup is primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall-clock speed-up of 22 times compared to the current single-processor version. When using 88 processors, a speed-up of 23 times is achieved. This speedup and other improvements of PEBBLES combine to make PEBBLES more capable and more useful for simulation of a pebble bed reactor. This report details the implementation and effects of the speedup work done over the course of the fiscal year.
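
    The reported wall-clock figures imply the following parallel efficiencies (simple bookkeeping added here; the 1.5x, 22x, and 23x numbers come from the abstract):

    ```python
    # Parallel-efficiency bookkeeping from the reported speed-up figures.
    serial_speedup = 1.5                      # new serial version vs. old serial version
    cases = {64: 22.0, 88: 23.0}              # processors -> speed-up vs. current serial version

    for procs, speedup in cases.items():
        efficiency = speedup / procs          # fraction of ideal linear scaling
        print(f"{procs:3d} procs: speed-up {speedup:4.1f}x, parallel efficiency {efficiency:.0%}")
    # Going from 64 to 88 processors buys little extra speed-up, i.e. strong scaling saturates.
    ```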

  17. Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up

    SciTech Connect (OSTI)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2009-09-01

    PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the location of the fuel elements in the reactor is not deterministically known. Instead, when determining operating parameters, the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant for other simulations of pebble bed reactors, such as the positions of the pebbles in the reactor, packing fraction change in an earthquake, and velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted both in speeding up the single-processor version and in the creation of a new parallel version of PEBBLES. Both the single-processor version and the parallel running capability of the PEBBLES code have improved since the start of the fiscal year. The hybrid MPI/OpenMP PEBBLES version was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors that share memory and a cluster of nodes that are networked together. The OpenMP portions use the Open Multi-Processing shared memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall-clock speed-ups for simulating an NGNP-600 sized reactor. The single-processor version runs 1.5 times faster compared to the single-processor version at the beginning of the fiscal year. This speedup is primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall-clock speed-up of 22 times compared to the current single-processor version. When using 88 processors, a speed-up of 23 times is achieved. This speedup and other improvements of PEBBLES combine to make PEBBLES more capable and more useful for simulation of a pebble bed reactor. This report details the implementation and effects of the speedup work done over the course of the fiscal year.

  18. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    SciTech Connect (OSTI)

    Stoll, Brady; Brinkman, Gregory; Townsend, Aaron; Bloom, Aaron

    2016-01-01

    Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics when a four-hour-ahead commitment step is included before the dispatch step, and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, with a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0.9% reduction respectively when using linear programming compared to mixed-integer programming and 0.07% and 0.6% reduction, respectively, in the hourly dispatch compared to sub-hourly dispatch.

  19. System Advisor Model

    Energy Science and Technology Software Center (OSTI)

    2010-03-01

    The System Advisor Model (SAM) is a performance and economic model designed to facilitate decision making for people involved in the renewable energy industry, ranging from project managers and engineers to incentive program designers, technology developers, and researchers.

  20. Enterprise Risk Management Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Enterprise Risk Management (ERM) Model is a system used to analyze the cost and benefit of addressing risks inherent in the work performed by the Department of Energy....

  1. Multifamily Envelope Leakage Model

    Energy Savers [EERE]

    Multifamily Envelope Leakage Model (Steven Winter Associates, Inc., 2013). Acknowledgements: sponsored by the Department of Energy's Building America Program. Outline/Agenda: introduce multifamily air leakage testing; statement of the problem; steps taken for a solution; model results; applying the model; benefits of the model.

  2. Modeling & Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A view upwind of SWIS' aerosol-generating system. Sandia Wake-Imaging System Successfully Deployed at Scaled Wind Farm Technology Facility.

  3. Estimation of k-e parameters using surrogate models and jet-in-crossflow data.

    SciTech Connect (OSTI)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan; Dechant, Lawrence

    2015-02-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-{epsilon} parameters (C{sub {mu}}, C{sub {epsilon}2}, C{sub {epsilon}1}) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.
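
    A minimal sketch of the calibration workflow described above: replace the expensive simulator with a cheap surrogate, then use Metropolis MCMC to turn experimental data into a posterior PDF for a model parameter. Everything in the sketch (the quadratic surrogate, the synthetic data, the prior bounds) is illustrative and is not the paper's k-{epsilon} jet-in-crossflow setup.

    ```python
    # Surrogate-based Bayesian calibration sketch with random-walk Metropolis sampling.
    import numpy as np

    rng = np.random.default_rng(0)

    def surrogate(theta, x):
        # stand-in response surface fitted to training runs of the expensive model
        return theta * x**2 + 0.5 * x

    x_obs = np.linspace(0.0, 1.0, 20)
    y_obs = surrogate(1.3, x_obs) + rng.normal(0.0, 0.05, x_obs.size)  # synthetic "data", true theta = 1.3
    sigma = 0.05

    def log_post(theta):
        if not (0.0 < theta < 5.0):              # flat prior on a "well-behaved" region
            return -np.inf
        resid = y_obs - surrogate(theta, x_obs)
        return -0.5 * np.sum((resid / sigma) ** 2)

    samples, theta = [], 2.5
    lp = log_post(theta)
    for _ in range(20000):                        # random-walk Metropolis
        prop = theta + rng.normal(0.0, 0.1)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    post = np.array(samples[5000:])               # drop burn-in
    print(f"posterior mean {post.mean():.3f}, std {post.std():.3f}")  # a PDF summary, not a point estimate
    ```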

  4. Model Fire Protection Program

    Broader source: Energy.gov [DOE]

    To facilitate conformance with its fire safety directives and the implementation of a comprehensive fire protection program, DOE has developed a number of "model" program documents. These include a comprehensive model fire protection program, model fire hazards analyses and assessments, fire protection system inspection and testing procedures, and related material.

  5. IR DIAL performance modeling

    SciTech Connect (OSTI)

    Sharlemann, E.T.

    1994-07-01

    We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.

  6. Transportation Systems Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Overview of TSM: Transportation systems modeling research at TRACC uses the TRANSIMS (Transportation Analysis SIMulation System) traffic micro simulation code developed by the U.S. Department of Transportation (USDOT). The TRANSIMS code represents the latest generation of traffic simulation codes developed jointly under multiyear programs by USDOT, the

  7. CONTENT MODEL HOW-TO

    Energy Science and Technology Software Center (OSTI)

    003241MLTPL00 Content Model Guidelines https://github.com/usgin/usginspecs/wiki/Content-Model-Guidelines

  8. Collider Detector at Fermilab (CDF): Data from Standard Model and Supersymmetric Higgs Bosons Research of the Higgs Group

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    The Collider Detector at Fermilab (CDF) is a Tevatron experiment at Fermilab. The Tevatron, a powerful particle accelerator, accelerates protons and antiprotons close to the speed of light, and then makes them collide head-on inside the CDF detector. The CDF detector is used to study the products of such collisions. The CDF Physics Group at Fermilab is organized into six working groups, each with a specific focus. The Higgs group searches for Standard Model and Supersymmetric Higgs bosons. Their public web page makes data and numerous figures available from both CDF Runs I and II.

  9. Hurricanes in an Aquaplanet World: Implications of the Impacts of External Forcing and Model Horizontal Resolution

    SciTech Connect (OSTI)

    Li, Fuyu; Collins, William D.; Wehner, Michael F.; Leung, Lai-Yung R.

    2013-06-02

    High-resolution climate models have been shown to improve the statistics of tropical storms and hurricanes compared to low-resolution models. The impact of increasing horizontal resolution in the tropical storm simulation is investigated exclusively using a series of Atmospheric Global Climate Model (AGCM) runs with idealized aquaplanet steady-state boundary conditions and a fixed operational storm-tracking algorithm. The results show that increasing horizontal resolution helps to detect more hurricanes, simulate stronger extreme rainfall, and better emulate storm structures in the models. However, increasing model resolution does not necessarily produce stronger hurricanes in terms of maximum wind speed, minimum sea level pressure, and mean precipitation, as the increased number of storms simulated by high-resolution models is mainly associated with weaker storms. The spatial scale at which the analyses are conducted appears to exert more important control on these meteorological statistics than the horizontal resolution of the model grid. When the simulations are analyzed on common low-resolution grids, the statistics of the hurricanes, particularly the hurricane counts, show reduced sensitivity to the horizontal grid resolution and signs of scale invariance.

  10. Direct pore-level modeling of incompressible fluid flow in porous media

    SciTech Connect (OSTI)

    Ovaysi, Saeed; Piri, Mohammad

    2010-09-20

    We present a dynamic particle-based model for direct pore-level modeling of incompressible viscous fluid flow in disordered porous media. The model is capable of simulating flow directly in three-dimensional high-resolution micro-CT images of rock samples. It is based on the moving particle semi-implicit (MPS) method. We modify this technique in order to improve its stability for flow in porous media problems. Using the micro-CT image of a rock sample, the entire medium, i.e., solid and fluid, is discretized into particles. The incompressible Navier-Stokes equations are then solved for each particle using the MPS summations. The model handles highly irregular fluid-solid boundaries effectively. An algorithm to split and merge fluid particles is also introduced. To handle the computational load, we present a parallel version of the model that runs on distributed memory computer clusters. The accuracy of the model is validated against the analytical, numerical, and experimental data available in the literature. The validated model is then used to simulate both unsteady- and steady-state flow of an incompressible fluid directly in a representative elementary volume (REV) size micro-CT image of a naturally-occurring sandstone with 3.398 {mu}m resolution. We analyze the quality and consistency of the predicted flow behavior and calculate absolute permeability using the steady-state flow rate.
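
    The last step mentioned above, turning a simulated steady-state flow rate into an absolute permeability, is Darcy's law rearranged; the numbers below are placeholders, not values from the paper.

    ```python
    # Absolute permeability from a steady-state flow rate via Darcy's law (placeholder inputs).
    def darcy_permeability(Q, mu, L, A, dP):
        """k = Q * mu * L / (A * dP); SI units (m^3/s, Pa*s, m, m^2, Pa) -> k in m^2."""
        return Q * mu * L / (A * dP)

    k = darcy_permeability(Q=1.0e-12, mu=1.0e-3, L=6.8e-4, A=4.6e-7, dP=10.0)
    print(f"k = {k:.3e} m^2  ({k / 9.869e-13:.3f} darcy)")
    ```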

  11. UZ Colloid Transport Model

    SciTech Connect (OSTI)

    M. McGraw

    2000-04-13

    The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use of a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provision of ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); and (3) provision of a basis for development of an abstracted model for use in PA calculations.

  12. SHOCK INITIATION EXPERIMENTS ON THE TATB BASED EXPLOSIVE RX-03-GO WITH IGNITION AND GROWTH MODELING

    SciTech Connect (OSTI)

    Vandersall, K S; Garcia, F; Tarver, C M

    2009-06-23

    Shock initiation experiments on the TATB based explosive RX-03-GO (92.5% TATB, 7.5% Cytop A by weight) were performed to obtain in-situ pressure gauge data, characterize the run-distance-to-detonation behavior, and calculate Ignition and Growth modeling parameters. A 101 mm diameter propellant driven gas gun was utilized to initiate the explosive sample with manganin piezoresistive pressure gauge packages placed between sample slices. The RX-03-GO formulation utilized is similar to that of LX-17 (92.5% TATB, 7.5% Kel-f by weight) with the notable differences of a new binder material and TATB that has been dissolved and recrystallized in order to improve the purity and morphology. The shock sensitivity will be compared with that of prior data on LX-17 and other TATB formulations. Ignition and Growth modeling parameters were obtained with a reasonable fit to the experimental data.

  13. Translating Non-Trivial Algorithms from the Circuit Model to the Measurement

    SciTech Connect (OSTI)

    Smith IV, Amos M; Alsing, Paul; Lott, Capt. Gordon; Fanto, Michael

    2015-01-01

    We provide a set of prescriptions for implementing a circuit model algorithm as measurement-based quantum computing via a large discrete cluster state constructed sequentially from qubits implemented as single photons. We describe a large optical discrete graph state capable of searching logical 4- and 8-element lists as an example. To do so we have developed several prescriptions based on analytic evaluation of the evolution of discrete cluster states and graph state equations. We describe the cluster state as a sequence of repeated entanglement and measurement steps using a small number of single photons for each step. These prescriptions can be generalized to implement any logical circuit model operation with appropriate single photon measurements and feed-forward error corrections. Such a cluster state is not guaranteed to be optimal (i.e. minimum number of photons, measurements, run time).

  14. Foam process models.

    SciTech Connect (OSTI)

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A.; Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

    In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses an homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
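
    The driver of self-expansion described above is the prescribed change of density with time. As an illustration only (not the actual SIERRA/ARIA density model), the sketch below relaxes the density toward a final foamed density at an Arrhenius-style, temperature-dependent rate; all parameter values are invented.

    ```python
    # Illustrative time/temperature-dependent density decay driving foam expansion (hypothetical parameters).
    import math

    def density_rate(rho, T, rho_final=60.0, A=5.0e3, Ea_over_R=4.0e3):
        k = A * math.exp(-Ea_over_R / T)          # faster expansion at higher temperature
        return -k * (rho - rho_final)

    rho, T, dt = 1000.0, 320.0, 0.1               # kg/m^3, K, s (all hypothetical)
    for step in range(600):
        rho += dt * density_rate(rho, T)          # explicit Euler integration of the density history
    print(f"density after {600 * dt:.0f} s: {rho:.1f} kg/m^3")
    ```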

  15. Research and evaluation of biomass resources/conversion/utilization systems (market/experimental analysis for development of a data base for a fuels from biomass model). Quarterly technical progress report, February 1, 1980-April 30, 1980

    SciTech Connect (OSTI)

    Ahn, Y.K.; Chen, Y.C.; Chen, H.T.; Helm, R.W.; Nelson, E.T.; Shields, K.J.

    1980-01-01

    The project will result in two distinct products: (1) a biomass allocation model, which will serve as a tool for the energy planner; and (2) experimental data to help compare and contrast the behavior of a large number of biomass materials in thermochemical environments. Based on information in the literature, values have been developed for regional biomass costs and availabilities and for fuel costs and demands. This data is now stored in data banks and may be updated as better data become available. Seventeen biomass materials have been run on the small TGA and the results partially analyzed. Ash analysis has been performed on 60 biomass materials. The Effluent Gas Analyzer with its associated gas chromatographs has been made operational and some runs have been carried out. Using a computerized program for developing product costs, parametric studies on all but 1 of the 14 process configurations being considered have been performed. Background economic data for all the configurations have been developed. Models to simulate biomass gasification in entrained and fixed beds have been developed using models previously used for coal gasification. Runs have been carried out in the fluidized and fixed bed reactor modes using a variety of biomass materials in atmospheres of steam, O{sub 2}, and air. Checkout of the system continues using fabricated manufacturing cost and efficiency data. A user's manual has been written.

  16. Ventilation Model Report

    SciTech Connect (OSTI)

    V. Chipman; J. Case

    2002-12-20

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as output from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To assess the impacts of moisture on the ventilation efficiency.
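
    The bookkeeping described above is simple: the ventilation heat-removal fraction and the wall heat fraction sum to one at any time and location. The decay-heat and removal numbers in the sketch below are hypothetical.

    ```python
    # Wall heat fraction = 1 - ventilation heat-removal fraction (hypothetical per-meter heat rates).
    def wall_heat_fraction(q_removed_by_air, q_decay):
        """Fraction of decay heat conducted into the rock, handed to post-closure thermal models."""
        removal_fraction = q_removed_by_air / q_decay
        return 1.0 - removal_fraction

    q_decay = 1.45e3      # W per meter of drift from radionuclide decay (hypothetical)
    q_air = 1.25e3        # W per meter carried away by the ventilation air (hypothetical)
    print(f"wall heat fraction = {wall_heat_fraction(q_air, q_decay):.2f}")
    ```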

  17. Particle-in-cell modeling for MJ scale dense plasma focus with varied anode shape

    SciTech Connect (OSTI)

    Link, A.; Halvorson, C.; Schmidt, A.; Hagen, E. C.; Rose, D. V.; Welch, D. R.

    2014-12-15

    Megajoule scale dense plasma focus (DPF) Z-pinches with deuterium gas fill are compact devices capable of producing 10{sup 12} neutrons per shot but past predictive models of large-scale DPF have not included kinetic effects such as ion beam formation or anomalous resistivity. We report on progress of developing a predictive DPF model by extending our 2D axisymmetric collisional kinetic particle-in-cell (PIC) simulations from the 4 kJ, 200 kA LLNL DPF to 1 MJ, 2 MA Gemini DPF using the PIC code LSP. These new simulations incorporate electrodes, an external pulsed-power driver circuit, and model the plasma from insulator lift-off through the pinch phase. To accommodate the vast range of relevant spatial and temporal scales involved in the Gemini DPF within the available computational resources, the simulations were performed using a new hybrid fluid-to-kinetic model. This new approach allows single simulations to begin in an electron/ion fluid mode from insulator lift-off through the 5-6 {mu}s run-down of the 50+ cm anode, then transition to a fully kinetic PIC description during the run-in phase, when the current sheath is 2-3 mm from the central axis of the anode. Simulations are advanced through the final pinch phase using an adaptive variable time-step to capture the fs and sub-mm scales of the kinetic instabilities involved in the ion beam formation and neutron production. Validation assessments are being performed using a variety of different anode shapes, comparing against experimental measurements of neutron yield, neutron anisotropy and ion beam production.

  18. Response of precipitation extremes to idealized global warming in an aqua-planet climate model: Towards robust projection across different horizontal resolutions

    SciTech Connect (OSTI)

    Li, F.; Collins, W.D.; Wehner, M.F.; Williamson, D.L.; Olson, J.G.

    2011-04-15

    Current climate models produce quite heterogeneous projections for the responses of precipitation extremes to future climate change. To help understand the range of projections from multimodel ensembles, a series of idealized 'aquaplanet' Atmospheric General Circulation Model (AGCM) runs have been performed with the Community Atmosphere Model CAM3. These runs have been analysed to identify the effects of horizontal resolution on precipitation extreme projections under two simple global warming scenarios. We adopt the aquaplanet framework for our simulations to remove any sensitivity to the spatial resolution of external inputs and to focus on the roles of model physics and dynamics. Results show that a uniform increase of sea surface temperature (SST) and an increase of the low-to-high latitude SST gradient both lead to increases in precipitation and precipitation extremes at most latitudes. The perturbed SSTs generally have stronger impacts on precipitation extremes than on mean precipitation. Horizontal model resolution strongly affects the global warming signals in extreme precipitation in tropical and subtropical regions but not in high latitude regions. This study illustrates that the effects of horizontal resolution have to be taken into account to develop more robust projections of precipitation extremes.

  19. Evaluation of INL Supplied MOOSE/OSPREY Model: Modeling Water Adsorption on Type 3A Molecular Sieve

    SciTech Connect (OSTI)

    Pompilio, L. M.; DePaoli, D. W.; Spencer, B. B.

    2014-08-29

    The purpose of this study was to evaluate Idaho National Laboratory's Multiphysics Object-Oriented Simulation Environment (MOOSE) software in modeling the adsorption of water onto type 3A molecular sieve (3AMS). MOOSE can be thought of as a computing framework within which applications modeling specific coupled phenomena can be developed and run. The application titled Off-gas SeParation and REcoverY (OSPREY) has been developed to model gas sorption in packed columns. The sorbate breakthrough curve calculated by MOOSE/OSPREY was compared to results previously obtained in the deep bed hydration tests conducted at Oak Ridge National Laboratory. The coding framework permits selection of various options, when they exist, for modeling a process. For example, the OSPREY module includes options to model the adsorption equilibrium with a Langmuir model or a generalized statistical thermodynamic adsorption (GSTA) model. The vapor-solid equilibria and the operating conditions of the process (e.g., gas phase concentration) are required to calculate the concentration gradient driving the mass transfer between phases. Both the Langmuir and GSTA models were tested in this evaluation. Input variables were either known from experimental conditions, or were available (e.g., density) or were estimated (e.g., thermal conductivity of sorbent) from the literature. Variables were considered independent of time, i.e., rather than having a mass transfer coefficient that varied with time or position in the bed, the parameter was set to remain constant. The calculated results did not coincide with data from laboratory tests. The model accurately estimated the number of bed volumes processed for the given operating parameters, but breakthrough times were not accurately predicted, varying 50% or more from the data. The shape of the breakthrough curves also differed from the experimental data, indicating a much wider sorption band. Model modifications are needed to improve its utility and predictive capability. Recommended improvements include: greater flexibility for input of mass transfer parameters, time-variable gas inlet concentration, direct output of loading and temperature profiles along the bed, and capability to conduct simulations of beds in series.
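
    A hedged sketch of the two equilibrium options named above. The Langmuir form is standard; the GSTA isotherm is written only schematically as a multi-site ratio of polynomials, and none of the parameter values below are the 3A molecular sieve values used in the report.

    ```python
    # Illustrative adsorption-equilibrium curves: Langmuir and a schematic multi-site GSTA form.
    import numpy as np

    def langmuir_loading(p, q_max, b):
        """Equilibrium water loading q(p) = q_max * b*p / (1 + b*p)."""
        return q_max * b * p / (1.0 + b * p)

    def gsta_loading(p, q_max, K):
        """Schematic n-site GSTA form: q = (q_max/n) * sum(m*K_m*p^m) / (1 + sum(K_m*p^m))."""
        n = len(K)
        num = sum((m + 1) * K[m] * p ** (m + 1) for m in range(n))
        den = 1.0 + sum(K[m] * p ** (m + 1) for m in range(n))
        return q_max / n * num / den

    p = np.linspace(0.0, 2.0, 5)          # water vapor partial pressure, kPa (hypothetical)
    print(langmuir_loading(p, q_max=0.20, b=5.0))   # kg water per kg sorbent (hypothetical)
    print(gsta_loading(p, q_max=0.20, K=[4.0, 1.0]))
    ```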

  20. Modeling of Carbon Migration During JET Injection Experiments

    SciTech Connect (OSTI)

    Strachan, J. D.; Likonen, J.; Coad, P.; Rubel, M.; Widdowson, A.; Airila, M.; Andrew, P.; Brezinsek, S.; Corrigan, G.; Esser, H. G.; Jachmich, S.; Kallenbach, A.; Kirschner, A.; Kreter, A.; Matthews, G. F.; Philipps, V.; Pitts, R. A.; Spence, J.; Stamp, M.; Wiesen, S.

    2008-10-15

    JET has performed two dedicated carbon migration experiments on the final run day of separate campaigns (2001 and 2004) using {sup 13}CH{sub 4} methane injected into repeated discharges. The EDGE2D/NIMBUS code modelled the carbon migration in both experiments. This paper describes this modelling and identifies a number of important migration pathways: (1) deposition and erosion near the injection location, (2) migration through the main chamber SOL, (3) migration through the private flux region aided by E x B drifts, and (4) neutral migration originating near the strike points. In H-Mode, type I ELMs are calculated to influence the migration by enhancing erosion during the ELM peak and increasing the long-range migration immediately following the ELM. The erosion/re-deposition cycle along the outer target leads to a multistep migration of {sup 13}C towards the separatrix which is called 'walking'. This walking created carbon neutrals at the outer strike point and led to {sup 13}C deposition in the private flux region. Although several migration pathways have been identified, quantitative analyses are hindered by experimental uncertainty in divertor leakage, and the lack of measurements at locations such as gaps and shadowed regions.

  1. 488-4D ASH LANDFILL CLOSURE CAP HELP MODELING

    SciTech Connect (OSTI)

    Phifer, M.

    2014-11-17

    At the request of Area Completion Projects (ACP) in support of the 488-4D Landfill closure, the Savannah River National Laboratory (SRNL) has performed Hydrologic Evaluation of Landfill Performance (HELP) modeling of the planned 488-4D Ash Landfill closure cap to ensure that the South Carolina Department of Health and Environmental Control (SCDHEC) limit of no more than 12 inches of head on top of the barrier layer (saturated hydraulic conductivity of no more than 1.0E-05 cm/s) in association with a 25-year, 24-hour storm event is not projected to be exceeded. Based upon Weber 1998, a 25-year, 24-hour storm event at the Savannah River Site (SRS) is 6.1 inches. The results of the HELP modeling indicate that the greatest peak daily head on top of the barrier layer (i.e. geosynthetic clay liner (GCL) or high density polyethylene (HDPE) geomembrane) for any of the runs made was 0.079 inches, associated with a peak daily precipitation of 6.16 inches. This is well below the SCDHEC limit of 12 inches.

  2. The model coupling toolkit.

    SciTech Connect (OSTI)

    Larson, J. W.; Jacob, R. L.; Foster, I.; Guo, J.

    2001-04-13

    The advent of coupled earth system models has raised an important question in parallel computing: What is the most effective method for coupling many parallel models to form a high-performance coupled modeling system? We present our solution to this problem--the Model Coupling Toolkit (MCT). We explain how our effort to construct the Next-Generation Coupler for the NCAR Community Climate System Model motivated us to create this toolkit. We describe in detail the conceptual design of the MCT and explain its usage in constructing parallel coupled models. We present preliminary performance results for the toolkit's parallel data transfer facilities. Finally, we outline an agenda for future development of the MCT.

  3. Search for the standard model Higgs boson produced in association with top quarks using the full CDF data set

    SciTech Connect (OSTI)

    Aaltonen, T.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.A.; Arisawa, T.; Artikov, A.; /Dubna, JINR /Texas A-M

    2012-08-01

    A search is presented for the standard model Higgs boson produced in association with top quarks using the full Run II proton-antiproton collision data set, corresponding to 9.45 fb{sup -1}, collected by the Collider Detector at Fermilab. No significant excess over the expected background is observed, and 95% credibility-level upper bounds are placed on the cross section {sigma}(t{bar t}H {yields} lepton + missing transverse energy + jets). For a Higgs boson mass of 125 GeV/c{sup 2}, we expect to set a limit of 12.6, and observe a limit of 20.5 times the standard model rate. This represents the most sensitive search for a standard model Higgs boson in this channel to date.

  4. Modeling & Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Research into alternative forms of energy, especially energy security, is one of the major national security imperatives of this century. The inherent knowledge of transformation has beguiled sorcerers and scientists alike. Data Analysis and Modeling & Simulation for the Chemical Sciences. Project Description: Almost every

  5. PV modules modelling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling Systems Losses in PVsyst. André Mermoud, Institute of the Environmental Sciences, Group of Energy - PVsyst, andre.mermoud@unige.ch. Summary: Losses in a PV system simulation may be: determined by specific models (shadings); interpretations of models (PV module behaviour); user's parameter specifications (soiling, wiring, etc.). PVsyst provides a detailed analysis of

  6. Modeling & Simulation publications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling & Simulation Publications. Research into alternative forms of energy, especially energy security, is one of the major national security imperatives of this century. The inherent knowledge of transformation has beguiled sorcerers and scientists alike. D.A. Horner, F. Lambert, J.D. Kress, and L.A. Collins,

  7. Liftoff Model for MELCOR.

    SciTech Connect (OSTI)

    Young, Michael F.

    2015-07-01

    Aerosol particles that deposit on surfaces may be subsequently resuspended by air flowing over the surface. A review of models for this liftoff process is presented and compared to available data. Based on this review, a model that agrees with existing data and is readily computed is presented for incorporation into a system level code such as MELCOR.

  8. Distributed generation capabilities of the national energy modeling system

    SciTech Connect (OSTI)

    LaCommare, Kristina Hamachi; Edwards, Jennifer L.; Marnay, Chris

    2003-01-01

    This report describes Berkeley Lab's exploration of how the National Energy Modeling System (NEMS) models distributed generation (DG) and presents possible approaches for improving how DG is modeled. The on-site electric generation capability has been available since the AEO2000 version of NEMS. Berkeley Lab has previously completed research on distributed energy resources (DER) adoption at individual sites and has developed a DER Customer Adoption Model called DER-CAM. Given interest in this area, Berkeley Lab set out to understand how NEMS models small-scale on-site generation to assess how adequately DG is treated in NEMS, and to propose improvements or alternatives. The goal is to determine how well NEMS models the factors influencing DG adoption and to consider alternatives to the current approach. Most small-scale DG adoption takes place in the residential and commercial modules of NEMS. Investment in DG ultimately offsets purchases of electricity, which also eliminates the losses associated with transmission and distribution (T&D). If the DG technology that is chosen is photovoltaics (PV), NEMS assumes renewable energy consumption replaces the energy input to electric generators. If the DG technology is fuel consuming, consumption of fuel in the electric utility sector is replaced by residential or commercial fuel consumption. The waste heat generated from thermal technologies can be used to offset the water heating and space heating energy uses, but there is no thermally activated cooling capability. This study consists of a review of model documentation and a paper by EIA staff, a series of sensitivity runs performed by Berkeley Lab that exercise selected DG parameters in the AEO2002 version of NEMS, and a scoping effort of possible enhancements and alternatives to NEMS's current DG capabilities. In general, the treatment of DG in NEMS is rudimentary. The penetration of DG is determined by an economic cash-flow analysis that determines adoption based on the number of years to a positive cash flow. Some important technologies, e.g. thermally activated cooling, are absent, and ceilings on DG adoption are determined by somewhat arbitrary caps on the number of buildings that can adopt DG. These caps are particularly severe for existing buildings, where the maximum penetration for any one technology is 0.25 percent. On the other hand, competition among technologies is not fully considered, and this may result in double-counting for certain applications. A series of sensitivity runs show greater penetration with net metering enhancements and aggressive tax credits and a more limited response to lowered DG technology costs. Discussion of alternatives to the current code is presented in Section 4. Alternatives or improvements to how DG is modeled in NEMS cover three basic areas: expanding on the existing total market for DG both by changing existing parameters in NEMS and by adding new capabilities, such as for missing technologies; enhancing the cash flow analysis by incorporating aspects of DG economics that are not currently represented, e.g. complex tariffs; and using an external geographic information system (GIS) driven analysis that can better and more intuitively identify niche markets.
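
    A minimal sketch of the cash-flow screening described above: adoption is keyed to the number of years before cumulative cash flow turns positive. All costs, savings, and the horizon below are hypothetical, not NEMS inputs.

    ```python
    # Years-to-positive-cash-flow screening for a candidate DG investment (hypothetical numbers).
    def years_to_positive_cash_flow(capital_cost, annual_savings, annual_om, horizon=30):
        cumulative = -capital_cost
        for year in range(1, horizon + 1):
            cumulative += annual_savings - annual_om
            if cumulative >= 0.0:
                return year
        return None   # never pays back within the horizon

    yrs = years_to_positive_cash_flow(capital_cost=18000.0, annual_savings=2400.0, annual_om=300.0)
    print("years to positive cash flow:", yrs)   # shorter payback -> larger assumed adoption
    ```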

  9. Technical Note: On the use of nudging for aerosol–climate model intercomparison studies

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, K.; Wan, H.; Liu, X.; Ghan, S. J.; Kooperman, G. J.; Ma, P.-L.; Rasch, P. J.; Neubauer, D.; Lohmann, U.

    2014-08-26

    Nudging as an assimilation technique has seen increased use in recent years in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5 (CAM5), due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on long-wave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations, while retaining the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. Results from both CAM5 and a second aerosol–climate model ECHAM6-HAM2 also indicate that compared to the wind-and-temperature nudging, constraining only winds leads to better agreement with the free-running model in terms of the estimated shortwave cloud forcing and the simulated convective activities. This suggests nudging the horizontal winds but not temperature is a good strategy for the investigation of aerosol indirect effects since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
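
    Nudging in the sense used above is a linear relaxation of a prognostic field toward a reference (reanalysis) state on a chosen timescale tau. The toy scalar example below only illustrates the tendency term; it is not CAM5 or ECHAM6-HAM2 code, and the timescale and values are assumptions.

    ```python
    # Scalar illustration of a nudging tendency: dx/dt += (x_ref - x) / tau.
    dt, tau = 1800.0, 6 * 3600.0           # 30-min step, 6-hour relaxation timescale (assumed values)
    steps = 200
    x = 10.0                               # model value of some field (e.g., zonal wind, m/s)
    x_ref = 4.0                            # reanalysis value, held fixed here for simplicity

    for _ in range(steps):
        free_tendency = 0.0                # the model's own physics/dynamics tendency (omitted)
        nudging_tendency = (x_ref - x) / tau
        x += dt * (free_tendency + nudging_tendency)

    print(f"nudged value after {steps * dt / 3600:.0f} h: {x:.2f} (relaxes toward {x_ref})")
    ```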

  10. Scale Models & Wind Turbines

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Turbines * Readings about Cape Wind and other offshore and onshore siting debates for wind farms * Student Worksheet * A number of scale model items: Ken, Barbie or other dolls...

  11. Multifamily Envelope Leakage Model

    SciTech Connect (OSTI)

    Faakye, O.; Griffiths, D.

    2015-05-01

    The objective of the 2013 research project was to develop the model for predicting fully guarded test results (FGT), using unguarded test data and specific building features of apartment units. The model developed has a coefficient of determination R2 value of 0.53 with a root mean square error (RMSE) of 0.13. Both statistical metrics indicate that the model is relatively strong. When tested against data that was not included in the development of the model, prediction accuracy was within 19%, which is reasonable given that seasonal differences in blower door measurements can vary by as much as 25%.
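
    The two fit metrics quoted above (R2 = 0.53, RMSE = 0.13) are standard quantities; the sketch below shows how they are computed from predicted versus measured values. The arrays are made-up stand-ins, not the project's guarded/unguarded test data.

    ```python
    # R^2 and RMSE from predicted vs. measured values (illustrative data only).
    import numpy as np

    measured = np.array([0.45, 0.60, 0.52, 0.71, 0.38, 0.66])
    predicted = np.array([0.50, 0.55, 0.49, 0.80, 0.41, 0.60])

    resid = measured - predicted
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((measured - measured.mean()) ** 2)
    print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f}")
    ```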

  12. The Standard Model

    ScienceCinema (OSTI)

    Lincoln, Don

    2014-08-12

    Fermilab scientist Don Lincoln describes the Standard Model of particle physics, covering both the particles that make up the subatomic realm and the forces that govern them.

  13. Severe Accident Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Severe Accident Modeling - Sandia Energy ...

  14. Power Sector Modeling 101

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    given assumptions about future electricity demand, fuel prices, technology cost ... * Production Cost (Grid Operations/Unit Commitment and Dispatch) Models * Network ...

  15. VISION Model: Description

    SciTech Connect (OSTI)

    2009-01-18

    Description of VISION model, which is used to estimate the impact of highway vehicle technologies and fuels on energy use and carbon emissions to 2050.

  16. Sandia Energy Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  17. Photovoltaics Business Models

    SciTech Connect (OSTI)

    Frantzis, L.; Graham, S.; Katofsky, R.; Sawyer, H.

    2008-02-01

    This report summarizes work to better understand the structure of future photovoltaics business models and the research, development, and demonstration required to support their deployment.

  18. PV Reliability & Performance Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... PV Reliability & Performance Model ... such as module output degradation over time or disruptions such as electrical grid outages. ...

  19. Sandia Energy - Reference Model Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Water Power Reference Model Project (RMP) Reference Model Documents. Tara Camacho-Lopez, 2015-05-...

  20. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    SciTech Connect (OSTI)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.
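
    An illustrative top-hat wake calculation in the spirit of the array analysis discussed above (a generic linear-expansion wake sketch, not a reimplementation of Lissaman's model): the velocity deficit decays as the wake widens downstream, so ambient turbulence and windwise spacing control interference losses, consistent with the sensitivity result.

    ```python
    # Generic top-hat wake deficit with linear wake growth (illustrative parameters).
    import numpy as np

    def wake_deficit(Ct, D, x, k_wake):
        """Fractional velocity deficit a distance x downstream of a rotor of diameter D."""
        return (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k_wake * x / D) ** 2

    Ct, D = 0.8, 90.0                     # thrust coefficient and rotor diameter (hypothetical)
    for k_wake in (0.04, 0.075, 0.10):    # larger ambient turbulence -> faster wake growth
        deficits = [wake_deficit(Ct, D, s * D, k_wake) for s in (5, 7, 10)]
        print(f"k={k_wake:.3f}: deficits at 5D/7D/10D =",
              [f"{d:.2f}" for d in deficits])
    ```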

  1. Development and Validation of a Slurry Model for Chemical Hydrogen Storage in Fuel Cell Applications

    SciTech Connect (OSTI)

    Brooks, Kriston P.; Pires, Richard P.; Simmons, Kevin L.

    2014-07-25

    The US Department of Energy's (DOE) Hydrogen Storage Engineering Center of Excellence (HSECoE) is developing models for hydrogen storage systems for fuel cell-based light duty vehicle applications for a variety of promising materials. These transient models simulate the performance of the storage system for comparison to the DOE's Technical Targets and a set of four drive cycles. The purpose of this research is to describe the models developed for slurry-based chemical hydrogen storage materials. The storage systems of both a representative exothermic system based on ammonia borane and an endothermic system based on alane were developed and modeled in Simulink. Once complete, the reactor and radiator components of the model were validated with experimental data. The model was then run using a highway cycle, an aggressive cycle, a cold-start cycle, and a hot drive cycle. The system design was adjusted to meet these drive cycles. A sensitivity analysis was then performed to identify the range of material properties where these DOE targets and drive cycles could be met. Materials with a heat of reaction greater than 11 kJ/mol H2 generated and a slurry hydrogen capacity of greater than 11.4% will meet the on-board efficiency and gravimetric capacity targets, respectively.

  2. Solar Deployment System (SolarDS) Model: Documentation and Sample Results

    SciTech Connect (OSTI)

    Denholm, P.; Drury, E.; Margolis, R.

    2009-09-01

    The Solar Deployment System (SolarDS) model is a bottom-up, market penetration model that simulates the potential adoption of photovoltaics (PV) on residential and commercial rooftops in the continental United States through 2030. NREL developed SolarDS to examine the market competitiveness of PV based on regional solar resources, capital costs, electricity prices, utility rate structures, and federal and local incentives. The model uses the projected financial performance of PV systems to simulate PV adoption for building types and regions then aggregates adoption to state and national levels. The main components of SolarDS include a PV performance simulator, a PV annual revenue calculator, a PV financial performance calculator, a PV market share calculator, and a regional aggregator. The model simulates a variety of installed PV capacity for a range of user-specified input parameters. PV market penetration levels from 15 to 193 GW by 2030 were simulated in preliminary model runs. SolarDS results are primarily driven by three model assumptions: (1) future PV cost reductions, (2) the maximum PV market share assumed for systems with given financial performance, and (3) PV financing parameters and policy-driven assumptions, such as the possible future cost of carbon emissions.
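
    The abstract names five components but not their interfaces, so the sketch below chains hypothetical stand-ins for them (performance, annual revenue, financial performance, market share, regional aggregation); every function, field, and number is an illustrative assumption, not the NREL implementation.

    ```python
    # Hypothetical sketch of the SolarDS component chain named above; all names
    # and values here are illustrative, not the actual NREL model.
    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        insolation_kwh_per_kw: float   # annual PV yield per kW installed
        retail_rate_usd_per_kwh: float
        capital_cost_usd_per_kw: float

    def annual_revenue(region: Region, system_kw: float) -> float:
        """PV annual revenue calculator: value of generated electricity."""
        return system_kw * region.insolation_kwh_per_kw * region.retail_rate_usd_per_kwh

    def simple_payback_years(region: Region, system_kw: float) -> float:
        """PV financial performance calculator (payback as a crude proxy)."""
        return region.capital_cost_usd_per_kw * system_kw / annual_revenue(region, system_kw)

    def market_share(payback_years: float) -> float:
        """PV market share calculator: shorter payback -> higher adoption fraction."""
        return max(0.0, min(1.0, 1.0 - payback_years / 30.0))

    regions = [Region("sunny", 1800, 0.15, 3000), Region("cloudy", 1100, 0.12, 3000)]
    total_adopted_kw = 0.0
    for r in regions:                       # regional aggregator
        share = market_share(simple_payback_years(r, system_kw=5.0))
        total_adopted_kw += share * 1_000_000 * 5.0   # assumed eligible rooftops
    print(f"aggregate adoption: {total_adopted_kw / 1e6:.2f} GW")
    ```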

  3. Integration of Feedstock Assembly System and Cellulosic Ethanol Conversion Models to Analyze Bioenergy System Performance

    SciTech Connect (OSTI)

    Jared M. Abodeely; Douglas S. McCorkle; Kenneth M. Bryden; David J. Muth; Daniel Wendt; Kevin Kenney

    2010-09-01

    Research barriers continue to exist in all phases of the emerging cellulosic ethanol biorefining industry. These barriers include the identification and development of a sustainable and abundant biomass feedstock, the assembly of viable systems for formatting the feedstock and moving it from the field (e.g., the forest) to the biorefinery, and improving conversion technologies. Each of these phases of cellulosic ethanol production is fundamentally connected, but the computational tools used to support and inform analysis within each phase remain largely disparate. This paper discusses the integration of a feedstock assembly system modeling toolkit and an Aspen Plus conversion process model. Many important biomass feedstock characteristics, such as composition, moisture, particle size and distribution, and ash content, are impacted and most effectively managed within the assembly system, but generally come at an economic cost. This integration of the assembly system and conversion process modeling tools will facilitate a seamless investigation of the assembly system-conversion process interface. Through the integrated framework, the user can design the assembly system for a particular biorefinery by specifying location, feedstock, equipment, and unit operation specifications. The assembly system modeling toolkit then provides economic valuation and detailed biomass feedstock composition and formatting information. These data are seamlessly and dynamically used to run the Aspen Plus conversion process model. The model can then be used to investigate the design of systems for cellulosic ethanol production from field to final product.

  4. Swelling in light water reactor internal components: Insights from computational modeling

    SciTech Connect (OSTI)

    Stoller, Roger E.; Barashev, Alexander V.; Golubov, Stanislav I.

    2015-08-01

    A modern cluster dynamics model has been used to investigate the materials and irradiation parameters that control microstructural evolution under the relatively low-temperature exposure conditions that are representative of the operating environment for in-core light water reactor components. The focus is on components fabricated from austenitic stainless steel. The model accounts for the synergistic interaction between radiation-produced vacancies and the helium that is produced by nuclear transmutation reactions. Cavity nucleation rates are shown to be relatively high in this temperature regime (275 to 325C), but are sensitive to assumptions about the fine scale microstructure produced under low-temperature irradiation. The cavity nucleation rates observed run counter to the expectation that void swelling would not occur under these conditions. This expectation was based on previous research on void swelling in austenitic steels in fast reactors. This misleading impression arose primarily from an absence of relevant data. The results of the computational modeling are generally consistent with recent data obtained by examining ex-service components. However, it has been shown that the sensitivity of the model's predictions of low-temperature swelling behavior to assumptions about the primary damage source term and specification of the mean-field sink strengths is somewhat greater than that observed at higher temperatures. Further assessment of the mathematical model is underway to meet the long-term objective of this research, which is to provide a predictive model of void swelling at relevant lifetime exposures to support extended reactor operations.

  5. JPEG-2000 Part 10 Verification Model

    Energy Science and Technology Software Center (OSTI)

    2003-03-04

    VM10 is a research software implementation of the ISO/IEC JPEG-2000 Still Image Coding standard (ISO International Standard 15444). JPEG-2000 image coding involves subband coding and compression of digital raster images to facilitate storage and transmission of such imagery. Images are decomposed into space/scale subbands using cascades of two-dimensional (tensor product) discrete wavelet transforms. The wavelet transforms can be either reversible (integer-to-integer) or irreversible (integer-to-float). In the irreversible case, the subbands in each resolution level are quantized by uniform scalar quantization. The resulting integer subbands in each resolution level are partitioned into spatially localized code blocks to facilitate localized entropy decoding. Code blocks are encoded and packaged into an embedded bitstream using binary arithmetic bitplane coding (the MQ coder algorithm applied to hierarchical bitplane context modeling). The resulting compressed bitstream is configured for use with the JPIP interactive client-server protocol (JPEG-2000 Part 9). VM10 is written in ANSI C++ using the Blitz++ array class library. To enable development of multidimensional image coding algorithms, VM10 is templated on the dimension of the array containers. It was developed with the GNU g++ compiler on both Linux (Red Hat) and Windows/cygwin platforms, although it should compile and run under other ANSI C++ compilers as well. The software design is highly modular and object-oriented in order to facilitate rapid development and frequent revision and experimentation. No attempt has been made to optimize the run-time performance of the code. The software performs both the encoding and decoding operations involved in JPEG-2000 image coding, as implemented in apps/compress/main.cpp and apps/expand/main.cpp. VM10 implements all of the JPEG-2000 baseline (Part 1, ISO 15444-1) and portions of the published extensions to the baseline (Part 2, ISO 15444-2). As such, VM10 is an implementation of published international standards for digital image coding systems. The purpose of VM10 is to serve as a software platform for developing further extensions of the JPEG-2000 standard that will contribute to JPEG-2000 Part 10, "Extensions for Three-Dimensional Data and Floating Point Data" (currently under development). It will be used to test the performance and compatibility of proposed Part 10 algorithms. The authors of VM10 are active participants in the ISO standardization effort that is producing JPEG-2000 Part 10. The VM10 software will be distributed to other members of the ISO still image coding standards committee (ISO/IEC JTC1/SC29/WG1). VM10 is only intended for the use of ISO/IEC JTC1/SC29/WG1, however, and will not be distributed to the general public. In particular, it is not being placed in the public domain (or "open-sourced"). The University of California/LANL will retain copyright on all LANL source code contained in the VM10 distribution. This does not preclude rights to this software retained by the US government in accordance with its contract with the University of California.
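
    For readers unfamiliar with the subband coding pipeline described above, the short sketch below reproduces its first two steps (wavelet decomposition and uniform scalar quantization) using PyWavelets in Python rather than VM10's C++/Blitz++ code; the 'bior2.2' wavelet and the quantization step size are stand-ins, not the actual JPEG-2000 filters.

    ```python
    # Minimal illustration of the subband decomposition and quantization steps
    # described above, using PyWavelets rather than the VM10 C++/Blitz++ code.
    # The wavelet choice and quantizer step are assumptions for demonstration.
    import numpy as np
    import pywt

    image = np.random.randint(0, 256, size=(256, 256)).astype(float)

    # Two-level 2-D discrete wavelet transform -> resolution levels of subbands.
    coeffs = pywt.wavedec2(image, wavelet="bior2.2", level=2)

    # Uniform scalar quantization of each subband (the irreversible path).
    step = 8.0
    quantized = [np.round(coeffs[0] / step)]
    for detail_level in coeffs[1:]:
        quantized.append(tuple(np.round(band / step) for band in detail_level))

    # Dequantize and reconstruct to see the distortion introduced.
    dequantized = [quantized[0] * step]
    for detail_level in quantized[1:]:
        dequantized.append(tuple(band * step for band in detail_level))
    reconstructed = pywt.waverec2(dequantized, wavelet="bior2.2")
    print("max reconstruction error:", np.abs(reconstructed - image).max())
    ```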

  6. X-ray ablation measurements and modeling for ICF applications

    SciTech Connect (OSTI)

    Anderson, A.T.

    1996-09-01

    X-ray ablation of material from the first wall and other components of an ICF (Inertial Confinement Fusion) chamber is a major threat to the laser final optics. Material condensing on these optics after a shot may cause damage with subsequent laser shots. To ensure the successful operation of the ICF facility, removal rates must be predicted accurately. The goal for this dissertation is to develop an experimentally validated x-ray response model, with particular application to the National Ignition Facility (NIF). Accurate knowledge of the x-ray and debris emissions from ICF targets is a critical first step in the process of predicting the performance of the target chamber system. A number of 1-D numerical simulations of NIF targets have been run to characterize target output in terms of energy, angular distribution, spectrum, and pulse shape. Scaling of output characteristics with variations of both target yield and hohlraum wall thickness are also described. Experiments have been conducted at the Nova laser on the effects of relevant x-ray fluences on various materials. The response was diagnosed using post-shot examinations of the surfaces with scanning electron microscope and atomic force microscope instruments. Judgments were made about the dominant removal mechanisms for each material. Measurements of removal depths were made to provide data for the modeling. The finite difference ablation code developed here (ABLATOR) combines the thermomechanical response of materials to x-rays with models of various removal mechanisms. The former aspect refers to energy deposition in such small characteristic depths (approximately a micron) that thermal conduction and hydrodynamic motion are significant effects on the nanosecond time scale. The material removal models use the resulting time histories of temperature and pressure profiles, along with ancillary local conditions, to predict rates of surface vaporization and the onset of conditions that would lead to spallation.
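
    As a sketch of the kind of calculation an ablation code performs (not ABLATOR itself), the snippet below runs explicit 1-D heat conduction after a shallow exponential x-ray energy deposition and strips surface cells that exceed an assumed vaporization temperature; the material properties, deposition profile, and threshold are all illustrative.

    ```python
    # Rough sketch of explicit 1-D heat conduction after a shallow x-ray energy
    # deposition, with surface cells removed once they exceed a vaporization
    # temperature. All material properties here are illustrative assumptions.
    import numpy as np

    nx, dx, dt = 200, 1e-7, 2e-10          # 20 um slab, 0.1 um cells, time step (s)
    alpha = 1e-5                           # thermal diffusivity, m^2/s (assumed)
    t_vap = 3000.0                         # vaporization threshold, K (assumed)

    x = np.arange(nx) * dx
    temp = 300.0 + 5000.0 * np.exp(-x / 1e-6)   # deposition with ~1 um e-folding depth

    removed = 0
    for _ in range(500):                   # ~100 ns of explicit diffusion
        lap = np.zeros_like(temp)
        lap[1:-1] = (temp[2:] - 2 * temp[1:-1] + temp[:-2]) / dx**2
        temp = temp + alpha * dt * lap
        temp[0] = temp[1]                  # insulated-surface boundary condition
        # Crude removal model: strip surface cells above the vaporization threshold.
        while temp.size > 2 and temp[0] > t_vap:
            temp = temp[1:]
            removed += 1

    print(f"ablated depth: {removed * dx * 1e6:.2f} micron")
    ```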

  7. Biosphere Process Model Report

    SciTech Connect (OSTI)

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor. Collectively, the potential human receptor and exposure pathways form the biosphere model. More detailed technical information and data about potential human receptor groups and the characteristics of exposure pathways have been developed in a series of AMRs and Calculation Reports.

  8. Modeling for Insights

    SciTech Connect (OSTI)

    Jacob J. Jacobson; Gretchen Matthern

    2007-04-01

    System Dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, System Dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and various other areas in the natural and social sciences. The real power of System Dynamics modeling is gaining insights into total system behavior as time and system parameters are adjusted and the effects are visualized in real time. System Dynamics models allow decision makers and stakeholders to explore the long-term behavior and performance of complex systems, especially in the context of dynamic processes and changing scenarios, without having to wait decades to obtain field data or risk failure if a poor management or design approach is used. The Idaho National Laboratory has recently been developing a System Dynamics model of the US Nuclear Fuel Cycle. The model is intended to be used to identify and understand interactions throughout the entire nuclear fuel cycle and suggest sustainable development strategies. This paper describes the basic framework of the current model and presents examples of useful insights gained from the model thus far with respect to sustainable development of nuclear power.

  9. Canister Model, Systems Analysis

    Energy Science and Technology Software Center (OSTI)

    1993-09-29

    This package provides a computer simulation of a systems model for packaging nuclear waste and spent nuclear fuel in canisters. The canister model calculates overall programmatic cost, number of canisters, and fuel and waste inventories for the Idaho Chemical Processing Plant (other initial conditions can be entered).

  10. XAFS Model Compound Library

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Newville, Matthew

    The XAFS Model Compound Library contains XAFS data on model compounds. The term "model compounds" refers to compounds of homogeneous and well-known crystallographic or molecular structure. Each data file in this library has an associated atoms.inp file that can be converted to a feff.inp file using the program ATOMS. (See the related Searchable Atoms.inp Archive at http://cars9.uchicago.edu/~newville/adb/) This Library exists because XAFS data on model compounds is useful for several reasons, including comparing to unknown data for "fingerprinting" and testing calculations and analysis methods. The collection here is currently limited, but is growing. The focus to date has been on inorganic compounds and minerals of interest to the geochemical community. [Copied, with editing, from http://cars9.uchicago.edu/~newville/ModelLib/]

  11. Varicella infection modeling.

    SciTech Connect (OSTI)

    Jones, Katherine A.; Finley, Patrick D.; Moore, Thomas W.; Nozick, Linda Karen; Martin, Nathaniel; Bandlow, Alisa; Detry, Richard Joseph; Evans, Leland B.; Berger, Taylor Eugen

    2013-09-01

    Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veteran Affairs (VA) motivated by a Varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of comparative effectiveness of interventions.
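
    A minimal sketch of the first model's approach, disease spread on a notional contact network, is shown below using networkx; the network topology, transmission probability, initial infection count, and recovery time are assumptions, not the parameters of the VA models.

    ```python
    # Minimal sketch of stochastic disease spread on a notional contact network,
    # in the spirit of the first model described above. The network, transmission
    # probability, and recovery time are illustrative assumptions.
    import random
    import networkx as nx

    random.seed(1)
    graph = nx.watts_strogatz_graph(n=200, k=6, p=0.1)   # staff/patient contacts

    p_transmit, recovery_days = 0.05, 7
    state = {node: "S" for node in graph}                # S, I, or R
    days_infected = {}
    for seed in random.sample(list(graph), 3):           # initial infection count
        state[seed], days_infected[seed] = "I", 0

    for day in range(60):
        newly_infected = []
        for node in [n for n, s in state.items() if s == "I"]:
            for neighbor in graph.neighbors(node):
                if state[neighbor] == "S" and random.random() < p_transmit:
                    newly_infected.append(neighbor)
            days_infected[node] += 1
            if days_infected[node] >= recovery_days:
                state[node] = "R"
        for node in newly_infected:
            if state[node] == "S":
                state[node], days_infected[node] = "I", 0

    print("total ever infected:", sum(1 for s in state.values() if s != "S"))
    ```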

  12. VENTILATION MODEL REPORT

    SciTech Connect (OSTI)

    V. Chipman

    2002-10-31

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output from the Ventilation Model to initialize their postclosure analyses.
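
    The wall heat fraction bookkeeping described above reduces to simple arithmetic; the short example below works it through with assumed decay-heat and ventilation heat-removal values (not figures from the report).

    ```python
    # Tiny worked example of the bookkeeping described above. The decay heat and
    # ventilation heat-removal numbers are assumed, not values from the report.
    decay_heat_kw = 1.45              # heat produced by radionuclide decay (assumed)
    heat_removed_by_air_kw = 1.02     # carried away by ventilation air (assumed)

    heat_removal_fraction = heat_removed_by_air_kw / decay_heat_kw
    wall_heat_fraction = 1.0 - heat_removal_fraction   # conducted into the rock mass

    print(f"heat removal fraction: {heat_removal_fraction:.2f}")
    print(f"wall heat fraction passed to downstream models: {wall_heat_fraction:.2f}")
    ```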

  13. Integrated Environmental Control Model

    Energy Science and Technology Software Center (OSTI)

    1999-09-03

    IECM is a powerful multimedia engineering software program for simulating an integrated coal-fired power plant. It provides a capability to model various conventional and advanced processes for controlling air pollutant emissions from coal-fired power plants before, during, or after combustion. The principal purpose of the model is to calculate the performance, emissions, and cost of power plant configurations employing alternative environmental control methods. The model consists of various control technology modules, which may be integrated into a complete utility plant in any desired combination. In contrast to conventional deterministic models, the IECM offers the unique capability to assign probabilistic values to all model input parameters, and to obtain probabilistic outputs in the form of cumulative distribution functions indicating the likelihood of different costs and performance results. A Graphical User Interface (GUI) facilitates the configuration of the technologies, entry of data, and retrieval of results.
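
    The probabilistic-input feature described above can be illustrated with a generic Monte Carlo loop (not the IECM itself); the toy cost model and input distributions below are assumptions chosen only to show how a cumulative distribution of results is produced.

    ```python
    # Sketch of the probabilistic feature described above (not the IECM itself):
    # assign distributions to a few inputs and summarize the output as a CDF.
    import random

    random.seed(0)

    def plant_cost_usd_per_mwh(coal_price, fgd_removal_eff, capacity_factor):
        """Toy cost model standing in for a chain of control-technology modules."""
        fuel = coal_price * 0.45 / capacity_factor
        fgd = 4.0 + 10.0 * fgd_removal_eff
        return fuel + fgd

    samples = []
    for _ in range(5000):
        samples.append(plant_cost_usd_per_mwh(
            coal_price=random.gauss(40.0, 5.0),        # $/ton, assumed distribution
            fgd_removal_eff=random.uniform(0.90, 0.98),
            capacity_factor=random.uniform(0.70, 0.90),
        ))

    samples.sort()
    for pct in (5, 50, 95):                            # points on the output CDF
        print(f"P{pct}: {samples[int(len(samples) * pct / 100)]:.1f} $/MWh")
    ```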

  14. Moderate forest disturbance as a stringent test for gap and big-leaf models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bond-Lamberty, Benjamin; Fisk, Justin P.; Holm, Jennifer; Bailey, Vanessa L.; Bohrer, Gil; Gough, Christopher

    2015-01-27

    Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. It is thus unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging US forests. We tested whether three forest ecosystem models – Biome-BGC (BioGeochemical Cycles), a classic big-leaf model, and the ZELIG and ED (Ecosystem Demography) gap-oriented models – could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ZELIG and ED correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes, in particular gross primary production or net primary production (NPP). Biome-BGC NPP was correctly resilient but for the wrong reasons, and could not match the absolute observational values. ZELIG and ED, in contrast, exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. It is thus an open question whether most ecosystem models will simulate correctly the gradual and less extensive tree mortality characteristic of moderate disturbances.

  15. Moderate forest disturbance as a stringent test for gap and big-leaf models

    SciTech Connect (OSTI)

    Bond-Lamberty, Benjamin; Fisk, Justin P.; Holm, Jennifer; Bailey, Vanessa L.; Bohrer, Gil; Gough, Christopher

    2015-01-27

    Disturbance-induced tree mortality is a key factor regulating the carbon balance of a forest, but tree mortality and its subsequent effects are poorly represented processes in terrestrial ecosystem models. It is thus unclear whether models can robustly simulate moderate (non-catastrophic) disturbances, which tend to increase biological and structural complexity and are increasingly common in aging US forests. We tested whether three forest ecosystem models – Biome-BGC (BioGeochemical Cycles), a classic big-leaf model, and the ZELIG and ED (Ecosystem Demography) gap-oriented models – could reproduce the resilience to moderate disturbance observed in an experimentally manipulated forest (the Forest Accelerated Succession Experiment in northern Michigan, USA, in which 38% of canopy dominants were stem girdled and compared to control plots). Each model was parameterized, spun up, and disturbed following similar protocols and run for 5 years post-disturbance. The models replicated observed declines in aboveground biomass well. Biome-BGC captured the timing and rebound of observed leaf area index (LAI), while ZELIG and ED correctly estimated the magnitude of LAI decline. None of the models fully captured the observed post-disturbance C fluxes, in particular gross primary production or net primary production (NPP). Biome-BGC NPP was correctly resilient but for the wrong reasons, and could not match the absolute observational values. ZELIG and ED, in contrast, exhibited large, unobserved drops in NPP and net ecosystem production. The biological mechanisms proposed to explain the observed rapid resilience of the C cycle are typically not incorporated by these or other models. It is thus an open question whether most ecosystem models will simulate correctly the gradual and less extensive tree mortality characteristic of moderate disturbances.

  16. Hydrocarbon Effect on a Fe-zeolite Urea-SCR Catalyst: An Experimental and Modeling Study

    SciTech Connect (OSTI)

    Devarakonda, Maruthi N.; Tonkyn, Russell G.; Herling, Darrell R.

    2010-04-14

    Synergies between various catalytic converters such as SCR and DPF are vital to the success of an integrated aftertreatment system for simultaneous NOx and particulate matter control in diesel engines. Several issues such as hydrocarbon poisoning, thermal aging, and other coupled aftertreatment dynamics need to be addressed to develop an effective emission control system. This paper reports an experimental and modeling study to understand the effect of hydrocarbons on a Fe-zeolite urea-SCR bench reactor. Several bench-reactor tests were conducted to understand the inhibition of NOx oxidation, to characterize hydrocarbon storage, and to investigate the impact of hydrocarbons on SCR reactions. Toluene was chosen as a representative hydrocarbon in diesel exhaust, and various tests using toluene reveal its inhibition of NO oxidation at low temperatures and its oxidation to CO and CO2 at high temperatures. Surface isotherm tests were conducted to characterize the adsorption-desorption equilibrium of toluene through Langmuir isotherms. Using the rate parameters, a toluene storage model was developed and validated in simulation. With toluene in the stream, controlled SCR tests were run on the reactor and performance metrics such as NOx conversion and NH3 slip were compared to a set of previously run tests with no toluene in the stream. Tests indicate a significant effect of toluene on NOx and NH3 conversion efficiencies even at temperatures greater than 300°C. A kinetic model to address the toluene inhibition during the NO oxidation reaction was developed and is reported in the paper. This work is significant especially in an integrated DPF-SCR aftertreatment scenario where the SCR catalyst on the filter substrate is exposed to unburnt diesel hydrocarbons during active regeneration of the particulate filter.

  17. Evaluation of convection-permitting model simulations of cloud populations associated with the Madden-Julian Oscillation using data collected during the AMIE/DYNAMO field campaign

    SciTech Connect (OSTI)

    Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.; Lim, Kyo-Sun; Long, Charles N.; Wu, Di; Thompson, Gregory

    2014-11-12

    Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM MJO Investigation Experiment / Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and the parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, a significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while the statistics worsen when run at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.

  18. SPAR Model Structural Efficiencies

    SciTech Connect (OSTI)

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: development of a standard methodology and implementation of support system initiating events; treatment of loss of offsite power; and development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved. This project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are: SPAR model transparency; common cause failure modeling deficiencies and approaches; AC and DC modeling deficiencies and approaches; and instrumentation and control system modeling deficiencies and approaches.

  19. Autotune Building Energy Models

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Autotune Building Energy Models. Joshua New, Oak Ridge National Laboratory, newjr@ornl.gov, 865-241-8783, April 2, 2013. Purpose & Objectives. Problem Statement: "All (building energy) models are wrong, but some are useful"; models were 22%-97% different from utility data for 3,349 buildings. More accurate models are more useful; error arises from inputs and algorithms for practical reasons; accurate models are useful for cost-effective energy efficiency (EE) at speed and

  20. Development of Reference Models and Design Tools (LCOE Models) | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy. Development of Reference Models and Design Tools (LCOE Models). Presentation: 17_reference_model_snl_jepsen.ppt. More Documents & Publications: FY 09 Lab Call: Research & Assessment for MHK Development; 2014 Water Power Program Peer Review Compiled Presentations: Marine and Hydrokinetic Technologies Effects on the Physical Environment

  1. Dynamic (G2) Model Design Document, 24590-WTP-MDD-PR-01-002, Rev. 12

    SciTech Connect (OSTI)

    Deng, Yueying; Kruger, Albert A.

    2013-12-16

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) Statement of Work (Department of Energy Contract DE-AC27-01RV14136, Section C) requires the contractor to develop and use process models for flowsheet analyses and pre-operational planning assessments. The Dynamic (G2) Flowsheet is a discrete-time process model that enables the project to evaluate impacts to throughput from event-driven activities such as pumping, sampling, storage, recycle, separation, and chemical reactions. The model is developed by the Process Engineering (PE) department and is based on the Flowsheet Bases, Assumptions, and Requirements Document (24590-WTP-RPT-PT-02-005), commonly called the BARD. The terms Dynamic (G2) Flowsheet and Dynamic (G2) Model are interchangeable in this document. The foundation of this model is a dynamic material balance governed by prescribed initial conditions, boundary conditions, and operating logic. The dynamic material balance is achieved by tracking the storage and material flows within the plant as time is incremented. The initial conditions include a feed vector that represents the waste compositions and delivery sequence of the Tank Farm batches, and the volumes and concentrations of solutions in process equipment before startup. The boundary conditions are the physical limits of the flowsheet design, such as piping, volumes, flowrates, operation efficiencies, and the physical and chemical environments that impact separations, phase equilibria, and reaction extents. The operating logic represents the rules and strategies of running the plant.
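
    A minimal sketch of a discrete-time material balance in the spirit described above (not the actual G2 flowsheet) is shown below; the feed vector, tank capacity, and transfer logic are assumed values.

    ```python
    # Minimal sketch of a discrete-time material balance of the kind described
    # above (not the actual G2 flowsheet): a single tank receives feed batches and
    # transfers out when operating logic allows. All volumes and rates are assumed.
    feed_batches = [40.0, 0.0, 55.0, 0.0, 0.0, 60.0]    # m^3 delivered per time step
    tank_volume, tank_capacity = 20.0, 100.0            # m^3
    transfer_rate, transfer_threshold = 30.0, 50.0      # m^3/step, m^3

    for step, batch in enumerate(feed_batches):
        accepted = min(batch, tank_capacity - tank_volume)   # boundary condition
        tank_volume += accepted
        if tank_volume >= transfer_threshold:                # operating logic
            sent = min(transfer_rate, tank_volume)
            tank_volume -= sent
        else:
            sent = 0.0
        print(f"step {step}: accepted {accepted:.0f}, sent {sent:.0f}, "
              f"holdup {tank_volume:.0f} m^3")
    ```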

  2. A Global Climate Model Agent for High Spatial and Temporal Resolution Data

    SciTech Connect (OSTI)

    Wood, Lynn S.; Daily, Jeffrey A.; Henry, Michael J.; Palmer, Bruce J.; Schuchardt, Karen L.; Dazlich, Donald A.; Heikes, Ross P.; Randall, David

    2015-02-01

    Fine cell granularity in modern climate models can produce terabytes of data in each snapshot, causing significant I/O overhead. To address this issue, a method of reducing the I/O latency of high-resolution climate models by identifying and selectively outputting regions of interest is presented. Working with a Global Cloud Resolving Model and running with up to 10240 processors on a Cray XE6, this method provides significant I/O bandwidth reduction depending on the frequency of writes and size of the region of interest. The implementation challenges of determining global parameters in a strictly core-localized model and properly formatting output files that only contain subsections of the global grid are addressed, as well as the overall bandwidth impact and benefits of the method. The gains in I/O throughput provided by this method allow dual output rates for high-resolution climate models: a low-frequency global snapshot as well as a high-frequency regional snapshot when events of particular interest occur.
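
    The region-of-interest output strategy can be illustrated with a small numpy sketch: write the full global field rarely and a masked sub-region frequently. The grid sizes, ROI box, and file names below are assumptions, not the GCRM's.

    ```python
    # Sketch of the selective-output idea described above: write the full global
    # field rarely, and a small region of interest frequently. Grid sizes, the ROI
    # box, and file names are illustrative assumptions.
    import numpy as np

    nlat, nlon = 2048, 4096
    lat = np.linspace(-90, 90, nlat)
    lon = np.linspace(0, 360, nlon, endpoint=False)
    field = np.random.rand(nlat, nlon).astype(np.float32)   # one model snapshot

    # Region of interest: a box from 10S-10N and 60E-90E.
    lat_mask = (lat >= -10) & (lat <= 10)
    lon_mask = (lon >= 60) & (lon <= 90)
    roi = field[np.ix_(lat_mask, lon_mask)]

    np.save("roi_step0001.npy", roi)          # high-frequency regional snapshot
    # np.save("global_step0001.npy", field)   # low-frequency global snapshot

    print(f"global bytes: {field.nbytes:,}  roi bytes: {roi.nbytes:,} "
          f"({100 * roi.nbytes / field.nbytes:.1f}%)")
    ```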

  3. Validation of a Hot Water Distribution Model Using Laboratory and Field Data

    SciTech Connect (OSTI)

    Backman, C.; Hoeschele, M.

    2013-07-01

    Characterizing the performance of hot water distribution systems is a critical step in developing best practice guidelines for the design and installation of high performance hot water systems. Developing and validating simulation models is critical to this effort, as well as collecting accurate input data to drive the models. In this project, the Building America research team ARBI validated the newly developed TRNSYS Type 604 pipe model against both detailed laboratory and field distribution system performance data. Validation efforts indicate that the model performs very well in handling different pipe materials, insulation cases, and varying hot water load conditions. Limitations of the model include the complexity of setting up the input file and long simulation run times. This project also looked at recent field hot water studies to better understand use patterns and potential behavioral changes as homeowners convert from conventional storage water heaters to gas tankless units. The team concluded that the current Energy Factor test procedure overestimates typical use and underestimates the number of hot water draws, which has implications for both equipment and distribution system performance. Gas tankless water heaters were found to impact how people use hot water, but the data does not necessarily suggest an increase in usage. Further study in hot water usage and patterns is needed to better define these characteristics in different climates and home vintages.

  4. Validation of a Hot Water Distribution Model Using Laboratory and Field Data

    SciTech Connect (OSTI)

    Backman, C.; Hoeschele, M.

    2013-07-01

    Characterizing the performance of hot water distribution systems is a critical step in developing best practice guidelines for the design and installation of high performance hot water systems. Developing and validating simulation models is critical to this effort, as well as collecting accurate input data to drive the models. In this project, the ARBI team validated the newly developed TRNSYS Type 604 pipe model against both detailed laboratory and field distribution system performance data. Validation efforts indicate that the model performs very well in handling different pipe materials, insulation cases, and varying hot water load conditions. Limitations of the model include the complexity of setting up the input file and long simulation run times. In addition to completing validation activities, this project looked at recent field hot water studies to better understand use patterns and potential behavioral changes as homeowners convert from conventional storage water heaters to gas tankless units. Based on these datasets, we conclude that the current Energy Factor test procedure overestimates typical use and underestimates the number of hot water draws. This has implications for both equipment and distribution system performance. Gas tankless water heaters were found to impact how people use hot water, but the data does not necessarily suggest an increase in usage. Further study in hot water usage and patterns is needed to better define these characteristics in different climates and home vintages.

  5. Modeling Species Inhibition of NO Oxidation in Urea-SCR Catalysts for Diesel Engine NOx Control

    SciTech Connect (OSTI)

    Devarakonda, Maruthi N.; Tonkyn, Russell G.; Tran, Diana N.; Lee, Jong H.; Herling, Darrell R.

    2011-04-20

    Urea-selective catalytic reduction (SCR) catalysts are regarded as the leading NOx aftertreatment technology to meet the 2010 NOx emission standards for on-highway vehicles running on heavy-duty diesel engines. However, issues such as low NOx conversion at low temperature conditions still exist due to various factors, including incomplete urea thermolysis and inhibition of SCR reactions by hydrocarbons and H2O. We have observed a noticeable reduction in the standard SCR reaction efficiency at low temperature with increasing water content. We observed a similar effect when hydrocarbons are present in the stream. This effect is absent under fast SCR conditions where NO ~ NO2 in the feed gas. As a first step in understanding the effects of such inhibition on SCR reaction steps, kinetic models that predict the inhibition behavior of H2O and hydrocarbons on NO oxidation are presented in the paper. A one-dimensional SCR model was developed based on conservation-of-species equations and was coded as a C-language S-function implemented in the Matlab/Simulink environment. NO oxidation and NO2 dissociation kinetics were defined as a function of the respective adsorbates' storage in the Fe-zeolite SCR catalyst. The corresponding kinetic models were then validated against temperature ramp tests and showed good agreement with the test data. Such inhibition models will improve the accuracy of model-based control design for integrated DPF-SCR aftertreatment systems.
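
    As an illustration of the kind of inhibited NO-oxidation rate law such a model might use (the paper's actual kinetics and parameters are not given in this abstract), the sketch below integrates a Langmuir-type inhibited rate with forward Euler; every constant is an assumption.

    ```python
    # Hedged sketch of an inhibited NO-oxidation rate law of the general kind
    # described above. The rate form and every constant are illustrative
    # assumptions, not the parameters identified in the paper.
    import math

    kf, kr = 2.0, 0.5            # forward/reverse rate constants (assumed units)
    k_h2o, k_hc = 8.0, 15.0      # inhibition constants for H2O and hydrocarbon

    def no_oxidation_rate(c_no, c_no2, c_o2, c_h2o, c_hc):
        """Net NO -> NO2 rate with Langmuir-type inhibition by H2O and HC."""
        inhibition = 1.0 + k_h2o * c_h2o + k_hc * c_hc
        return (kf * c_no * math.sqrt(c_o2) - kr * c_no2) / inhibition

    dt, steps = 1e-3, 5000                # forward-Euler time integration
    c_no, c_no2 = 400e-6, 0.0             # mole fractions
    c_o2, c_h2o, c_hc = 0.10, 0.05, 50e-6

    for _ in range(steps):
        r = no_oxidation_rate(c_no, c_no2, c_o2, c_h2o, c_hc)
        c_no -= r * dt
        c_no2 += r * dt

    print(f"NO2/NOx after {steps * dt:.0f} s: {c_no2 / (c_no + c_no2):.2f}")
    ```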

  6. An update on modeling land-ice/ocean interactions in CESM

    SciTech Connect (OSTI)

    Asay-davis, Xylar

    2011-01-24

    This talk is an update on ongoing land-ice/ocean coupling work within the Community Earth System Model (CESM). The coupling method is designed to allow simulation of a fully dynamic ice/ocean interface, while requiring minimal modification to the existing ocean model (the Parallel Ocean Program, POP). The method makes use of an immersed boundary method (IBM) to represent the geometry of the ice-ocean interface without requiring that the computational grid be modified in time. We show many of the remaining development challenges that need to be addressed in order to perform global, century long climate runs with fully coupled ocean and ice sheet models. These challenges include moving to a new grid where the computational pole is no longer at the true south pole and several changes to the coupler (the software tool used to communicate between model components) to allow the boundary between land and ocean to vary in time. We discuss benefits for ice/ocean coupling that would be gained from longer-term ocean model development to allow for natural salt fluxes (which conserve both water and salt mass, rather than water volume).

  7. Validation of the thermospheric vector spherical harmonic (VSH) computer model. Master's thesis

    SciTech Connect (OSTI)

    Davis, J.L.

    1991-01-01

    A semi-empirical computer model of the lower thermosphere has been developed that provides a description of the composition and dynamics of the thermosphere (Killeen et al., 1992). Input variables needed to run the VSH model include time, space, and geophysical conditions. One of the output variables the model provides, neutral density, is of particular interest to the U.S. Air Force. Neutral densities vary both as a result of changes in solar flux (e.g., the solar cycle) and as a result of changes in the magnetosphere (e.g., large changes occur in neutral density during geomagnetic storms). Satellites in earth orbit experience aerodynamic drag due to the atmospheric density of the thermosphere. Variability in the neutral density described above affects the drag a satellite experiences and as a result can change the orbital characteristics of the satellite. These changes make it difficult to track the satellite's position. Therefore, it is particularly important to ensure that the accuracy of the model's neutral density is optimized for all input parameters. To accomplish this, a validation program was developed to evaluate the strengths and weaknesses of the model's density output by comparing it to SETA-2 (satellite electrostatic accelerometer) total mass density measurements.
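
    The comparison step itself is straightforward; the sketch below computes a mean model-to-measurement ratio and the spread of the log-ratio for synthetic placeholder arrays standing in for VSH densities and SETA-2 measurements.

    ```python
    # Sketch of the validation bookkeeping described above: compare modeled neutral
    # densities against accelerometer-derived measurements. The arrays here are
    # synthetic placeholders, not VSH or SETA-2 data.
    import numpy as np

    rng = np.random.default_rng(0)
    measured = 10 ** rng.uniform(-12.5, -11.5, size=500)      # kg/m^3, synthetic
    modeled = measured * rng.lognormal(mean=0.05, sigma=0.2, size=500)

    ratio = modeled / measured
    print(f"mean model/measurement ratio: {ratio.mean():.3f}")
    print(f"RMS of log-ratio (spread):    {np.sqrt(np.mean(np.log(ratio) ** 2)):.3f}")
    ```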

  8. 3D Model of the Tuscarora Geothermal Area

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Faulds, James E.

    2013-12-31

    The Tuscarora geothermal system sits within a ~15 km wide left step in a major west-dipping range-bounding normal fault system. The step-over is defined by the Independence Mountains fault zone and the Bull Run Mountains fault zone, which overlap along strike. Strain is transferred between these major fault segments via an array of northerly striking normal faults with offsets of 10s to 100s of meters and strike lengths of less than 5 km. These faults within the step-over are one to two orders of magnitude smaller than the range-bounding fault zones between which they reside. Faults within the broad step define an anticlinal accommodation zone wherein east-dipping faults mainly occupy the western half of the accommodation zone and west-dipping faults lie in the eastern half of the accommodation zone. The 3D model of Tuscarora encompasses 70 small-offset normal faults that define the accommodation zone and a portion of the Independence Mountains fault zone, which dips beneath the geothermal field. The geothermal system resides in the axial part of the accommodation zone, straddling the two fault dip domains. The Tuscarora 3D geologic model consists of 10 stratigraphic units. Unconsolidated Quaternary alluvium has eroded down into bedrock units; the youngest and stratigraphically highest bedrock units are middle Miocene rhyolite and dacite flows regionally correlated with the Jarbidge Rhyolite and modeled with a uniform cumulative thickness of ~350 m. Underlying these lava flows are Eocene volcanic rocks of the Big Cottonwood Canyon caldera. These units are modeled as intracaldera deposits, including domes, flows, and thick ash deposits that change in thickness and locally pinch out. The Paleozoic basement consists of metasedimentary and metavolcanic rocks, dominated by argillite, siltstone, limestone, quartzite, and metabasalt of the Schoonover and Snow Canyon Formations. Paleozoic formations are lumped into a single basement unit in the model. Fault blocks in the eastern portion of the model are tilted 5-30 degrees toward the Independence Mountains fault zone. Fault blocks in the western portion of the model are tilted toward steeply east-dipping normal faults. These opposing fault block dips define a shallow extensional anticline. Geothermal production is from four closely spaced wells that exploit a west-dipping, NNE-striking fault zone near the axial part of the accommodation zone.

  9. 3D Model of the Tuscarora Geothermal Area

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Faulds, James E.

    The Tuscarora geothermal system sits within a ~15 km wide left step in a major west-dipping range-bounding normal fault system. The step-over is defined by the Independence Mountains fault zone and the Bull Run Mountains fault zone, which overlap along strike. Strain is transferred between these major fault segments via an array of northerly striking normal faults with offsets of 10s to 100s of meters and strike lengths of less than 5 km. These faults within the step-over are one to two orders of magnitude smaller than the range-bounding fault zones between which they reside. Faults within the broad step define an anticlinal accommodation zone wherein east-dipping faults mainly occupy the western half of the accommodation zone and west-dipping faults lie in the eastern half of the accommodation zone. The 3D model of Tuscarora encompasses 70 small-offset normal faults that define the accommodation zone and a portion of the Independence Mountains fault zone, which dips beneath the geothermal field. The geothermal system resides in the axial part of the accommodation zone, straddling the two fault dip domains. The Tuscarora 3D geologic model consists of 10 stratigraphic units. Unconsolidated Quaternary alluvium has eroded down into bedrock units; the youngest and stratigraphically highest bedrock units are middle Miocene rhyolite and dacite flows regionally correlated with the Jarbidge Rhyolite and modeled with a uniform cumulative thickness of ~350 m. Underlying these lava flows are Eocene volcanic rocks of the Big Cottonwood Canyon caldera. These units are modeled as intracaldera deposits, including domes, flows, and thick ash deposits that change in thickness and locally pinch out. The Paleozoic basement consists of metasedimentary and metavolcanic rocks, dominated by argillite, siltstone, limestone, quartzite, and metabasalt of the Schoonover and Snow Canyon Formations. Paleozoic formations are lumped into a single basement unit in the model. Fault blocks in the eastern portion of the model are tilted 5-30 degrees toward the Independence Mountains fault zone. Fault blocks in the western portion of the model are tilted toward steeply east-dipping normal faults. These opposing fault block dips define a shallow extensional anticline. Geothermal production is from four closely spaced wells that exploit a west-dipping, NNE-striking fault zone near the axial part of the accommodation zone.

  10. System Dynamics Model | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    System Dynamics Model: Chemical Supply Chain Analysis. NISAC has developed a range of...

  11. Customer Prepay Impact Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Instructions for Use Inputs 1. To use this model, you will need to unprotect the workbook, by going to Select "Unprotect Workbook", and enter the password "bpa". 2. The Input...

  12. Refining climate models

    ScienceCinema (OSTI)

    Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel

    2014-06-26

    Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.

  13. HOMER® Energy Modeling Software

    Energy Science and Technology Software Center (OSTI)

    2000-12-31

    The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaics, hydropower, batteries, fuel cells, biomass, and other inputs.

  14. Battery Life Predictive Model

    Energy Science and Technology Software Center (OSTI)

    2009-12-31

    The Software consists of a model used to predict battery capacity fade and resistance growth for arbitrary cycling and temperature profiles. It allows the user to extrapolate from experimental data to predict actual life cycle.
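
    The abstract does not state the fade model's form, so the sketch below uses a generic empirical assumption (square-root-of-time calendar fade plus throughput-proportional cycle fade, with resistance growth tied to capacity loss); all coefficients are illustrative, not values from the software.

    ```python
    # Hedged sketch of a simple empirical life model of the general kind described
    # above: calendar fade grows with the square root of time, cycle fade with
    # throughput, and resistance grows in step. All coefficients are assumptions.
    import math

    def capacity_fade(days, full_cycles, temp_c):
        """Fractional capacity loss for a given age, cycle count, and temperature."""
        arrhenius = math.exp(0.06 * (temp_c - 25.0))          # assumed sensitivity
        calendar = 0.002 * arrhenius * math.sqrt(days)
        cycling = 0.00004 * full_cycles
        return calendar + cycling

    def resistance_growth(days, full_cycles, temp_c):
        """Fractional resistance increase, assumed proportional to capacity loss."""
        return 1.8 * capacity_fade(days, full_cycles, temp_c)

    for years in (1, 5, 10):
        fade = capacity_fade(days=365 * years, full_cycles=250 * years, temp_c=30.0)
        growth = resistance_growth(days=365 * years, full_cycles=250 * years, temp_c=30.0)
        print(f"{years:2d} yr: capacity {100 * (1 - fade):.1f}%, resistance +{100 * growth:.1f}%")
    ```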

  15. Advanced Target Effects Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced Target Effects Modeling for Ion Accelerators and other High-Energy-Density Experiments. Alice Koniges, Wangyi Liu, Steven Lidia, Thomas Schenkel, John Barnard...

  16. Renewable Model Documentation

    Gasoline and Diesel Fuel Update (EIA)

    sites is calculated by constructing a model of a representative 100-acre by 50-feet deep landfill site and by applying methane emission factors for high, low, and very low...

  17. Model Wind Ordinance

    Broader source: Energy.gov [DOE]

    In July, 2008 the North Carolina Wind Working Group, a coalition of state government, non-profit and wind industry organizations, published a model wind ordinance to provide guidance for...

  18. Theory, Modeling and Computation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

  19. Refining climate models

    SciTech Connect (OSTI)

    Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel

    2012-10-31

    Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.

  20. Community Atmosphere Model

    Energy Science and Technology Software Center (OSTI)

    2004-10-18

    The Community Atmosphere Model (CAM) is an atmospheric general circulation model that solves equations for atmospheric dynamics and physics. CAM is an outgrowth of the Community Climate Model at the National Center for Atmospheric Research (NCAR) and was developed as a joint collaborative effort between NCAR and several DOE laboratories, including LLNL. CAM contains several alternative approaches for advancing the atmospheric dynamics. One of these approaches uses a finite-volume method originally developed by personnel at NASA/GSFC. We have developed a scalable version of the finite-volume solver for massively parallel computing systems. FV-CAM is meant to be used in conjunction with the Community Atmosphere Model. It is not stand-alone.