Title: Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis
Creator/Author: Sigeti, David E. [Los Alamos National Laboratory]; Pelak, Robert A. [Los Alamos National Laboratory]
Publication Date: 2012-09-11T04:00:00Z
OSTI Identifier: 1050516
Report Number(s): LA-UR-12-24643
DOE Contract Number: AC52-06NA25396
Other Number(s): TRN: US201218%%1038
Resource Type: Technical Report
Research Org.: Los Alamos National Laboratory (LANL)
Sponsoring Org.: DOE/LANL
Subject: 42 ENGINEERING; 97 MATHEMATICAL METHODS AND COMPUTING; COMPUTER CODES; DISTRIBUTION; FORECASTING; HYPOTHESIS; PRICES; PROBABILITY; SIMULATION; STATISTICS; TESTING
Description/Abstract: We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, {theta}, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for {theta} may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision.
We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of {theta} and, in particular, how there will always be a region around {theta} = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used as a kind of 'plan B metric' in the case that the analysis shows that {theta} is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a general beta-function prior for {theta}, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for {theta} used as a prior to inform the next stage of power analysis.
Country of Publication: United States
Language: English
System Entry Date: 2016-12-06T05:00:00Z
Full Text: https://www.osti.gov/scitech/servlets/purl/1050516
Bibliographic Citation: https://www.osti.gov/scitech/biblio/1050516
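The beta-binomial machinery the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: with a Beta(a, b) prior on theta and k "new code wins" observed in n experiments, the posterior is Beta(a + k, b + n - k), the confidence that the new code is an improvement is the posterior probability that theta > 1/2, and the pre-data power analysis sums, over possible outcomes k, the probability of reaching that confidence if the true theta were known. The sample data (k = 14 of n = 20), the 95% threshold, and the uniform Beta(1, 1) prior are assumptions for the example.

```python
# Hypothetical sketch of the beta-binomial analysis described above; the
# sample data (k, n) and thresholds are illustrative, not from the report.
from scipy.stats import beta, binom

def posterior(k, n, a=1.0, b=1.0):
    """Posterior for theta after k 'new code wins' in n trials, Beta(a, b) prior."""
    return beta(a + k, b + n - k)

def predictive_power(theta_true, n, c=0.95, a=1.0, b=1.0):
    """Predictive probability, prior to taking data, of reaching confidence c
    that theta > 1/2 with n experiments, if the true value is theta_true."""
    return sum(binom.pmf(k, n, theta_true)
               for k in range(n + 1)
               if beta(a + k, b + n - k).sf(0.5) >= c)

# Bayesian hypothesis test: confidence that the new code is an improvement.
post = posterior(k=14, n=20)        # illustrative data
confidence = post.sf(0.5)           # P(theta > 1/2 | data)
spread = post.std()                 # posterior std dev: the 'plan B metric'

# Power analysis: near theta = 1/2 it is improbable that n = 20 experiments
# will establish an improvement, while well above 1/2 it is nearly certain.
low = predictive_power(0.5, 20)
high = predictive_power(0.9, 20)
```

The sequential use described in the abstract falls out naturally: the posterior Beta(a + k, b + n - k) from one stage becomes the prior (a, b) for the power analysis of the next batch of simulations.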