OSTI.GOV U.S. Department of Energy
Office of Scientific and Technical Information

Title: Why Models Don't Forecast.

Conference ·
OSTI ID: 1022212

The title of this paper, Why Models Don't Forecast, has a deceptively simple answer: models don't forecast because people forecast. Yet this statement has significant implications for computational social modeling and simulation in national security decision making. Specifically, it points to the need for robust approaches to the problem of how people and organizations develop, deploy, and use computational modeling and simulation technologies. In the next twenty or so pages, I argue that the challenge of evaluating computational social modeling and simulation technologies extends far beyond verification and validation, and should include the relationship between a simulation technology and the people and organizations using it. This challenge of evaluation is not just one of the usability and usefulness of technologies; it extends to assessing how new modeling and simulation technologies shape human and organizational judgment. The robust and systematic evaluation of organizational decision making processes, and of the role of computational modeling and simulation technologies therein, is a critical problem for the organizations that promote, fund, develop, and seek to use computational social science tools, methods, and techniques in high-consequence decision making.

Research Organization:
Sandia National Laboratories (SNL), Albuquerque, NM, and Livermore, CA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC04-94AL85000
Report Number(s):
SAND2010-5203C; TRN: US201117%%683
Resource Relation:
Conference: Proposed for presentation at the National Research Council's "Unifying Social Frameworks" Workshop held August 15-18, 2010 in Washington, DC.
Country of Publication:
United States
Language:
English