U.S. Department of Energy
Office of Scientific and Technical Information

Verifying LLM generative agents reflect human behavior in contested information environments to effectively simulate disinformation campaigns (Proteus)

Technical Report
DOI: https://doi.org/10.2172/2547078 · OSTI ID: 2547078

Disinformation poses a significant and evolving threat in today's online environment. Individuals often struggle to detect disinformation, which in turn influences their behavior and decision-making. Our research examines the use of large language model (LLM) generative agents (LGAs) to replicate human behavior in order to better understand how disinformation spreads online. Using human subjects research, we first investigate how personality traits, individual differences, and demographic factors relate to decision-making in simulated online disinformation environments. We then examine whether LGAs, when assigned personality traits, demographic characteristics, and behavioral attributes, can effectively replicate human responses in the same simulated environments. Our findings indicate that LGAs can align with human decisions in these scenarios; however, alignment is contingent on scenario context, persona settings, and LLM selection. These results offer guidance for refining methodology in future research and for using LGAs to model complex national security challenges such as disinformation campaigns.
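The report itself does not include implementation details. As a purely hypothetical illustration of the kind of persona conditioning the abstract describes (assigning traits, demographics, and a scenario to an agent before eliciting a decision), a prompt for an LGA might be assembled like this; all field names, trait scales, and wording here are assumptions, not the authors' method:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """Hypothetical persona fields an LGA might be conditioned on."""
    age: int
    occupation: str
    openness: float           # illustrative Big Five trait on a 0-1 scale
    conscientiousness: float  # illustrative Big Five trait on a 0-1 scale

def persona_prompt(p: Persona, post: str) -> str:
    """Build a system prompt that assigns the persona, then presents the scenario."""
    traits = (
        f"You are a {p.age}-year-old {p.occupation}. "
        f"On a 0-1 scale, your openness is {p.openness:.1f} "
        f"and your conscientiousness is {p.conscientiousness:.1f}."
    )
    task = (
        "You will see a social media post. Decide whether you would "
        "share it, ignore it, or report it, and explain briefly."
    )
    return f"{traits}\n{task}\nPost: {post}"

if __name__ == "__main__":
    p = Persona(age=34, occupation="teacher", openness=0.7, conscientiousness=0.4)
    print(persona_prompt(p, "Breaking: miracle cure suppressed by officials!"))
```

The resulting string would then be passed to whichever LLM backs the agent; the abstract's finding that alignment varies with "persona settings and LLM selection" suggests both the prompt wording and the model choice materially affect agreement with human responses.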

Research Organization:
Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA); USDOE Laboratory Directed Research and Development (LDRD) Program
DOE Contract Number:
NA0003525
OSTI ID:
2547078
Report Number(s):
SAND2025-03624
Country of Publication:
United States
Language:
English