A GPU-Accelerated Population Generation, Sorting, and Mutation Kernel for an Optimization-Based Causal Inference Model
- University of Illinois at Urbana-Champaign, National Center for Supercomputing Applications
- Oak Ridge National Laboratory (ORNL)
We develop a GPU-accelerated generative adversarial network model that can be used with observational data to construct causal inferences. The theoretical basis of the model is novel and is designed to be operable and scalable on high-performance computing platforms. Our GPU-accelerated code enables large-scale parallelization of the computation within a common and accessible computing environment, expanding the reach of the model and enabling research in new substantive domains while preserving its underlying theoretical properties.
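The record does not reproduce the kernel code itself, but the population generation, sorting, and mutation stages named in the title map onto a common GPU evolutionary-optimization pattern: one thread per candidate solution for generation and mutation, with a device-side sort by fitness in between. The CUDA sketch below illustrates that generic pattern only; the flat array layout, kernel and variable names, placeholder gene-sum objective, and Gaussian mutation rule are assumptions made for illustration, not the authors' implementation.

```cuda
// Hypothetical sketch of a generate / evaluate / sort / mutate pipeline on the
// GPU. All names, sizes, the placeholder objective, and the Gaussian mutation
// rule are illustrative assumptions, not the kernel described in the paper.
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <thrust/device_ptr.h>
#include <thrust/sequence.h>
#include <thrust/sort.h>

__global__ void generate_population(float *pop, int n_ind, int n_genes,
                                     unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_ind) return;
    curandState s;
    curand_init(seed, i, 0, &s);                 // one RNG stream per individual
    for (int g = 0; g < n_genes; ++g)
        pop[i * n_genes + g] = curand_uniform(&s);
}

__global__ void eval_fitness(const float *pop, float *fit, int n_ind, int n_genes)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_ind) return;
    float acc = 0.0f;                            // placeholder objective: gene sum
    for (int g = 0; g < n_genes; ++g)
        acc += pop[i * n_genes + g];
    fit[i] = acc;
}

__global__ void mutate_population(float *pop, int n_ind, int n_genes,
                                   float sigma, unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_ind) return;
    curandState s;
    curand_init(seed, i, 0, &s);
    for (int g = 0; g < n_genes; ++g)            // Gaussian perturbation of each gene
        pop[i * n_genes + g] += sigma * curand_normal(&s);
}

int main(void)
{
    const int n_ind = 1 << 16, n_genes = 32;
    float *d_pop, *d_fit;
    int *d_rank;
    cudaMalloc(&d_pop,  sizeof(float) * n_ind * n_genes);
    cudaMalloc(&d_fit,  sizeof(float) * n_ind);
    cudaMalloc(&d_rank, sizeof(int)   * n_ind);

    const int threads = 256, blocks = (n_ind + threads - 1) / threads;
    generate_population<<<blocks, threads>>>(d_pop, n_ind, n_genes, 1234ULL);
    eval_fitness<<<blocks, threads>>>(d_pop, d_fit, n_ind, n_genes);

    // Sort individual indices by fitness entirely on the device via Thrust.
    thrust::device_ptr<float> fit(d_fit);
    thrust::device_ptr<int>   rank(d_rank);
    thrust::sequence(rank, rank + n_ind);
    thrust::sort_by_key(fit, fit + n_ind, rank);

    mutate_population<<<blocks, threads>>>(d_pop, n_ind, n_genes, 0.05f, 5678ULL);
    cudaDeviceSynchronize();

    cudaFree(d_pop); cudaFree(d_fit); cudaFree(d_rank);
    return 0;
}
```

Compiled with nvcc, a sketch of this shape keeps the whole generate, evaluate, sort, and mutate loop on the device, which reflects the property the abstract emphasizes: per-individual work parallelizes across GPU threads with no host round trips between stages.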
- Research Organization: Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: AC05-00OR22725
- OSTI ID: 2438997
- Resource Relation: Conference: The 13th International Workshop on Parallel and Distributed Algorithms for Decision Sciences (PDADS 2023), Salt Lake City, Utah, United States of America, 7 August 2023
- Country of Publication: United States
- Language: English
Similar Records
- FPGAs as a Service to Accelerate Machine Learning Inference [PowerPoint]. Technical Report, 2019. OSTI ID: 1570210
- ExaTN: Scalable GPU-Accelerated High-Performance Processing of General Tensor Networks at Exascale. Journal Article, 2022. Frontiers in Applied Mathematics and Statistics. OSTI ID: 1876320
- LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators. Conference, 2025. OSTI ID: 2563712