U.S. Department of Energy
Office of Scientific and Technical Information

Pretraining Billion-Scale Geospatial Foundational Models on Frontier

Conference

As AI workloads grow in scope, small task-specific models struggle to generalize and demand ever larger amounts of labeled training data. In contrast, Foundation Models (FMs) are trained on internet-scale unlabeled data via self-supervised learning and have been shown to adapt to a variety of tasks with minimal fine-tuning. Although large FMs have had significant impact in natural language processing and computer vision, efforts toward FMs for geospatial applications have been restricted to smaller models, as pretraining larger models requires very large computing resources equipped with state-of-the-art hardware accelerators. Current satellite constellations collect more than 100 TB of data per day, producing images that are billions of pixels in size and multimodal in nature. Such geospatial data poses unique challenges and opens new opportunities for developing FMs. We investigate billion-scale FMs and HPC training profiles for geospatial applications by pretraining on publicly available data, and we study end-to-end how performance and solution quality change as model size scales. Our larger 3B-parameter model achieves up to a 30% improvement in top-1 scene classification accuracy compared with a 100M-parameter model. Moreover, we detail performance experiments on the Frontier supercomputer, America's first exascale system, where we study different model- and data-parallel approaches using PyTorch's Fully Sharded Data Parallel (FSDP) library. Specifically, we study variants of the Vision Transformer (ViT) architecture, conducting performance analysis for ViT models with up to 15B parameters. By discussing throughput and performance bottlenecks under different parallelism configurations, we offer insights on how to leverage such leadership-class HPC resources when developing large models for geospatial imagery applications.
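To make the parallelism setup discussed above concrete, the following is a minimal sketch of how a ViT encoder can be sharded with PyTorch's FSDP library. It is not the authors' training code: the choice of timm's `vit_large_patch16_224`, the bfloat16 mixed-precision settings, and the torchrun-style launch (RANK/LOCAL_RANK environment variables) are all illustrative assumptions, not details reported in the paper.

```python
# Minimal sketch (not the paper's implementation): sharding a ViT with PyTorch FSDP.
# Assumes a torchrun-style launcher that sets RANK, WORLD_SIZE, and LOCAL_RANK;
# model name and precision settings are illustrative choices.
import functools
import os

import torch
import torch.distributed as dist
import timm
from timm.models.vision_transformer import Block
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy


def main():
    # NCCL backend (RCCL on Frontier's AMD GPUs) for collective communication.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Any ViT variant works here; sharding matters most for the larger ones.
    model = timm.create_model("vit_large_patch16_224", pretrained=False).cuda()

    # Shard at the granularity of individual transformer blocks.
    wrap_policy = functools.partial(
        transformer_auto_wrap_policy, transformer_layer_cls={Block}
    )

    model = FSDP(
        model,
        auto_wrap_policy=wrap_policy,
        sharding_strategy=ShardingStrategy.FULL_SHARD,  # shard params, grads, optimizer state
        mixed_precision=MixedPrecision(
            param_dtype=torch.bfloat16,
            reduce_dtype=torch.bfloat16,
            buffer_dtype=torch.bfloat16,
        ),
        device_id=local_rank,
    )

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    # ... pretraining loop: forward pass, self-supervised loss, backward, optimizer.step() ...


if __name__ == "__main__":
    main()
```

FSDP's `ShardingStrategy` (e.g. FULL_SHARD versus hybrid or no-shard variants) is one of the knobs behind the different parallelism configurations whose throughput and bottlenecks the paper compares.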

Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-00OR22725
OSTI ID:
2438962
Resource Relation:
38th IEEE International Parallel and Distributed Processing Symposium (IPDPS) - San Francisco, California, United States of America - 5/27/2024-5/31/2024
Country of Publication:
United States
Language:
English

Similar Records

OReole-FM: successes and challenges toward billion-parameter foundation models for high-resolution satellite imagery
Conference · October 2024 · OSTI ID: 2481211

Towards Diverse and Representative Global Pretraining Datasets for Remote Sensing Foundation Models
Conference · July 2024 · OSTI ID: 2447323

Optimizing Distributed Training on Frontier for Large Language Models
Conference · May 2024 · OSTI ID: 2438819
