Understanding Generative AI Content with Embedding Models
The construction of high-quality numerical features is critical to any quantitative data analysis. Feature engineering has historically been addressed by carefully hand-crafting data representations based on domain expertise. This work views the internal representations of modern deep neural networks (DNNs), called embeddings, as an implicit form of traditional feature engineering. For trained DNNs, we show that these embeddings can reveal interpretable, high-level concepts in unstructured sample data. We use these embeddings in natural language and computer vision tasks to uncover both inherent heterogeneity in the underlying data and human-understandable explanations for it. In particular, we find empirical evidence that there is inherent separability between real data and data generated by AI models.
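The sketch below illustrates the general idea described in the abstract, not the report's actual pipeline: embed samples with a pretrained DNN and check whether a simple linear probe can separate human-written from AI-generated text. The model choice (bert-base-uncased), mean pooling, the linear probe, and the placeholder corpora are illustrative assumptions.

```python
# Hedged sketch: test whether DNN embeddings linearly separate real vs. AI-generated text.
# Model, pooling strategy, and toy data are assumptions for illustration only.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    """Mean-pool the last hidden state to get one embedding vector per text."""
    with torch.no_grad():
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state        # (batch, seq, dim)
        mask = batch["attention_mask"].unsqueeze(-1)     # (batch, seq, 1)
        pooled = (hidden * mask).sum(1) / mask.sum(1)    # masked mean pooling
    return pooled.numpy()

# Hypothetical placeholder corpora; in practice these would be real documents
# and outputs sampled from a generative model.
human_texts = ["Example human-written sentence."] * 8
ai_texts = ["Example model-generated sentence."] * 8

X = np.vstack([embed(human_texts), embed(ai_texts)])
y = np.array([0] * len(human_texts) + [1] * len(ai_texts))

# If the embeddings carry a real-vs-generated signal, even a linear probe
# should separate the two classes well above chance.
probe = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(probe, X, y, cv=4).mean())
```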
- Research Organization:
- Pacific Northwest National Laboratory
- Sponsoring Organization:
- DOE
- DOE Contract Number:
- AC05-76RL01830
- OSTI ID:
- 2481996
- Country of Publication:
- United States
- Language:
- English