OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Los Alamos works on a biologically realistic computer network

Abstract

Brain neuroscientists and computer scientists call this field neuromimetic computing—building computers inspired by how the cerebral cortex of the brain works. Cortical processing relies on billions of small biological “switches,” called neurons, which store and process information as the brain learns. Using an approach called neural networks, researchers are developing computers that simulate neurons and their interconnections. Such computers can then learn about their surroundings, interpret data, and make predictions based on it. In practice, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain’s cortical circuits have been stymied by limitations on computer memory and computational power. That has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. Trinity has unique capabilities designed for the stockpile stewardship mission, which includes highly complex nuclear simulations that have replaced the testing of nuclear weapons. This capability allows a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing. To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a “sparse prediction machine” that runs as a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data—in this case, various videos of a car driving down a road—without labeling the data in any way. The program then sorts through that data frame by frame, focuses on the important information, and develops a prediction about the car’s motion. With Trinity’s power, the Los Alamos team simulates the way a brain handles information in its neurons but uses the fewest neurons at any given moment to explain the information at hand. That’s the “sparse” part, and it makes the brain very efficient—and, hopefully, a computer more efficient, too.
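The “sparse” idea described above, using the fewest active neurons to explain each video frame, corresponds to what is generally known as sparse coding. The snippet below is a minimal, hypothetical sketch in Python/NumPy: it uses an iterative soft-thresholding (ISTA) solver on a random dictionary to find a small set of active “neurons” that reconstruct one frame. The dictionary D, the frame data, and all parameters are illustrative assumptions and are not taken from the Los Alamos system or its code.

```python
# Hypothetical sparse-coding sketch; not the Los Alamos implementation.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 256      # flattened 16x16 patch standing in for one video frame
n_neurons = 1024    # overcomplete set of simulated "neurons" (dictionary atoms)

# Random unit-norm dictionary standing in for learned neuron receptive fields.
D = rng.standard_normal((n_pixels, n_neurons))
D /= np.linalg.norm(D, axis=0)

# Stand-in for one unlabeled video frame (no labels, as in the abstract).
frame = rng.standard_normal(n_pixels)

def sparse_code(x, D, lam=0.1, n_iter=200):
    # Find a sparse activation vector a with x ~= D @ a via ISTA:
    # a gradient step on the reconstruction error, then soft-thresholding.
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, ord=2) ** 2   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)        # gradient of 0.5 * ||x - D a||^2
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

a = sparse_code(frame, D)
active = np.count_nonzero(a)
rel_err = np.linalg.norm(frame - D @ a) / np.linalg.norm(frame)
print(f"active neurons: {active} of {n_neurons}, relative reconstruction error: {rel_err:.3f}")
```

With a suitable sparsity penalty lam, only a small fraction of the 1,024 atoms end up nonzero, which is the sense in which the representation is sparse: few simulated neurons are active at any given moment, yet they explain the frame.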

Authors:
Kenyon, Garrett
Publication Date:
October 2017
Research Org.:
Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
1487142
Resource Type:
Multimedia
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; TRINITY; SPARSE PREDICTION MACHINE; NEUROMIMETIC COMPUTING

Citation Formats

Kenyon, Garrett. Los Alamos works on a biologically realistic computer network. United States: N. p., 2017. Web.
Kenyon, Garrett. Los Alamos works on a biologically realistic computer network. United States.
Kenyon, Garrett. 2017. "Los Alamos works on a biologically realistic computer network". United States. https://www.osti.gov/servlets/purl/1487142.
@article{osti_1487142,
title = {Los Alamos works on a biologically realistic computer network},
author = {Kenyon, Garrett},
abstractNote = {Brain neuroscientists and computer scientists call this field neuromimetic computing—building computers inspired by how the cerebral cortex of the brain works. Its cortical processes rely on billions of small biological “switches,” called neurons. To learn, they store and process information. Using an approach called neural networks, researchers are developing computers that simulate neurons and their interconnections. Then computers can learn about their surroundings, interpret data, and make predictions based on it. In practice, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain’s cortical circuits have been stymied by limitations on computer memory and computational power. All that has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. Trinity has unique capabilities designed for the stockpile stewardship mission, which includes highly complex nuclear simulations that have replaced the testing of nuclear weapons. All this capability means Trinity allows a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing. To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a “sparse prediction machine” that runs on a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data—in this case, various videos of a car driving down a road—without labeling the data in any way. Then the program sorts through that data frame by frame, focuses on the important information, and develops a prediction about the car’s motion. With Trinity’s power, the Los Alamos team simulates the way a brain handles information in its neurons but uses the fewest neurons at any given moment to explain the information at hand. That’s the “sparse” part, and it makes the brain very efficient—and, hopefully, a computer more efficient, too.},
place = {United States},
year = {2017},
month = {10}
}
