OSTI.GOV
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Machine learning to analyze images of shocked materials for precise and accurate measurements

Authors and affiliations:
Dresselhaus-Cooper, Leora [1]; Howard, Marylesa [2]; Hock, Margaret C. [2]; Meehan, B. T. [2]; Ramos, Kyle J. [3]; Bolme, Cindy A. [3]; Sandberg, Richard L. [3]; Nelson, Keith A. [1]
  1. Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA; Institute for Soldier Nanotechnology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
  2. Nevada National Security Site, North Las Vegas, Nevada 89030, USA
  3. Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Publication Date: September 14, 2017
Sponsoring Org.:
OSTI Identifier:
Grant/Contract Number:
25946-3183; AC52-06NA25396
Resource Type:
Journal Article: Publisher's Accepted Manuscript
Journal Name:
Journal of Applied Physics
Additional Journal Information:
Journal Volume: 122; Journal Issue: 10; Related Information: CHORUS Timestamp: 2017-11-09 15:38:44; Journal ID: ISSN 0021-8979
Publisher:
American Institute of Physics
Country of Publication:
United States

Citation Formats

Dresselhaus-Cooper, Leora, Howard, Marylesa, Hock, Margaret C., Meehan, B. T., Ramos, Kyle J., Bolme, Cindy A., Sandberg, Richard L., and Nelson, Keith A. Machine learning to analyze images of shocked materials for precise and accurate measurements. United States: N. p., 2017. Web. doi:10.1063/1.4998959.
Dresselhaus-Cooper, Leora, Howard, Marylesa, Hock, Margaret C., Meehan, B. T., Ramos, Kyle J., Bolme, Cindy A., Sandberg, Richard L., & Nelson, Keith A. Machine learning to analyze images of shocked materials for precise and accurate measurements. United States. doi:10.1063/1.4998959.
Dresselhaus-Cooper, Leora, Howard, Marylesa, Hock, Margaret C., Meehan, B. T., Ramos, Kyle J., Bolme, Cindy A., Sandberg, Richard L., and Nelson, Keith A. 2017. "Machine learning to analyze images of shocked materials for precise and accurate measurements". United States. doi:10.1063/1.4998959.
@article{dresselhaus_cooper_2017,
  title = {Machine learning to analyze images of shocked materials for precise and accurate measurements},
  author = {Dresselhaus-Cooper, Leora and Howard, Marylesa and Hock, Margaret C. and Meehan, B. T. and Ramos, Kyle J. and Bolme, Cindy A. and Sandberg, Richard L. and Nelson, Keith A.},
  abstractNote = {},
  doi = {10.1063/1.4998959},
  journal = {Journal of Applied Physics},
  number = {10},
  volume = {122},
  place = {United States},
  year = {2017},
  month = {sep}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record at 10.1063/1.4998959

  • A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.
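The published LADA code is not reproduced on this page, but the underlying idea (classify each pixel by how well each class's intensity distribution explains the pixel's local neighborhood) can be sketched in toy form. Everything below, including the Gaussian class models, the `local_adaptive_segment` function, and the synthetic "shock front" image, is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

def local_adaptive_segment(img, class_means, class_stds, window=5):
    """Assign each pixel to the class whose Gaussian intensity model
    best explains the mean intensity of its local neighborhood."""
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + window, j:j + window].mean()
            # Gaussian log-likelihood of the local mean under each class
            ll = [-0.5 * ((local_mean - m) / s) ** 2 - np.log(s)
                  for m, s in zip(class_means, class_stds)]
            labels[i, j] = int(np.argmax(ll))
    return labels

# Synthetic "shock front": dark region (class 0) next to bright region (class 1)
img = np.concatenate([np.full((8, 8), 0.2), np.full((8, 8), 0.8)], axis=1)
img += np.random.default_rng(0).normal(0, 0.02, img.shape)
seg = local_adaptive_segment(img, class_means=[0.2, 0.8], class_stds=[0.05, 0.05])
```

Because the decision uses the neighborhood statistics rather than the raw pixel alone, isolated noisy pixels do not flip class, which is the property the abstract emphasizes for locating feature boundaries.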
  • Purpose: We investigated whether integrating machine learning and bioinformatics techniques on genome-wide association study (GWAS) data can improve the performance of predictive models of the risk of developing radiation-induced late rectal bleeding and erectile dysfunction in prostate cancer patients. Methods: We analyzed a GWAS dataset generated from 385 prostate cancer patients treated with radiotherapy. Using genotype information from these patients, we designed a machine-learning-based predictive model of late radiation-induced toxicities: rectal bleeding and erectile dysfunction. The model building process was performed using 2/3 of the samples (training), and the predictive model was tested with the remaining 1/3 of the samples (validation). To identify important single nucleotide polymorphisms (SNPs), we computed a SNP importance score from our random forest regression model. We performed gene ontology (GO) enrichment analysis for genes near the important SNPs. Results: After univariate analysis on the training dataset, we filtered out SNPs with p>0.001, leaving 749 and 367 SNPs for the model building process for rectal bleeding and erectile dysfunction, respectively. On the validation dataset, our random forest regression model achieved an area under the curve (AUC) of 0.70 and 0.62 for rectal bleeding and erectile dysfunction, respectively. We performed GO enrichment analysis for the top 25%, 50%, 75%, and 100% of the SNPs selected in the univariate analysis. When we used the top 50% of SNPs, more plausible biological processes were obtained for both toxicities. An additional test with the top 50% of SNPs improved predictive power, with AUC=0.71 and 0.65 for rectal bleeding and erectile dysfunction. Performance improved further, to AUC=0.67, when age and androgen deprivation therapy were added to the model for erectile dysfunction. Conclusion: Our approach, which combines machine learning and bioinformatics techniques, enabled the design of better models and identified more plausible biological processes associated with the outcomes.
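The modeling pipeline described above (random forest regression on genotypes, AUC evaluated on a held-out third, per-SNP importance scores used to rank SNPs for GO analysis) can be sketched with scikit-learn. The data here are entirely synthetic stand-ins; the number of SNPs, the signal strength, and all variable names are assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_patients, n_snps = 385, 200                       # placeholder sizes
X = rng.integers(0, 3, size=(n_patients, n_snps)).astype(float)  # genotypes 0/1/2
# Make the first 10 SNPs weakly predictive of a binary toxicity outcome
risk = X[:, :10].sum(axis=1)
y = (risk + rng.normal(0, 2, n_patients) > np.median(risk)).astype(int)

# 2/3 training, 1/3 validation, mirroring the split in the abstract
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1/3, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_va, rf.predict(X_va))         # regression score -> AUC
importance = rf.feature_importances_                # per-SNP importance scores
top_snps = np.argsort(importance)[::-1][:20]        # rank SNPs, e.g. for GO analysis
```

A regressor (rather than a classifier) is used here deliberately, because the abstract describes a random forest *regression* model whose continuous output is scored with AUC.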
  • Purpose: To investigate a method to automatically recognize the treatment site in X-ray portal images. It could be useful for detecting potential treatment errors and for guiding subsequent tasks, e.g., automatically verifying the patient's daily setup. Methods: The portal images were exported from MOSAIQ as DICOM files and were (1) processed with a threshold-based intensity transformation algorithm to enhance contrast, and (2) down-sampled (from 1024×768 to 128×96) using a bi-cubic interpolation algorithm. An appearance-based vector space model (VSM) was used to rearrange the images into vectors. A principal component analysis (PCA) method was used to reduce the vector dimensions. A multi-class support vector machine (SVM) with a radial basis function kernel was used to build the treatment site recognition models. These models were then used to recognize the treatment sites in the portal images. Portal images of 120 patients were included in the study. The images were selected to cover six treatment sites: brain, head and neck, breast, lung, abdomen, and pelvis. Each site had images from twenty patients. Cross-validation experiments were performed to evaluate the performance. Results: The MATLAB Image Processing Toolbox and scikit-learn (a machine learning library in Python) were used to implement the proposed method. The average accuracies using the AP and RT images separately were 95% and 94%, respectively. The average accuracy using the AP and RT images together was 98%. Computation time was ∼0.16 seconds per patient with an AP or RT image, and ∼0.33 seconds per patient with both AP and RT images. Conclusion: The proposed method of treatment site recognition is efficient and accurate. It is not sensitive to differences in image intensity, size, and patient position in the portal images. It could be useful for patient safety assurance. The work was partially supported by a research grant from Varian Medical Systems.
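The vectorize / PCA / multi-class RBF-SVM pipeline described above can be sketched with scikit-learn, which the abstract itself names. The synthetic vectors below stand in for the flattened 128×96 portal images (shrunk to 50 dimensions here purely for speed); the data and resulting accuracy are illustrative, not the study's results:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_site = 20                                   # twenty patients per site
sites = ["brain", "head-neck", "breast", "lung", "abdomen", "pelvis"]
# Synthetic stand-in for flattened portal images: each site gets a
# distinct mean intensity pattern plus noise
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_site, 50))
               for i in range(len(sites))])
y = np.repeat(np.arange(len(sites)), n_per_site)  # site label per image

model = make_pipeline(PCA(n_components=10),           # dimensionality reduction
                      SVC(kernel="rbf", gamma="scale"))  # multi-class RBF SVM
scores = cross_val_score(model, X, y, cv=5)           # cross-validation accuracy
```

`SVC` handles the multi-class case internally (one-vs-one), so no extra wrapper is needed for six treatment sites.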
  • Purpose: Accurate image segmentation is a crucial step during image-guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm uses normalized mutual information as the similarity metric and affine registration combined with multiresolution B-spline registration, and then fuses the results using the label fusion strategy in Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C), and stable point (S). The box feature B is novel: it describes the relative position from each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and the stable-point feature S is the distance from the sample point to the landmarks. We then adopt the random forest (RF) classifier in scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method improved from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
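The Dice similarity coefficient used for validation above has a compact definition, 2|A∩B| / (|A| + |B|), and can be computed directly from two binary masks. The masks below are a made-up example, not data from the study:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "manual" and "automated" contours on a 10x10 grid
manual = np.zeros((10, 10), dtype=int)
manual[2:8, 2:8] = 1                 # 36 voxels
auto = np.zeros((10, 10), dtype=int)
auto[3:8, 2:8] = 1                   # 30 voxels, overlapping 30 of them
score = dice(manual, auto)           # 2*30 / (36 + 30) = 10/11
```

A score of 1 means perfect overlap and 0 means none, which is why the jump from 0.11 to 0.52 for the optic chiasm represents a substantial improvement.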
  • Purpose: In radiation treatment planning, delineation of the gross tumor volume (GTV) is very important because the GTV affects the accuracy of the radiation therapy procedure. To assist radiation oncologists in delineating GTV regions during treatment planning for lung cancer, we have proposed a machine-learning-based delineation framework for GTV regions of solid and ground glass opacity (GGO) lung tumors, followed by an optimum contour selection (OCS) method. Methods: Our basic idea was to feed voxel-based image features around GTV contours determined by radiation oncologists into a machine learning classifier in the training step, after which the classifier produced the degree of GTV membership for each voxel in the testing step. Ten data sets of planning CT and PET/CT images were selected for this study. A support vector machine (SVM), which learned voxel-based features (the voxel value and the magnitude of the image gradient vector obtained from each voxel in the planning CT and PET/CT images), extracted initial GTV regions. The final GTV regions were determined using the OCS method, which selects a globally optimum object contour based on multiple active delineations with a level set method around the GTV. To evaluate the proposed framework on ten cases (solid: 6, GGO: 4), we used the three-dimensional Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs delineated by the radiation oncologists and by the proposed framework. Results: The proposed method achieved an average three-dimensional DSC of 0.81 for the ten lung cancer patients, while a standardized-uptake-value-based method segmented GTV regions with a DSC of 0.43. The average DSCs for solid and GGO tumors obtained by the proposed framework were 0.84 and 0.76, respectively. Conclusion: The proposed framework with the support vector machine may be useful for assisting radiation oncologists in delineating solid and GGO lung tumors.
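The voxel-feature idea above (describe each voxel by its intensity and image-gradient magnitude, then score GTV membership with an SVM) can be sketched on a synthetic 2D image. This is an assumed illustration, not the authors' framework; it omits the PET features and the OCS/level-set refinement step:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
ct = rng.normal(0, 1, (32, 32))
ct[10:22, 10:22] += 3.0                     # bright synthetic "tumor" region
labels = np.zeros((32, 32), dtype=int)
labels[10:22, 10:22] = 1                    # oncologist-style reference mask

# Per-voxel features: intensity and gradient magnitude
gy, gx = np.gradient(ct)
grad_mag = np.hypot(gx, gy)
features = np.stack([ct.ravel(), grad_mag.ravel()], axis=1)

# SVM scores GTV membership voxel by voxel; predicting on the same image
# here stands in for the train/test split described in the abstract
svm = SVC(kernel="rbf").fit(features, labels.ravel())
pred = svm.predict(features).reshape(32, 32)   # initial GTV mask
```

In the paper this initial mask would then be refined by the OCS method; here the raw SVM output is the final result of the sketch.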