OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms

Abstract

Purpose: To create and characterize a reference data set for testing image registration algorithms that transform a portal image (PI) to a digitally reconstructed radiograph (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four anatomical sites (head and neck, thoracic, abdominal, and pelvic) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom MATLAB software tool developed in house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize the inter- and intra-observer variation associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance (ANOVA). We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: One-way ANOVA showed no significant variation among the readers. Under Bland-Altman analysis the variation among the readers was acceptable, and it was higher on the PI than on the DRR. Conclusion: The larger variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the corresponding anatomical feature on the PI. We expect this to become more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation tells us to what degree of accuracy a manual registration can be performed. Research supported by William Beaumont Hospital Research Start Up Fund.

Authors:
Dankwa, A; Castillo, E; Guerrero, T [1]
  1. William Beaumont Hospital, Rochester Hills, MI (United States)
Publication Date:
2016
OSTI Identifier:
22634777
Resource Type:
Journal Article
Resource Relation:
Journal Name: Medical Physics; Journal Volume: 43; Journal Issue: 6; Other Information: (c) 2016 American Association of Physicists in Medicine; Country of input: International Atomic Energy Agency (IAEA)
Country of Publication:
United States
Language:
English
Subject:
60 APPLIED LIFE SCIENCES; 61 RADIATION PROTECTION AND DOSIMETRY; ACCURACY; ALGORITHMS; BIOMEDICAL RADIOGRAPHY; COMPUTER CODES; HEAD; HOSPITALS; IMAGE PROCESSING; IMAGES; NECK; PELVIS; SPATIAL RESOLUTION

Citation Formats

Dankwa, A, Castillo, E, and Guerrero, T. SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms. United States: N. p., 2016. Web. doi:10.1118/1.4956088.
Dankwa, A, Castillo, E, & Guerrero, T. SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms. United States. doi:10.1118/1.4956088.
Dankwa, A, Castillo, E, and Guerrero, T. 2016. "SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms". United States. doi:10.1118/1.4956088.
@article{osti_22634777,
title = {SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms},
author = {Dankwa, A and Castillo, E and Guerrero, T},
abstractNote = {Purpose: To create and characterize a reference data set for testing image registration algorithms that transform a portal image (PI) to a digitally reconstructed radiograph (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four anatomical sites (head and neck, thoracic, abdominal, and pelvic) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom MATLAB software tool developed in house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize the inter- and intra-observer variation associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance (ANOVA). We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: One-way ANOVA showed no significant variation among the readers. Under Bland-Altman analysis the variation among the readers was acceptable, and it was higher on the PI than on the DRR. Conclusion: The larger variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the corresponding anatomical feature on the PI. We expect this to become more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation tells us to what degree of accuracy a manual registration can be performed.
Research supported by William Beaumont Hospital Research Start Up Fund.},
doi = {10.1118/1.4956088},
journal = {Medical Physics},
number = 6,
volume = 43,
place = {United States},
year = 2016,
month = 6
}
  • A data set consisting of DNA sequences from a large-scale shotgun DNA cloning and sequencing project has been collected and posted for public release. The purpose is to propose a standard genomic DNA sequencing data set by which various algorithms and implementations can be tested. This set of data is divided into two subsets, one containing raw DNA sequence data (1023 clones) and the other consisting of the corresponding partially refined or edited DNA sequence data (820 clones). Suggested criteria or guidelines for this data refinement are presented so that algorithms for preprocessing and screening raw sequences may be developed. Development of such preprocessing, screening, aligning, and assembling algorithms will expedite large-scale DNA sequencing projects so that complete, unambiguous consensus DNA sequences can be made available to the general research community more quickly. Smaller-scale routine DNA sequencing projects will also be greatly aided by such computational efforts. 8 refs., 2 tabs.
  • The authors have developed a set of tools, genfrag, to fragment and optionally mutate a DNA sequence to generate benchmark data sets for testing DNA sequence assembly algorithms. Data parameters can be systematically and independently varied to explore the range of data -- and corresponding performance of assembly tools -- encountered on large-scale random, or "shotgun," sequencing projects. 13 refs., 1 fig.
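The genfrag tools themselves are only summarized above; as a loose, hypothetical analogue of what such a benchmark generator does, a toy fragmenter might sample random reads and inject point mutations at a controlled rate. The function name and parameters are illustrative, not genfrag's actual interface.

```python
import random

def shotgun_fragments(seq, n_reads, read_len, error_rate=0.0, seed=0):
    """Sample n_reads random substrings of length read_len from seq,
    optionally substituting bases at a per-base error rate -- a toy
    stand-in for a benchmark fragmenter like genfrag."""
    rng = random.Random(seed)
    bases = "ACGT"
    reads = []
    for _ in range(n_reads):
        start = rng.randrange(0, len(seq) - read_len + 1)
        read = list(seq[start:start + read_len])
        for i, b in enumerate(read):
            if rng.random() < error_rate:
                read[i] = rng.choice([c for c in bases if c != b])
        reads.append("".join(read))
    return reads

genome = "ACGTACGGTTCAGCTAACGGATCGCTA" * 4
reads = shotgun_fragments(genome, n_reads=20, read_len=15, error_rate=0.01)
```

Varying `n_reads`, `read_len`, and `error_rate` independently mirrors the "systematically and independently varied" parameter sweeps described in the abstract.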
  • We evaluated 4 volume-based automatic image registration algorithms from 2 commercially available treatment planning systems (Philips Syntegra and BrainScan). The algorithms based on cross correlation (CC), local correlation (LC), normalized mutual information (NMI), and BrainScan mutual information (BSMI) were evaluated with: (1) the synthetic computed tomography (CT) images, (2) the CT and magnetic resonance (MR) phantom images, and (3) the CT and MR head image pairs from 12 patients with brain tumors. For the synthetic images, the registration results were compared with known transformation parameters, and all algorithms achieved submillimeter accuracy in translation and subdegree accuracy in rotation. For the phantom images, the registration results were compared with those provided by frame- and marker-based manual registration. For the patient images, the results were compared with anatomical landmark-based manual registration to qualitatively determine how close the results were to a clinically acceptable registration. NMI and LC outperformed CC and BSMI, in the sense of being closer to a clinically acceptable result. As for robustness, NMI and BSMI outperformed CC and LC. A guideline for image registration in our institution was given, and final visual assessment is necessary to guarantee reasonable results.
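The NMI measure evaluated above can be sketched in a few lines. This is a generic histogram-based estimate of Studholme's normalized mutual information, not the Syntegra or BrainScan implementation; the bin count and test images are illustrative.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity
    histogram. Ranges from ~1 (independent images) to 2 (identical)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    h_a = entropy(pxy.sum(axis=1))   # marginal entropy of image A
    h_b = entropy(pxy.sum(axis=0))   # marginal entropy of image B
    h_ab = entropy(pxy.ravel())      # joint entropy
    return (h_a + h_b) / h_ab

rng = np.random.default_rng(0)
a = rng.random((64, 64))
nmi_self = normalized_mutual_information(a, a)                     # identical
nmi_rand = normalized_mutual_information(a, rng.random((64, 64)))  # unrelated
```

A registration optimizer would maximize this quantity over rigid transformation parameters; NMI's appeal for CT-MR pairs is that it assumes only a statistical, not linear, intensity relationship.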
  • Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of the treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes: voxels with standardized uptake value (SUV) decreases >30% were classified as responder, voxels with SUV increases >30% as progressor, and all other voxels as stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%-84%, 65%-81%, and 82%-89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had a moderate to significant impact on DSI for the progressor subvolume, with R_rigid = -0.60 (p = 0.01) and R_deformable = -0.46 (p = 0.01-0.20) averaging over all deformable algorithms. For stable-disease subvolumes, the correlations were significant (p < 0.001) for all registration algorithms, with R_rigid = -0.71 and R_deformable = -0.72. Progressor and stable-disease subvolumes resulting from rigid registration were in excellent agreement (DSI > 70%) for RMS_rigid < 150 HU. However, tumor deformation was observed to have a negligible effect on DSI for responder subvolumes, with insignificant |R| < 0.26, p > 0.27. Conclusions: This study demonstrated that deformable algorithms cannot be chosen arbitrarily; different deformable algorithms can result in large differences in voxel-based PET image comparison. For low tumor deformation (RMS_rigid < 150 HU), rigid and deformable algorithms yield similar results, suggesting deformable registration is not required for these cases.
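The voxel classification and Dice agreement measure used in the study above are straightforward to sketch. The 30% threshold comes from the abstract; the three-voxel arrays are hypothetical, and this is a generic illustration rather than the authors' pipeline.

```python
import numpy as np

def classify_response(suv_pre, suv_post, thresh=0.30):
    """Label each voxel by relative SUV change: 'responder' for a decrease
    of more than 30%, 'progressor' for an increase of more than 30%,
    'stable' otherwise."""
    change = (suv_post - suv_pre) / suv_pre
    labels = np.full(suv_pre.shape, "stable", dtype=object)
    labels[change < -thresh] = "responder"
    labels[change > thresh] = "progressor"
    return labels

def dice(mask_a, mask_b):
    """Dice similarity index between two binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Hypothetical 3-voxel tumor: SUV halves, stays flat, and rises 50%.
pre = np.array([2.0, 2.0, 2.0])
post = np.array([1.0, 2.0, 3.0])
labels = classify_response(pre, post)  # responder, stable, progressor
dsi = dice(np.array([1, 0, 1], bool), np.array([1, 1, 1], bool))  # 0.8
```

In the study, `dice` would be applied to the responder/progressor/stable masks produced under two different registration algorithms to quantify their agreement.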
  • Purpose: Deformable image registration (DIR) is increasingly being used in various clinical applications. Although there are several DIR packages, all making successful attempts at modeling complex anatomical changes using even more complex mathematical approximations, they are all subject to various uncertainties. Many studies have attempted to quantify the spatial uncertainty of DIR. This is the first study to compare the uncertainty of interfraction DIR for 5 different commercially available algorithms. The aim of this study was to benchmark the performance of the most commonly used DIR algorithms offered through these 5 software packages: Eclipse, MIM, Pinnacle, RaySearch, and Velocity. Methods: A set of 10 virtual H&N phantoms [Pukala et al. Med. Phys. 40(11) 2013] with known deformations was used to determine the spatial errors that might be seen when performing DIR. The "ground-truth" deformation vector field (DVF) was compared to the DVF output of the 5 commercially available algorithms in order to evaluate spatial errors for six regions of interest (ROIs): brainstem, cord, mandible, left parotid, right parotid, and the external body contour. Results: We found that each software package had varying uncertainties across the various ROIs, but they were generally all comparable to one another, with mean spatial errors for each algorithm below 3.5 mm for each ROI (averaged across all phantoms). We also found that no single algorithm was the clear winner over the other 4 algorithms. However, at times, we found huge maximum errors in our results (e.g. phantom #9 maximum errors: right parotid = 22.9 mm, external contour = 30.5 mm) with the varying DIR algorithms. Conclusion: Although our evaluation was limited to H&N patients, we show that our methods are a single-assessment analysis tool that could be used by any physicist, within any type of facility, to compare their DIR software before initiating widespread use within their daily radiotherapy practice.
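The core comparison in the benchmark above -- a ground-truth DVF against an algorithm's output, restricted to an ROI -- reduces to a per-voxel Euclidean error. A minimal sketch, assuming DVFs stored as (X, Y, Z, 3) displacement arrays (array shapes and the constant-shift example are illustrative, not the phantom data):

```python
import numpy as np

def mean_spatial_error(dvf_truth, dvf_test, roi_mask):
    """Mean Euclidean error (same units as the DVFs, e.g. mm) between a
    ground-truth and a test deformation vector field, restricted to one ROI.
    dvf_* are (X, Y, Z, 3) displacement arrays; roi_mask is boolean (X, Y, Z)."""
    error_magnitude = np.linalg.norm(dvf_truth - dvf_test, axis=-1)
    return error_magnitude[roi_mask].mean()

# Toy check: a test DVF off by a constant 1 mm shift in x inside the ROI.
truth = np.zeros((4, 4, 4, 3))
test = truth + np.array([1.0, 0.0, 0.0])
roi = np.ones((4, 4, 4), dtype=bool)
err = mean_spatial_error(truth, test, roi)
```

Repeating this per ROI and per phantom, then taking means and maxima, yields the 3.5 mm mean and 30.5 mm worst-case figures reported in the abstract.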