OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: Trusted Computing Technologies, Intel Trusted Execution Technology.

Abstract

We describe the current state of the art in Trusted Computing Technologies, focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and on tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch.

Critical systems perform operation-critical computations on high-importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release and unauthorized alteration: unauthorized users should neither access nor alter the sensitive input and output data; the computation contains intermediate data with the same requirements, and executes algorithms that unauthorized parties should not be able to learn or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network components increase the risk that sensitive input data, computation, and output data may be compromised.
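TXT's launch-time measurements build on the TPM's extend-and-compare model: each boot component is hashed into a Platform Configuration Register, and the final value is compared against a known-good ("golden") value. A minimal sketch of that idea follows; the component names and golden-value handling are hypothetical, not taken from the report.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Measure a hypothetical boot chain: firmware -> bootloader -> kernel.
components = [b"firmware-v1", b"bootloader-v2", b"kernel-5.4"]
pcr = bytes(32)  # PCRs start at all zeros
for blob in components:
    pcr = pcr_extend(pcr, hashlib.sha256(blob).digest())
golden = pcr  # value recorded for the known-good configuration

# A tampered component yields a different final PCR value,
# so the substitution is detectable at attestation time.
pcr_bad = bytes(32)
for blob in [b"firmware-v1", b"evil-bootloader", b"kernel-5.4"]:
    pcr_bad = pcr_extend(pcr_bad, hashlib.sha256(blob).digest())

assert pcr_bad != golden
```

Because the extend operation is order-sensitive and one-way, an attacker cannot reorder or replace a component and still reach the golden value.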

Authors:
Guise, Max Joseph; Wendt, Jeremy Daniel
Publication Date:
2011-01-01
Research Org.:
Sandia National Laboratories
Sponsoring Org.:
USDOE
OSTI Identifier:
1011228
Report Number(s):
SAND2011-0475
TRN: US201109%%416
DOE Contract Number:
AC04-94AL85000
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; ALGORITHMS; DOCUMENTATION; FOCUSING

Citation Formats

Guise, Max Joseph, and Wendt, Jeremy Daniel. Trusted Computing Technologies, Intel Trusted Execution Technology. United States: N. p., 2011. Web. doi:10.2172/1011228.
Guise, Max Joseph, & Wendt, Jeremy Daniel. Trusted Computing Technologies, Intel Trusted Execution Technology. United States. doi:10.2172/1011228.
Guise, Max Joseph, and Wendt, Jeremy Daniel. 2011. "Trusted Computing Technologies, Intel Trusted Execution Technology." United States. doi:10.2172/1011228. https://www.osti.gov/servlets/purl/1011228.
@article{osti_1011228,
title = {Trusted Computing Technologies, Intel Trusted Execution Technology.},
author = {Guise, Max Joseph and Wendt, Jeremy Daniel},
abstractNote = {We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release, and unauthorized alteration: Unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.},
doi = {10.2172/1011228},
place = {United States},
year = {2011},
month = {jan}
}

  • We begin with the following definitions: Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy. Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions), and sequence integrity (detection of changes in the locations of instructions within a program). Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.) FE is a necessary quality for computing. Without it we cannot trust computations. In the early days of computing, FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a "closed shop": all of the software that was used there was developed there. When an organization bought a large computer from a vendor, the organization would run its own operating system on that computer, use only its own editors, only its own compilers, only its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
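The two elements of program integrity defined above can be illustrated with a per-block MAC scheme: tagging each instruction block under a key detects bit changes (instruction integrity), and binding the block's position into the tag detects reordering (sequence integrity). This is only an illustrative sketch under an assumed shared key, not the report's scheme.

```python
import hashlib
import hmac

KEY = b"dev-signing-key"  # hypothetical key shared by developer and target

def tag(index: int, block: bytes) -> bytes:
    # MAC over the bytes gives instruction integrity; binding the
    # block index into the MAC adds sequence integrity.
    return hmac.new(KEY, index.to_bytes(4, "big") + block,
                    hashlib.sha256).digest()

program = [b"load r1, x", b"add r1, r2", b"store r1, y"]
tags = [tag(i, blk) for i, blk in enumerate(program)]

def verify(blocks, block_tags):
    return all(hmac.compare_digest(tag(i, b), t)
               for i, (b, t) in enumerate(zip(blocks, block_tags)))

assert verify(program, tags)                    # unmodified program passes
swapped = [program[1], program[0], program[2]]  # reorder two instructions
assert not verify(swapped, [tags[1], tags[0], tags[2]])  # reorder detected
```

Note the swapped program fails verification even though every block and its original tag are individually intact, because each tag is bound to a position.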
  • An analysis of variability in test script execution time was performed using the Shewhart Control Chart Method. These charts show that the CRAY systems are not in control with respect to test script execution times. However, when one analyzes the performance of specific tasks within the test script, the situation is improved. It was concluded that the test script itself is not a controlled process because it is the superposition of several distinct distributions, and that existing data do not form the basis for a simple method of performance monitoring. Suggestions are made regarding future actions which might lead to the desired result. 3 refs.
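The Shewhart individuals-chart idea used above reduces to computing a center line and three-sigma control limits from baseline data, then flagging points that fall outside them. A minimal sketch, with hypothetical timing data:

```python
import statistics

def control_limits(samples, k=3.0):
    """Shewhart individuals chart: center line +/- k standard deviations."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return mean - k * sigma, mean, mean + k * sigma

# Hypothetical baseline execution times for one task (seconds).
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
lo, center, hi = control_limits(baseline)

# Flag new measurements that fall outside the control limits.
new_runs = [10.0, 14.5, 9.9]
out_of_control = [t for t in new_runs if not (lo <= t <= hi)]
assert out_of_control == [14.5]
```

A process whose samples mix several distinct distributions, as the abstract describes, inflates sigma and produces frequent out-of-limit points, which is why per-task charts behave better than whole-script charts.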
  • The purpose of this study was to examine the use of distributed adaptive routing algorithms on concurrent-class computers. The implemented routing algorithm allowed each node to select the next node based on two criteria: the fewest number of hops, and the smallest delay time. This study was limited to the comparison of a distributed adaptive routing algorithm, implemented at the applications layer, with the current static routing and with a simulation of the current routing implemented at the applications layer. The comparison with the simulated current static routing provides a measure of the possible performance gain had the adaptive routing algorithm been implemented at the network layer. Each of the three configurations comprised four processes: a Host Process, a Routing Process, a Ring Control Process, and a Network Loading Process. The Host Process controlled the loading of the processes onto the IPSC, the Routing Process controlled the message routing, the Ring Control Process provided the baseline message passing, while the Network Loading Process provided communications congestion on selected links. The metric used to compare the Routing Process performance was the average delay time for passing a message around the ring.
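The abstract does not say how its two criteria are combined; one plausible reading applies them lexicographically (hop count first, link delay as tie-breaker). A sketch of that per-node selection rule, with a hypothetical neighbor table:

```python
# Hypothetical neighbor table at one node:
# next_hop -> (hops_to_destination, measured_link_delay_ms)
neighbors = {
    "B": (3, 12.0),
    "C": (2, 40.0),
    "D": (2, 15.0),
}

def pick_next_hop(table):
    # Primary criterion: fewest hops; tie-break: smallest delay.
    # Python compares the (hops, delay) tuples lexicographically.
    return min(table, key=lambda n: (table[n][0], table[n][1]))

# C and D tie on hop count (2), so the smaller delay wins.
assert pick_next_hop(neighbors) == "D"
```

In a distributed scheme each node would refresh its delay estimates from observed traffic and rerun this selection, which is what makes the routing adaptive.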
  • With the recent availability of commercial parallel computers, researchers are examining new classes of problems for benefits from parallel processing. This report presents results of an investigation of the set of problems classified as search-intensive. The specific problems discussed in this report are the backtracking search method of the N-queens problem and the Least-Cost Branch and Bound search of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithms for a specific computer. The experiments on the N-queens and deadline job scheduling problems included an analysis of the computation time to first solution, the computation time to all solutions, the speedup over a VAX 11/785, and the load balance of the problem when using an Intel Personal SuperComputer (IPSC). The IPSC is a loosely coupled multiprocessor system based on a hypercube architecture. Results are presented that compare the performance of the IPSC and VAX 11/785 for these classes of problems.
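The backtracking N-queens search mentioned above is compact enough to sketch sequentially; a parallel version like the report's would partition the first-row choices across processors. This sketch counts all solutions using the standard column/diagonal exclusion sets:

```python
def n_queens(n):
    """Count all N-queens solutions by depth-first backtracking,
    placing one queen per row."""
    solutions = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal solutions
        if row == n:
            solutions += 1
            return
        for col in range(n):
            # A square is attacked if its column or either diagonal
            # is already occupied.
            if col in cols or row + col in diag1 or row - col in diag2:
                continue
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)
            cols.discard(col); diag1.discard(row + col); diag2.discard(row - col)

    place(0)
    return solutions

assert n_queens(8) == 92  # the classic 8-queens solution count
```

Time-to-first-solution versus time-to-all-solutions, the two metrics the abstract analyzes, diverge sharply here: the first solution is found after a shallow prefix of the search, while counting all of them requires exhausting the tree.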
  • Developing dependable software for large, complex, real-time systems is one of the major challenges now facing the software industry. The software R and D community is responding to this challenge; numerous efforts have been initiated on various aspects of real-time software development. In this paper, we review and evaluate ongoing R and D efforts in light of the needs of strategic defense systems. We identify and discuss four recent developments that hold promise for facilitating the design and implementation of real-time software for strategic defense systems: (1) rate monotonic scheduling theory, (2) real-time extensions to the IEEE Portable Operating System Interface for Computer Environments (POSIX), (3) several distributed real-time operating system prototypes, and (4) various methods for enhancing real-time system robustness by trading precision of results for timeliness of results. We also point out an area of major concern to real-time software developers and, in particular, to the SDIO: the lack of analytical methods for evaluating the performance of complex real-time systems. We conclude with a series of recommendations on how the SDIO should follow up on the real-time R and D topics covered in the paper.
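Rate monotonic scheduling theory, the first development listed above, gives a simple sufficiency check: n periodic tasks are schedulable under fixed rate-monotonic priorities if their total processor utilization does not exceed n(2^(1/n) - 1). A sketch of that test, with a hypothetical task set:

```python
def rm_utilization_bound(n):
    """Liu-Layland sufficiency bound for rate monotonic scheduling:
    n tasks are schedulable if total utilization <= n * (2**(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

# Hypothetical periodic task set: (worst-case execution time, period).
tasks = [(1, 4), (1, 5), (2, 10)]
u = sum(c / p for c, p in tasks)  # 0.25 + 0.20 + 0.20 = 0.65

# For n = 3 the bound is about 0.7798, so this set passes the test.
assert u <= rm_utilization_bound(len(tasks))
```

The bound decreases toward ln 2 (about 0.693) as n grows, and it is only sufficient, not necessary: task sets above the bound may still be schedulable, but showing that requires an exact response-time analysis.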