
Effective Vectorization with OpenMP 4.5

Technical Report · DOI: https://doi.org/10.2172/1351758 · OSTI ID: 1351758
This report describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how they are implemented in different compilers. Modern processors are highly parallel computational machines that often include multiple processors, each capable of executing several instructions in parallel. Exploiting SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by performing a single operation on large groups of data at once. The SIMD model is so integral to a processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers can vectorize code using SIMD instructions, but many code properties that matter for SIMD vectorization cannot be determined at compile time. OpenMP addresses this by extending the C/C++ and Fortran programming languages with compiler directives that express SIMD parallelism. These directives pass hints to the compiler about the code to be executed with SIMD; they are a key resource for producing optimized code, but they do not change whether or not the code can use SIMD operations. In many cases, critical functions are limited by a poor understanding of how SIMD is actually implemented, since SIMD can be realized through vector instructions or simultaneous multi-threading (SMT). We have found that code often cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
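As an illustration of the kind of directive the report discusses, here is a minimal C sketch (not taken from the report; the function and data are invented for this example) that marks a loop with OpenMP's simd construct so the compiler may generate vector instructions for it. The reduction clause tells the compiler how to combine the partial sums held in separate SIMD lanes.

#include <stdio.h>

/* Hypothetical example: a dot product. The "omp simd" directive asks the
   compiler to vectorize the loop; reduction(+:sum) makes the accumulation
   across SIMD lanes well defined. */
static double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    #pragma omp simd reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

int main(void)
{
    double x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double y[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    printf("dot = %f\n", dot(x, y, 8)); /* prints 120.000000 */
    return 0;
}

Built with a compiler that understands the directive (for example, gcc -O2 -fopenmp-simd), the loop may be vectorized; as the abstract notes, the pragma is a hint and does not by itself guarantee that SIMD instructions are used.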
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF)
Sponsoring Organization:
DOE Office of Science; USDOE
DOE Contract Number:
AC05-00OR22725
OSTI ID:
1351758
Report Number(s):
ORNL/TM-2016/391
Country of Publication:
United States
Language:
English

Similar Records

Automatic Parallelization Using OpenMP Based on STL Semantics
Conference · June 2008 · OSTI ID: 945633

OpenMP in VASP: Threading and SIMD
Journal Article · December 2018 · International Journal of Quantum Chemistry · OSTI ID: 1493397

The Nexus task-parallel runtime system
Conference · December 1994 · OSTI ID: 390589
