Search engine for discovering works of Art, research articles, and books related to Art and Culture

Compiler Technology for Parallel Scientific Computation

View through CrossRef
There is a need for compiler technology that, given a source program, will generate efficient parallel code for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering, yet its use is limited by the high cost of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable, architecture-independent software for scientific computation, based on our experience with the equational programming language EPL. Our approach rests on program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. Program decomposition is guided by source-program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by compiler-generated code that redistributes computation and data during object-program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.
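The wavefront determination mentioned in the abstract can be illustrated with a small sketch. This is not the paper's EPL compiler, just the general technique: for a 2D recurrence where cell (i, j) depends on cells (i-1, j) and (i, j-1), all cells on the same anti-diagonal i + j = d are mutually independent, so each diagonal's cells may be computed in parallel once the previous diagonal is done.

```python
# Illustrative sketch of wavefront determination (assumed example,
# not taken from the article): schedule a 2D recurrence
#   a[i][j] = f(a[i-1][j], a[i][j-1])
# by anti-diagonals. Cells on one diagonal have no mutual
# dependences and form one parallel "wave".

def wavefront(n, m, f, init):
    a = [[0] * m for _ in range(n)]
    # boundary cells have no interior dependences
    for i in range(n):
        a[i][0] = init
    for j in range(m):
        a[0][j] = init
    for d in range(2, n + m - 1):  # anti-diagonal index d = i + j
        # every (i, j) with i + j == d is independent of the others,
        # so this inner loop could be executed in parallel
        for i in range(max(1, d - m + 1), min(n, d)):
            j = d - i
            a[i][j] = f(a[i - 1][j], a[i][j - 1])
    return a

# with f = addition and init = 1, this fills a Pascal-like table
grid = wavefront(4, 4, lambda up, left: up + left, 1)
```

The sequential loop over `d` preserves the only true dependences; a real compiler would emit the inner loop as a parallel region and place a barrier between waves.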

Related Results

The verified CakeML compiler backend
AbstractThe CakeML compiler is, to the best of our knowledge, the most realistic verified compiler for a functional programming language to date. The architecture of the compiler, ...
High-level compiler analysis for OpenMP
Nowadays, applications from dissimilar domains, such as high-performance computing and high-integrity systems, require levels of performance that can only be achieved by means of s...
A Study on Parallel Computation for 3D Magneto‐Telluric Modeling Using the Staggered‐Grid Finite Difference Method
AbstractComputation time and memory requirements are two common problems for magnetotelluric (MT) modeling of three‐dimensional conductivity structure. We develop a new parallel pr...
Mapping Ada onto embedded systems: memory constraints
Running Ada programs on a self-targeting system with "virtually" unlimited memory (such as a mainframe), is quite different from running Ada on an embedded target. On self-targetin...
CakeML
We have developed and mechanically verified an ML system called CakeML, which supports a substantial subset of Standard ML. CakeML is implemented as an interactive read-eval-print ...
Nature Inspired Parallel Computing
Parallel computing is more and more important for science and engineering, but it is not used so widely as serial computing. People are used to serial computing and feel parallel c...
Three‐Dimensional Magnetotelluric Parallel Inversion Algorithm Using the Data‐Space Method
AbstractUp until now, the key issue in practical applications of three‐dimensional magnetotelluric (3D MT) inversion is low efficiency of computation. By further analysis of the da...
Some studies on 100% banana parallel laid and 60:40% banana: polypropylene cross laid non-woven fabrics
AbstractGlobal trend towards sustainable developments have brought natural, renewable biodegradable raw material into the focus, but due to lack of technical knowhow, only a small ...