Haskell is a lazy functional language with a strong static type system and excellent support for parallel programming. The language features of Haskell make it easier to write correct and maintainable programs, but execution speed often suffers from the high levels of abstraction. My thesis examines ways to make Haskell programs run faster by improving the effectiveness of low-level program optimizations traditionally used for imperative languages such as C. While much past research focuses on high-level optimizations that take advantage of the functional properties of Haskell, relatively little attention has been paid to the optimization opportunities in the low-level imperative code generated as part of the translation to machine code. In this thesis, I examine the properties of low-level Haskell code and techniques for optimizing it. I claim three unique contributions in this work.
The first contribution is to expose some of the properties of low-level Haskell code by measuring the mix of operations performed by a set of benchmark programs and comparing it to the low-level code generated from traditional programming languages. The low-level measurements reveal that Haskell programs have a code shape different from that of traditional programs. In particular, the control flow visible to the compiler is obscured by the lazy evaluation and higher-order functions that Haskell programs use. To reveal the control flow to the compiler and enable more program optimizations, I propose the use of trace-based compilation techniques. A trace-based compiler operates on single-entry, multiple-exit regions of frequently executed instructions called program traces.
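As a hypothetical illustration (not code from the thesis), the following sketch shows why lazy evaluation and higher-order functions obscure control flow: the call through `f` is indirect, and the list elements are unevaluated thunks whose evaluation code is not known until run time.

```haskell
module Main where

-- The compiler cannot statically resolve the control flow through
-- 'apply': the target of the call 'f x' is a run-time value.
apply :: (Int -> Int) -> Int -> Int
apply f x = f x   -- indirect call through a higher-order argument

main :: IO ()
main = do
  -- Each element of 'xs' is a thunk; no addition happens until the
  -- values are demanded by 'sum'.
  let xs = map (+1) [1, 2, 3 :: Int]
  print (apply (* 2) (sum xs))
```

From the compiler's perspective, both the entry into each thunk and the call through `f` are jumps to unknown targets, which is the kind of opaque control flow that trace-based compilation aims to uncover.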
My second contribution is a study of the effectiveness of a dynamic binary trace-based optimizer running on Haskell programs. My results show that while viable program traces occur frequently in Haskell programs, the overhead of maintaining the traces in a binary optimization system outweighs the benefit of running them. The large overheads of the dynamic system lead to the idea of building and optimizing traces offline, using a profiling run to find valid program traces.
My final contribution is to build and evaluate a static trace-based optimizer for Haskell programs. The static optimizer uses profiling data to find traces in a Haskell program and then restructures the code around the traces to increase the scope available to the low-level optimizer. My results show that we can successfully build traces in Haskell programs, and the optimized code yields a speedup of up to 86%, with an average speedup of 5% across 32 benchmarks.
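The shape of a single-entry, multiple-exit trace can be sketched as follows. This is an illustrative model only (the names `hotTrace`, `TraceExit`, and the guard conditions are invented for the example): the hot path is straight-line code, and each guard checks an assumption that held when the trace was recorded, exiting to the general-purpose code when it fails.

```haskell
-- Hypothetical model of a program trace with guarded side exits.
data TraceExit
  = Completed Int        -- trace ran to its single normal exit
  | SideExit String Int  -- a guard failed; control leaves the trace
  deriving (Eq, Show)

hotTrace :: Int -> TraceExit
hotTrace x
  | x < 0     = SideExit "guard: x >= 0" x        -- side exit 1
  | otherwise =
      let y = x * 2 + 1                           -- straight-line hot path
      in if y > 1000
           then SideExit "guard: y <= 1000" y     -- side exit 2
           else Completed y
```

Because the guarded hot path is a single straight-line region, a low-level optimizer can work over it as one extended block instead of many small ones separated by unknown control flow.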