This project is funded by the NSF ITR program and by Texas ATP. Project members are: Keith Cooper, Linda Torczon, and Devika Subramanian.

The context

At design time, the compiler writer selects a set of optimizations and an order in which to apply them. These choices have a direct impact on the quality of code that the compiler can generate. Making good choices is difficult for several reasons: the sheer number of transformations described in the literature is daunting; the relationship between features of the input program and the improvements (or degradations) a transformation produces has not been formally characterized; and the interactions between transformations are poorly understood.

The question

Can we build an adaptive compiler that changes its selection of transformations and its application order based on the input program, the performance characteristics of the target machine, and an explicit objective function? Such a compiler would sample the space of transformation sequences in a randomized manner, learning a model of the impact of various sequences on a given program, and adapting its sampling based on the learned model.
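The sampling loop described above can be sketched in a few lines. This is a minimal illustration, not the project's implementation: the pass names are hypothetical, and the objective function is a toy stand-in for actually compiling the program and measuring the generated code.

```python
import random

# Hypothetical transformation passes; a real adaptive compiler draws from a
# much larger catalog of transformations.
PASSES = ["constprop", "dce", "cse", "licm", "inline"]

def evaluate(sequence):
    """Toy objective function returning a cost for a pass sequence.
    A real compiler would apply the passes to the program and measure
    the result (e.g., dynamic operation count or runtime)."""
    cost = 100.0
    for i, p in enumerate(sequence):
        cost -= {"constprop": 3, "dce": 2, "cse": 2, "licm": 1, "inline": 1}[p]
        # Model one interaction: dead code elimination pays off more
        # immediately after constant propagation.
        if p == "dce" and i > 0 and sequence[i - 1] == "constprop":
            cost -= 4
    return cost

def adaptive_search(seq_len=5, iterations=200, seed=0):
    """Biased random sampling over transformation sequences.
    Pass weights are boosted whenever a sampled sequence improves on the
    best seen so far, so later samples favor passes that helped."""
    rng = random.Random(seed)
    weights = {p: 1.0 for p in PASSES}
    best_seq, best_cost = None, float("inf")
    for _ in range(iterations):
        seq = rng.choices(PASSES, weights=[weights[p] for p in PASSES],
                          k=seq_len)
        cost = evaluate(seq)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
            for p in seq:
                weights[p] *= 1.2  # reinforce passes in improving sequences
    return best_seq, best_cost
```

The reweighting step is the crudest possible "learned model"; the point is only the shape of the loop: sample, evaluate against an explicit objective, and bias future sampling toward what worked.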

Preliminary Results on fmin

We have built such a compiler. It has allowed us to explore the impact of transformation selection and ordering on the code that the compiler produces, and to characterize the search space in which the adaptive compiler operates. We present our results on a benchmark program called fmin. We chose fmin, a 150-line Fortran program with a complex control-flow graph and forty-four basic blocks, because it displays interesting behavior from an optimization standpoint and is small enough to study exhaustively.
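What "studying the space exhaustively" looks like can be sketched as follows. The pass names and cost model are hypothetical stand-ins (the real study compiles fmin under each sequence and measures the result); the sketch enumerates every sequence, finds the global best, and counts local minima under a change-one-pass neighborhood, which is one way to characterize how hard the space is to search.

```python
import itertools

# A deliberately tiny catalog so the full space (3^3 = 27 sequences)
# can be enumerated; the real study is larger but follows the same plan.
PASSES = ["constprop", "dce", "cse"]
SEQ_LEN = 3

def evaluate(seq):
    """Toy cost model; a real study measures code compiled under seq."""
    cost = 10.0
    for i, p in enumerate(seq):
        cost -= {"constprop": 2, "dce": 1, "cse": 1}[p]
        if p == "dce" and i > 0 and seq[i - 1] == "constprop":
            cost -= 2  # interaction between adjacent passes
    return cost

def neighbors(seq):
    """All sequences differing from seq in exactly one position."""
    for i in range(len(seq)):
        for p in PASSES:
            if p != seq[i]:
                yield seq[:i] + (p,) + seq[i + 1:]

def characterize():
    """Exhaustively score the space and count its local minima."""
    space = list(itertools.product(PASSES, repeat=SEQ_LEN))
    costs = {s: evaluate(s) for s in space}
    minima = [s for s in space
              if all(costs[s] <= costs[n] for n in neighbors(s))]
    return len(space), min(costs.values()), len(minima)
```

The number of local minima relative to the size of the space is one summary of the terrain a randomized search must navigate: many minima of varying quality make the ordering problem genuinely hard.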

Publications

Talks