The Scalar Compiler Group at Rice engages in research intended
to improve the quality of optimization and code generation for
microprocessor-based, uniprocessor systems.
(We work closely with the folks at Rice who focus on compilers
for parallel machines, as well.
However, our focus remains uniprocessor systems.)
In the past, this group has been known as the
Massively Scalar Compiler Project (MSCP).
In general, the goals of the Scalar Compiler Group are
- to develop new techniques in code optimization and code
generation that represent real improvements over existing
methods,
- to transfer this knowledge to industrial compiler groups
in a way that enables them to quickly evaluate and deploy
new methods, and
- to implement and distribute prototype implementations that
demonstrate these new techniques, show the kind of engineering
required to make them practical, and serve as a guide for
reimplementation in both commercial and research systems.
Our primary focus is on problems that arise in uniprocessor,
microprocessor-based systems, although we try to ensure that our
techniques work equally well in multiprocessor systems.
Over time, our work has been supported by a variety of funding sources, including
the US Department of Energy (through the Los Alamos Computer Science
Institute), the National Science Foundation (NSF), the Defense Advanced Research
Projects Agency (DARPA), and the State of Texas Advanced Technology Program (ATP).
Problems that we attack
In general, we investigate problems that arise in compiling code for
uniprocessor, microprocessor-based systems. Our particular areas of
interest are code improvement techniques (often misnamed "optimization") and
code generation issues. A common misconception is that
these problems were all solved by 1980; in practice, many of them remain open.
Algorithm development and experimentation are integral
parts of developing better compiler technology.
We choose problems by looking at the reasons that compiled code
does not meet the user's expectations.
The first step in most
of our investigations is to examine the output of the compiler and
to understand what improvement is possible.
The second step is to examine past work on the problem;
this involves reading the literature and
talking at length with people in academia and in industry.
The third step is to devise a new approach to the problem;
it is followed quickly by implementation and experimentation.
This usually leads to a protracted cycle of refinement, reimplementation, and
further experimentation.
We tend to recommend a technique only when we would be happy to buy a
compiler that used it.
Experimentation is critical to our work. Many of the interesting problems
in compilation are, in essence, resource allocation problems. Most of these
are NP-complete (e.g., register allocation and instruction scheduling). Thus, our
techniques try to choose a good solution out of an exponential-sized
(or larger) set. Compilers are, in general, constrained to using
polynomial-time algorithms. This has two key consequences. First,
compilers cannot afford to discover guaranteed optimal solutions to
these problems. Second, ideas that look good at the marker-board often
fall apart in practice. (We say that they "rationalize well.")
Thus, experimental validation is an essential
part of discovering techniques that genuinely improve run-time performance.
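As an illustration of such a polynomial-time heuristic (a sketch for exposition, not code from our compilers; the function name and graph representation are invented here), consider a Chaitin/Briggs-style graph-coloring register allocator. It cannot afford to search the exponential space of assignments, so it greedily simplifies the interference graph and accepts occasional spills:

```python
def color_interference_graph(graph, k):
    """Try to assign each variable one of k registers (colors).

    graph maps each variable to the set of variables that are live
    at the same time (its interference neighbors).
    Returns (coloring, spilled).
    """
    degree = {v: len(neighbors) for v, neighbors in graph.items()}
    stack, spill_candidates, removed = [], [], set()

    # Simplify: repeatedly remove a node with degree < k, which is
    # trivially colorable. If none exists, pick a spill candidate
    # (here, simply the highest-degree node) and remove it anyway.
    while len(removed) < len(graph):
        candidates = [v for v in graph if v not in removed]
        low = [v for v in candidates if degree[v] < k]
        v = low[0] if low else max(candidates, key=lambda u: degree[u])
        if not low:
            spill_candidates.append(v)
        stack.append(v)
        removed.add(v)
        for u in graph[v]:
            if u not in removed:
                degree[u] -= 1

    # Select: pop nodes in reverse order and give each the lowest
    # color not used by an already-colored neighbor. A spill
    # candidate may still get a color ("optimistic" coloring).
    coloring = {}
    while stack:
        v = stack.pop()
        used = {coloring[u] for u in graph[v] if u in coloring}
        free = [c for c in range(k) if c not in used]
        if free:
            coloring[v] = free[0]
    return coloring, [v for v in spill_candidates if v not in coloring]
```

The heuristic runs in polynomial time but may spill even when an optimal assignment exists; the optimistic select phase, which tries to color spill candidates anyway, follows the improvement to Chaitin's allocator described by Preston Briggs.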
Our experience suggests that an effective transition from research idea
to commercial implementation requires good ideas, mutual interest, and a
mechanism for transmitting the detailed technical knowledge required for
implementation. Often, this last step is the hardest.
We have found that the best way to transfer detailed knowledge is through
a carefully written reference implementation, one that explains
data structures, design decisions, and details at a level beyond what
can be published in a conference or journal.
The full-time staff of the Scalar Compiler Group includes
Keith D. Cooper and
Linda Torczon. Another colleague
works with us on adaptive compilation and on search-based algorithms
for a variety of problems; however, she has broad interests that
go far beyond compilation.
As with any successful research group, we have a cadre of fine
graduate students, including Jason Eckhardt,
Jeff Sandoval, Yi Guo, and Dave Peixotto.
We give talks on our work.
We write software and make it
available via the web.
Alumni and Their Whereabouts
- Preston Briggs, PhD 1992, Google
- Karim Esseghir, MS 1993, ... lost to us...
- Dan Grove, MS 1993, mysterious startup company
- Chris Vick, MS 1994, Sun Microsystems
- Cliff Click, PhD 1995, Azul Systems
- L. Taylor Simpson, PhD 1996, Qualcomm
- Nathaniel McIntosh, PhD 1997 (joint with Ken Kennedy), HP
- John Lu, PhD 1998, LSI Logic
- Edmar Wienskoski, PhD 1998, Freescale
- Jingsong He, MS 2000, Dell Computer
- Philip Schielke, PhD 2000, Texas Instruments
- Tim Harvey, PhD 2003, Rice
- Li Xu, PhD 2003, University of Massachusetts, Lowell
- Alex Grosul, PhD 2005, nVidia
- Todd Waterman, PhD 2005, Texas Instruments
- Anshuman Das Gupta, PhD 2006, Qualcomm
This page is maintained by Keith Cooper.