Weren't all these problems solved in 1980?
In general, we are looking at problems that arise in compiling code for
uniprocessor systems. Of course, we are also interested in related
problems, including interprocedural analysis and optimization,
microprocessor architecture, programming environments, and the design
of run-time systems. Opportunities for improving on past work arise
from three principal sources.
- The relative cost of operations has changed radically since
the early 1980's. On a VAX 11/780, for example, floating-point multiply
was expensive and memory operations had no wait states. Today, memory
operations are expensive and floating-point multiplies can usually
be issued every cycle. On some machines, certain categories of branches
are effectively free. Different cost structures make different code
sequences profitable. They also make performance sensitive to issues
that were insignificant in the past.
- Today's microprocessors achieve their high performance by relying
on a number of features that were not popular on minicomputers and
microcomputers in the early 1980's. Examples include non-allocating
load and store instructions and the ability to issue several instructions
in each cycle. (Some of these features appeared in mainframes and
supercomputers in the 1960's and 1970's; those machines included hardware
to manage them, like the "scoreboard" in the CDC 6600. Today's microprocessors
rely on the compiler to manage these resources.)
Compilers must be able to use these features.
- Compiler-based static analysis techniques have improved steadily
in the last decade. Dependence analysis, static single assignment form,
and interprocedural analysis have all found their way into commercial
compilers. These tools provide the optimizer and code
generator with sharper information about the flow of values; this, in turn,
can lead to improved run-time performance.
So, the short answer is that many of these problems were indeed solved
in 1980, but innovations in microprocessors and their compilers have
made it important to seek new solutions.