Abstract
Despite the astonishing increases in processor performance over the last 40 years, delivered application performance remains a critical issue for many important problems. Compilers play a critical role in determining that performance. A modern optimizing compiler contains many transformations that attempt to increase application performance; however, the best combination of transformations is an application-specific issue. Recent systems such as FFTW and ATLAS have demonstrated that code which adapts its behavior to target-machine parameters can deliver better performance than code that adopts a single strategy for all machines. Unfortunately, developing these systems required a significant investment of experts' time. Adaptive compilation (systems in which the compiler chooses an appropriate set of optimizations and parameters for each application) offers the promise of customized performance similar to that of FFTW or ATLAS without requiring a comparable investment. In this paper, we detail an experiment with adaptive, feedback-driven blocksize selection. The experiment demonstrates two critical points. First, an adaptive blocking strategy can automatically produce performance similar to that achieved by ATLAS. This result suggests that we can make customized, ATLAS-like performance available, automatically, across a wider range of programs. Second, the command-line parameterization of existing commercial and research compilers is inadequate to express the complex strategies that an adaptive system needs. For example, the blocksize parameters to the MIPS compiler are applied uniformly to all loops; for more complex applications, the compiler will need to specify blocksizes at a much finer granularity.
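The loop-blocking (tiling) transformation whose blocksize the abstract proposes to tune adaptively can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the fixed blocksize `bs` below are illustrative assumptions. The point is that `bs` is the kind of per-loop parameter an adaptive, feedback-driven system would search over, rather than a single value applied uniformly to all loops.

```c
#include <stddef.h>

#define N 64  /* matrix dimension; illustrative only */

/* Naive triply nested matrix multiply: C = A * B. */
static void matmul_naive(const double A[N][N], const double B[N][N],
                         double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += A[i][k] * B[k][j];
            C[i][j] = s;
        }
}

/* Blocked (tiled) version: the same computation restructured so that
 * bs-by-bs tiles of A, B, and C are reused while they remain in cache.
 * C must be zero-initialized by the caller; bs is the tunable blocksize. */
static void matmul_blocked(const double A[N][N], const double B[N][N],
                           double C[N][N], int bs) {
    for (int ii = 0; ii < N; ii += bs)
        for (int kk = 0; kk < N; kk += bs)
            for (int jj = 0; jj < N; jj += bs)
                /* Mini multiply on one tile; bounds guard ragged edges. */
                for (int i = ii; i < ii + bs && i < N; i++)
                    for (int k = kk; k < kk + bs && k < N; k++) {
                        double a = A[i][k];
                        for (int j = jj; j < jj + bs && j < N; j++)
                            C[i][j] += a * B[k][j];
                    }
}
```

An adaptive system would run the blocked version with several candidate values of `bs`, measure each, and keep the best for the target machine, which is essentially what ATLAS does by hand-built search for its BLAS kernels.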