Abstract
Recently, some scientific computing users have discovered that they can replace 64-bit with 32-bit operations for carefully selected portions of a computation and still retain acceptable accuracy in the final results. In addition, developers of emerging applications such as machine learning have found that 16-bit precision suffices in certain portions of their code. At the other end of the precision spectrum, some users have explored 128-bit arithmetic for particularly demanding applications, while others perform computations at much higher precision, with hundreds or even thousands of digits. Such work has underscored the need to develop new mathematical and software frameworks that support a dynamically variable level of precision and, more generally, to rethink what “reproducibility” means in a variable-precision environment. This article summarizes some of the work being done in this arena and lists research problems that need to be solved.
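As a concrete illustration of the trade-off described above, the following minimal sketch (in Python with NumPy; the article itself presents no code, so the choice of language and example is an assumption) runs the same dot product in 64-bit and 32-bit floating point and reports the relative difference. For many well-conditioned inputs the single-precision result agrees with double precision to several significant digits, which is why selected portions of a computation can often be demoted to lower precision.

```python
import numpy as np

# Illustrative sketch (not from the article): compare one kernel, a dot
# product, computed in float64 and float32.
rng = np.random.default_rng(0)
x64 = rng.standard_normal(1_000_000)   # float64 by default
y64 = rng.standard_normal(1_000_000)

# Demote the same data to 32-bit and redo the computation.
x32, y32 = x64.astype(np.float32), y64.astype(np.float32)

ref = np.dot(x64, y64)      # 64-bit result, used here as the reference
approx = np.dot(x32, y32)   # same work in 32-bit

rel_err = abs(approx - ref) / abs(ref)
print(f"float64 result: {ref:.10e}")
print(f"float32 result: {approx:.10e}")
print(f"relative error: {rel_err:.2e}")
```

Whether such a demotion is acceptable depends on the conditioning of the problem and where the result feeds into the rest of the computation, which is precisely the kind of analysis the variable-precision frameworks discussed in the article aim to support.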
