Computer arithmetic
Computer arithmetic is an area that belongs to both computer science and mathematics.
Historically, computer arithmetic dealt with the design of arithmetic logic units.
Today, computer arithmetic deals mainly with implementing arithmetic operations that are as efficient as possible and introduce the smallest possible rounding errors, including overflow and underflow.
Rounding errors are unavoidable, since numbers with infinitely many digits must be represented with a finite number of bits.
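For example, the decimal fractions 0.1 and 0.2 have no exact binary representation, so even a single addition of double-precision values incurs a rounding error. A minimal illustration in C:

```c
#include <stdio.h>

int main(void) {
    /* 0.1 and 0.2 are each rounded when stored as binary doubles,
       so their computed sum differs slightly from 0.3. */
    double a = 0.1, b = 0.2;
    printf("%.17g\n", a + b);     /* prints 0.30000000000000004 */
    printf("%d\n", a + b == 0.3); /* prints 0 */
    return 0;
}
```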
The most widely used representation of numbers in computers is floating-point arithmetic. The IEEE 754 standard specifies how floating-point numbers must be represented, and requires that floating-point operations be implemented in such a way that the result of each operation is the correct rounding of the exact mathematical result. This may seem obvious, but it is far from simple to achieve (see the FDIV bug, for example).
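Correct rounding in the default round-to-nearest-ties-to-even mode can be observed directly: the exact sum 1 + 2^-53 lies exactly halfway between two consecutive doubles, and the tie is broken toward the value with an even significand. A small sketch in C (the hexadecimal float literals assume a C99 compiler):

```c
#include <stdio.h>

int main(void) {
    /* The exact sum 1 + 2^-53 is exactly halfway between the
       representable doubles 1.0 and 1.0 + 2^-52; correct rounding
       with ties-to-even returns 1.0. */
    printf("%d\n", 1.0 + 0x1p-53 == 1.0); /* prints 1 */
    /* 1 + 2^-52 is itself representable, so no rounding occurs. */
    printf("%d\n", 1.0 + 0x1p-52 > 1.0);  /* prints 1 */
    return 0;
}
```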
Another way of limiting rounding errors is to use multiple-precision arithmetic. The software library GNU GMP is a de facto standard for this.
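As a sketch of how GMP is used through its standard C API (compiled with, e.g., cc example.c -lgmp), the following computes the square root of 2 at 256-bit precision, giving far more correct digits than the roughly 16 significant decimal digits of a double:

```c
#include <gmp.h>
#include <stdio.h>

int main(void) {
    mpf_set_default_prec(256); /* at least 256 bits of precision */

    mpf_t r;
    mpf_init(r);
    mpf_sqrt_ui(r, 2);         /* r = sqrt(2) */

    /* Print sqrt(2) with 50 decimal digits; a double would be
       accurate to only about 16 significant digits. */
    gmp_printf("%.50Ff\n", r);

    mpf_clear(r);
    return 0;
}
```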