Solution to Precision Problem in JavaScript Numbers
November 27, 2020

We know that unlike many other programming languages, JavaScript does not define different numeric types such as integer, short, long, or float.

JavaScript numbers are always 64-bit floating point, so there are exactly 64 bits to store a number: 52 of them store the digits (the fraction), 11 store the position of the decimal point (the exponent), and 1 bit is for the sign.

Value (Fraction): bits 0 – 51 (52 bits)
Exponent:         bits 52 – 62 (11 bits)
Sign:             bit 63 (1 bit)

If a number is too big, it overflows the 64-bit storage and becomes Infinity:

console.log( 1e309 ); // Infinity
console.log( 1e308 ); // 1e+308
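The overflow threshold can be inspected directly: Number.MAX_VALUE is the largest double JavaScript can represent, and anything beyond it collapses to Infinity. A small sketch:

```javascript
// Number.MAX_VALUE is the largest representable double, about 1.8e308
console.log(Number.MAX_VALUE);     // 1.7976931348623157e+308
console.log(Number.MAX_VALUE * 2); // Infinity -- overflow
console.log(-1e309);               // -Infinity -- overflow on the negative side
```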

Precision (or imprecision?!)

Integers (by which I mean numbers written without a period or exponent notation) are accurate only up to 15 digits.

That means,

var x = 999999999999999;   // x will be 999999999999999
var y = 9999999999999999;  // y will be 10000000000000000
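The exact boundary behind the "15 digits" rule of thumb is Number.MAX_SAFE_INTEGER, which is 2^53 − 1; Number.isSafeInteger() tests whether a value falls inside that range:

```javascript
// Integers are exact only up to 2^53 - 1 = 9007199254740991
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991
console.log(Number.isSafeInteger(999999999999999));  // true  -- 15 digits, safe
console.log(Number.isSafeInteger(9999999999999999)); // false -- 16 digits, not safe
console.log(9007199254740992 === 9007199254740993);  // true -- indistinguishable!
```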

The maximum number of significant decimal digits is 17, and floating-point arithmetic is not always 100% accurate:

var x = 0.2 + 0.1;         // x will be 0.30000000000000004

So, this comparison results in false!

console.log( 0.1 + 0.2 == 0.3 ); // false

A number is stored in memory in its binary form, a sequence of bits – ones and zeroes. But fractions like 0.1, 0.2 that look simple in the decimal numeric system are actually unending fractions in their binary form.

In other words, what is 0.1? It is one divided by ten, 1/10 — one-tenth. In the decimal numeral system such numbers are easily representable. Compare it to one-third, 1/3: it becomes the endless fraction 0.33333(3).

So, division by powers of 10 is guaranteed to work well in the decimal system, but division by 3 is not. For the same reason, in the binary numeral system, division by powers of 2 is guaranteed to work, but 1/10 becomes an endless binary fraction.
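This is easy to verify: fractions whose denominators are powers of 2 compare exactly, while tenths do not. A quick check:

```javascript
// Denominators that are powers of 2 are exact in binary floating point
console.log(0.5 + 0.25 === 0.75); // true  -- 1/2, 1/4, 3/4 are exact doubles
console.log(0.125 * 8 === 1);     // true  -- 1/8 is exact as well
console.log(0.1 + 0.2 === 0.3);   // false -- tenths are endless binary fractions
```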

There’s just no way to store exactly 0.1 or exactly 0.2 using the binary system, just like there is no way to store one-third as a decimal fraction.

IEEE 754, the numeric format JavaScript uses, solves this by rounding to the nearest representable number. These rounding rules normally don't let us see that "tiny precision loss", but it exists.
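We can make that hidden loss visible by asking for more digits than JavaScript normally prints:

```javascript
// Asking for 20 digits reveals the rounding error hidden in 0.1 and 0.2
console.log((0.1).toFixed(20)); // 0.10000000000000000555
console.log((0.2).toFixed(20)); // 0.20000000000000001110
```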

The same issue exists in many other programming languages.

PHP, Java, C, Perl, and Ruby give exactly the same result, because their floating-point numbers are based on the same format.

How to work around the problem?

The most reliable method is to round the result with the method toFixed(n):

console.log( 0.1 + 0.2 == 0.3 ); // false

let sum = 0.1 + 0.2;

console.log( sum.toFixed(2) == 0.3 ); // true

toFixed(n) always returns a string, and it ensures the string has exactly n digits after the decimal point. We can use the unary plus to coerce it back into a number:

let sum = 0.1 + 0.2;
console.log( +sum.toFixed(2) ); // 0.3

One more solution

var x = (0.2 * 10 + 0.1 * 10) / 10;       // x will be 0.3
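This trick works here because 0.1 * 10 and 0.2 * 10 happen to produce exact integers, but it is not a general cure. Another widely used pattern (not shown in the snippet above, and the helper name nearlyEqual is my own) is to compare with a small tolerance instead of strict equality, using Number.EPSILON — the gap between 1 and the next representable double — as the tolerance for numbers near 1:

```javascript
// nearlyEqual is a hypothetical helper name; the tolerance pattern is the point
function nearlyEqual(a, b) {
  // Number.EPSILON is about 2.22e-16
  return Math.abs(a - b) < Number.EPSILON;
}

console.log(0.1 + 0.2 === 0.3);           // false -- strict equality fails
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true  -- within tolerance
```

For values far from 1 the tolerance should be scaled accordingly, but for simple sums of small decimals this sketch is enough.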


