```js
const number = 0.1 + 0.2;
console.log(number);
// ☝️ It doesn't output 0.3,
// but 0.30000000000000004!
```
Floating-point arithmetic is the most common way computers store non-integer numeric values, and the IEEE 754 standard that defines it is a base-2 standard. That is, it's designed to work with binary digits ("0" and "1"), which is how computers handle data.
The problem appears when you try to store a base-10 decimal number in a base-2 system. Let's say we want to store the number 0.1. In binary format, the number looks like this:

0.000110011001100110011...
The "0011" sequence loops infinitely! It's exactly the same thing that happens when you want to write 10/3 in base-10: "3.333333..."
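You can see the repeating pattern from JavaScript itself: `toString(2)` prints a number's binary expansion.

```javascript
// Print 0.1's binary expansion. The "0011" pattern repeats
// until the 64-bit double runs out of mantissa bits.
const binary = (0.1).toString(2);
console.log(binary);
```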
Obviously, it's impossible to store an infinite number of digits, so the sequence has to be cut off somewhere. But that also means you lose some precision when storing the number. After that cut-off, the value a JavaScript 64-bit number actually stores for 0.1 is 0.1000000000000000055511151231257827021181583404541015625. In base-2 systems, this problem happens in lots of cases.
If you create a variable with the value 0.1 (which is stored with all those extra digits, remember), when you read that variable back, the value is rounded for display and you'll get "0.1" again.
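You can watch this rounding happen. `toString()` picks the shortest decimal string that maps back to the same stored value, while `toFixed()` with enough digits exposes what is really there:

```javascript
// toString() hides the noise by choosing the shortest
// round-trippable decimal representation:
const rounded = (0.1).toString();
console.log(rounded);            // "0.1"

// Asking for more digits reveals what is actually stored:
const revealed = (0.1).toFixed(20);
console.log(revealed);           // "0.10000000000000000555"
```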
But in some situations the rounding doesn't hide the error, and the result is a number with lots of weird decimal places.
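One practical consequence is that strict equality checks fail. A common workaround (my addition, not from the article) is to compare within a tolerance such as `Number.EPSILON`, the gap between 1 and the next representable double:

```javascript
// The stored error survives arithmetic, so strict equality fails:
console.log(0.1 + 0.2 === 0.3);           // false

// Treat two numbers as equal if they differ by less than
// Number.EPSILON. (This simple tolerance only makes sense for
// values with magnitudes near 1.)
const nearlyEqual = (a, b) => Math.abs(a - b) < Number.EPSILON;
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```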
What if I actually need precision?
There are situations where you do need decimal precision, for example when you are dealing with money. In those cases, don't use floating-point numbers directly. Instead, use an arbitrary-precision library like decimal.js.
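If pulling in a library isn't an option, the underlying idea can be sketched by hand: integers are exact, so monetary amounts can be kept in the smallest currency unit. This is an illustrative sketch of that approach, not decimal.js's API:

```javascript
// Keep monetary amounts as integer cents, where every value
// and every sum is exact (the names here are illustrative):
const itemCents = 10;                  // $0.10
const shippingCents = 20;              // $0.20
const totalCents = itemCents + shippingCents;

// Convert back to a decimal string only for display:
const display = (totalCents / 100).toFixed(2);
console.log(display);                  // "0.30"
```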
If you're interested in reading more, I suggest floating-point-gui.de. It's a simple explanation of the problem and how floating-point arithmetic works.
And if you want a fun tool to play with, there's the IEEE-754 Floating Point Converter. On that site, you can enter a number and see what is actually being stored.