Since a computer can only work with a finite range of values, there is a limit to the numbers it can represent. If you are working with integers, you probably know the limits: an unsigned 16-bit integer tops out at 65,535, and a signed 32-bit integer at 2,147,483,647 (an unsigned one reaches twice that).
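To see what happens when you step past such a limit, here is a minimal Python sketch that simulates two's-complement wraparound of a signed 32-bit integer (Python's own integers are arbitrary precision, so the wrap has to be done by hand):

```python
def wrap_int32(x):
    """Simulate two's-complement wraparound of a signed 32-bit integer."""
    x &= 0xFFFFFFFF  # keep only the low 32 bits
    return x - 0x100000000 if x >= 0x80000000 else x

print(wrap_int32(2147483647 + 1))  # -> -2147483648: overflow wraps around
```

This is the behavior you would get from a 32-bit `int` in C.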
Now what if you want to work with real numbers, i.e. numbers that have decimals? You could represent all numbers scaled by a constant, say divided by one hundred; that would give you two decimal places and is called fixed (decimal) point. But what if you need four-digit precision? Then you are limited to values smaller than about 214,000 for 32 bits (2,147,483,647 / 10,000). To solve this, computers use floating point numbers. A floating point number is represented with a mantissa (e.g. 1.5) and an exponent (e.g. 6), such as 1.5 * 10^6 = 1,500,000. This enables floating point numbers to represent very large numbers and very small numbers, at the cost of precision. Large numbers have less precision than small numbers, which is acceptable for most problems.
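You can inspect this representation directly in Python. Note that the hardware actually stores a binary mantissa and exponent (mantissa * 2^exponent), not a decimal one, which is exactly why many simple decimal fractions are slightly imprecise:

```python
import math

# A double is stored as mantissa * 2**exponent (binary, not decimal).
mantissa, exponent = math.frexp(1500000.0)
print(mantissa, exponent)   # 0.71525... * 2**21 == 1500000.0

# Many decimal fractions have no exact binary representation:
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False
```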
The problem with floating point numbers is that rounding errors occur when a computer performs calculations on them. Because a floating point number carries a slight imprecision, this imprecision accumulates with each calculation. To visualize what happens, I have produced a small animated GIF that shows the rotation of a rectangle. There are two rectangles, a red one and a green one. The green rectangle is rotated by an angle that is calculated from the time passed, while the red one is rotated by an angle that is gradually incremented. If there were no floating point rounding errors, the two would always stay on top of each other.
After around 600,000 operations, the rounding errors add up to a difference of a whole degree, and the drift between the two rectangles becomes visible. After that it only gets worse.
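The same experiment can be reproduced numerically. This is an illustrative sketch, not the original GIF code: one value is accumulated one small step at a time (the red rectangle's strategy), the other is computed in a single operation:

```python
# 0.01 has no exact binary representation, so every addition of it
# contributes a tiny rounding error.
step = 0.01

accumulated = 0.0
for frame in range(1_000_000):
    accumulated += step          # one rounding error per iteration

recomputed = 1_000_000 * step    # a single operation

print(accumulated)               # not exactly 10000.0
print(recomputed)
print(abs(accumulated - recomputed))  # the drift
```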
If this were a real-world scenario, e.g. an animation, it would look fine at first, but after a while it would start to run out of sync with other animations.
If this were a physics or chemistry simulation, it would produce correct results for short test runs, but incorrect results for long simulations. And if this were a finance application, e.g. stock trading ... ouch!
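For money, one standard way out is exact decimal arithmetic. A minimal sketch using Python's standard `decimal` module, compared against binary floats:

```python
from decimal import Decimal

# Exact decimal arithmetic: a million additions, zero drift.
price = Decimal("0.10")
total = sum(price for _ in range(1_000_000))
print(total)   # exactly 100000.00

# The same sum with binary floats drifts away from 100000.0:
print(sum(0.1 for _ in range(1_000_000)))
```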
And this demonstration only accumulates errors through addition; with multiplication the error grows much faster (it multiplies, in fact). The only way to avoid this is either to not use floating point operations, or to make sure the values are not accumulated.
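The green rectangle illustrates the second option: recompute the value from an exact integer counter on every frame, so errors never pile up. A sketch of both strategies (the names here are illustrative, not from the original demo):

```python
STEP = 0.1  # degrees per frame; inexact in binary

def angle_accumulated(frames):
    """Red-rectangle strategy: one rounding error per frame."""
    a = 0.0
    for _ in range(frames):
        a += STEP
    return a % 360.0

def angle_recomputed(frames):
    """Green-rectangle strategy: errors cannot accumulate, because the
    angle is derived fresh from the exact integer frame count."""
    return (frames * STEP) % 360.0

print(angle_recomputed(1_000_000))   # 280.0
print(angle_accumulated(1_000_000))  # drifted away from 280.0
```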