# A floating point loop

Audrey wrote and executed the following Python loop:

```python
i = 1
while i != 0:
    i = i - 0.1
```


Reasoning that we should have i == 0 after subtracting 0.1 from 1 ten times, Audrey expected the loop to complete after 10 iterations. After 10,000 years, though, her loop had yet to complete. Can you explain why?

Note: You should explore the problem by playing with the code on your computer. You might want to include a bail-out iteration count, using a modified version of the loop like so:

```python
i = 1
cnt = 0
while i != 0 and cnt < 20:
    i = i - 0.1
    cnt = cnt + 1
print(cnt)
```


If you do so, you will find that the loop actually executes 20 times, rather than the expected 10. It may help to put a print call inside the loop.

• Putting a print(i) inside the loop shows what happens to i on each iteration:

```python
i = 1
cnt = 0
while i != 0 and cnt < 20:
    i = i - 0.1
    cnt = cnt + 1
    print(i)
print(cnt)
```

Output:

```
0.9
0.8
0.7000000000000001
0.6000000000000001
0.5000000000000001
0.40000000000000013
0.30000000000000016
0.20000000000000015
0.10000000000000014
1.3877787807814457e-16
-0.09999999999999987
-0.19999999999999987
-0.2999999999999999
-0.3999999999999999
-0.4999999999999999
-0.5999999999999999
-0.6999999999999998
-0.7999999999999998
-0.8999999999999998
-0.9999999999999998
20
```
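The drift visible in these values comes from the fact that 0.1 itself cannot be represented exactly in binary floating point, so each subtraction carries a tiny rounding error. A quick sketch using the standard-library `decimal` module makes the stored value visible:

```python
from decimal import Decimal

# Constructing a Decimal from the float 0.1 reveals the exact
# binary value Python actually stores -- it is slightly above 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Because of that error, ten copies of 0.1 do not sum to exactly 1.0:
print(sum([0.1] * 10) == 1.0)  # False
```

Each pass through the loop accumulates a little of this representation error, which is why the printed values of i wander further from the "expected" decimals.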


The loop skips right over 0 because of accumulated floating-point error: after ten subtractions, i is about 1.39e-16, not exactly 0, so the test i != 0 remains true and the loop keeps running. For the loop to end after 10 iterations, the condition needs to treat "super close to zero" as the end of the loop, rather than demanding exact equality.
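One way to sketch that fix is to replace the exact comparison with a tolerance test; `math.isclose` with an absolute tolerance (the 1e-9 below is an arbitrary choice, just small enough to catch the ~1e-16 residue) does the job:

```python
import math

i = 1.0
cnt = 0
# Stop when i is within a small tolerance of zero,
# instead of testing for exact equality with 0:
while not math.isclose(i, 0.0, abs_tol=1e-9):
    i = i - 0.1
    cnt = cnt + 1
print(cnt)  # 10
```

Writing the condition as `abs(i) > 1e-9` would work equally well here; the general lesson is to avoid `==` or `!=` comparisons between floats.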