A floating point loop
Audrey wrote and executed the following Python loop:
    i = 1
    while i != 0:
        i = i - 0.1
Thinking that we should have i == 0 after subtracting 0.1 from 1 ten times, Audrey expected the loop to complete after 10 iterations. After 10000 years, though, her loop had yet to complete. Can you explain why?
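The root cause is that 0.1 has no exact binary representation, so each subtraction uses a double that is only very close to one tenth, and the tiny errors accumulate. A minimal sketch to see this for yourself (plain Python, no extra libraries):

```python
# 0.1 is stored as the nearest binary64 double; printing extra
# digits reveals that the stored value is not exactly one tenth.
print(f"{0.1:.20f}")

# The error compounds: adding 0.1 ten times does not give exactly 1.0.
total = sum([0.1] * 10)
print(total == 1.0)  # → False
```

Since the accumulated value is merely close to 1.0 rather than equal to it, subtracting 0.1 from 1 ten times likewise leaves a tiny nonzero remainder, and the test i != 0 never becomes false.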
Note: You should try to explore the problem by playing with the code on your computer. You might want to include a bail-out number of iterations using a modified version of the loop, like so:
    i = 1
    cnt = 0
    while i != 0 and cnt < 20:
        i = i - 0.1
        cnt = cnt + 1
    print(cnt)
If you do so, you should find that the loop actually executes 20 times rather than the expected 10. It might also help to put a print command inside the loop.
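A sketch of the instrumented loop with a print inside (the bail-out at 20 iterations is kept so it always terminates; repr is used to expose the full stored value of i):

```python
i = 1
cnt = 0
while i != 0 and cnt < 20:
    i = i - 0.1
    cnt = cnt + 1
    print(cnt, repr(i))  # repr shows the accumulated rounding error
```

After the tenth subtraction, i is a tiny nonzero number rather than exactly 0, so the condition i != 0 remains true and the loop only stops when the bail-out counter reaches 20.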
Comments
Putting a print(i) inside the loop, one can see what is going on with each iterated value of i: it skips right over 0 because of floating-point error. For the loop to end in 10 iterations, "i is super close to zero" needs to be defined as the end condition of the loop.
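That fix can be sketched like this: stop when i is within a small tolerance of zero instead of testing exact equality (the 1e-9 tolerance is an arbitrary choice for this example, not something the problem specifies):

```python
i = 1
cnt = 0
while abs(i) > 1e-9:  # "super close to zero" counts as zero
    i = i - 0.1
    cnt = cnt + 1
print(cnt)  # now completes after 10 iterations
```

Comparing floating-point values against a tolerance, rather than with ==, is the standard way to avoid this class of non-terminating loop.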
I'm not hip on my python syntax, but I'll get back to this problem when I figure it out.