You are losing precision. Python's float is an IEEE 754 double-precision number, which only carries about 15-16 significant decimal digits.
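You can check that limit directly from the interpreter (a quick sketch; sys.float_info is part of the standard library):

import sys
print(sys.float_info.dig)       # 15: decimal digits guaranteed to survive a round trip
print(sys.float_info.mant_dig)  # 53 bits of mantissa, roughly 15.95 decimal digits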
When you do:
1,000,000,000 + 0.000001
1,000,000,000.000001 + 0.000001
# and so on; note that you are adding at the 16th significant digit
# but 1,000,000,000.000001 cannot be stored exactly:
# what is stored is something like 1,000,000,000.000001014819 or 1,000,000,000.000000999819
Every addition pushes past that precision limit: the true sum has digits beyond the last 1 of 0.000001 that the double cannot hold, so each result is rounded, and the rounding errors accumulate.
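To see the drift, here is a minimal sketch of that accumulation with plain floats (the starting value and the million iterations are assumptions for illustration):

variable = 1000000000.0
for i in range(1000000):
    variable = variable + 0.000001  # each result is rounded to the nearest double
print(variable - 1000000000.0)      # close to 1.0, but off by the accumulated error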
Things would have been different if, say, you had initialized your variable to 0, because in the computation:
0.000000 + 0.000001
0.000001 + 0.000001
0.000002 + 0.000001
# and so on
although the actual value of 0.000001 isn't exactly 0.000001, the imprecision around the 16th significant digit sits far below the digits that matter:
0.000000 + 0.00000100000000000000011111
0.000001 + 0.00000100000000000000011111  # insignificant error
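The same loop started from 0 therefore stays accurate for practical purposes (a sketch; on a typical IEEE 754 build the error only shows up around the 12th decimal place):

total = 0.0
for i in range(1000000):
    total = total + 0.000001
print(total)  # ~1.000000000008, off only far beyond the digits you care about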
You can also avoid the error entirely by using the decimal module, which does exact decimal arithmetic, instead of binary doubles:
from decimal import Decimal

variable = Decimal(1000000000)
addition = Decimal('0.000001')  # built from a string so it is exactly 0.000001
for i in range(1000000):
    variable = variable + addition  # adds exactly 0.000001 each iteration
variable = variable - Decimal(1000000000)  # subtract the billion again
print(variable)  # 1.000000, exactly
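One caveat on constructing the Decimal: pass a string, not a float. Decimal(1e-6) faithfully copies the float's binary approximation, inexactness included, so the error would sneak right back in:

from decimal import Decimal
print(Decimal('0.000001'))  # exactly 0.000001
print(Decimal(1e-6))        # the float's binary value: not exactly 0.000001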