*The original question was*: 0.999… = 1 does not make sense with respect to my conception of the number line. I do not know much about number classes, but on the number line from, let's say, 0 to 1 there is an infinite number of points, so there is a number right next to 1. We can't write it in its entirety because the decimal expansion is infinite, but the difference between 1 and that number is 1*10^(-infinity) (sorry if I am abusing notation). So that number, to me, should be 0.999…, but it is not. Where am I missing the point?

**Physicist**: In the language of mathematics there are “dialects” (sets of axioms), and in the most standard, commonly-used dialect you can prove that 0.999… = 1. That system is the one generally taught because it’s useful (in a lot of profound ways). If you want to do math where 1/infinity is a definable and non-zero value, you can, but it makes math unnecessarily complicated (for most tasks). The way the number system is generally taught (at the math-major level, where the differences become important), the real numbers are defined such that (very long story short) 1/infinity = 0 and there isn’t a “next number” for any number. That is, if you think you’ve found a number, x, that’s closer to 1 than any other number, then I can find a number halfway between it and 1, namely (1+x)/2, that’s even closer.

That’s not a trivial statement. In the system of integers there *is* a next number: for 3 it’s 4, for 26 it’s 27, etc. In the system of real numbers any *pair* of numbers can be added, subtracted, multiplied, and divided (except by zero) without “leaving” the real numbers, and that’s what lets us squeeze a new number, like the average (1+x)/2, between any two different numbers. In particular, there’s no greatest number less than one. If there were, then you couldn’t fit another number between it and one, and that would make it a big weird exception. Point is: it’s tempting to say that 0.999… is the “first number below 1”, but that’s not a thing.
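The halving argument above can be sketched with exact fractions (a hedged illustration only; the starting candidate 0.9 is arbitrary, and any x < 1 behaves the same way):

```python
from fractions import Fraction

# Start with a candidate x that is supposedly "right next to" 1.
x = Fraction(9, 10)
gaps = []
for _ in range(5):
    x = (1 + x) / 2     # a new number squeezed between x and 1
    gaps.append(1 - x)  # the remaining gap to 1

# Each new gap is exactly half the previous one, so no candidate
# was ever the number "right next to" 1.
print(gaps)
```

Because the gap halves every step, it can be made smaller than any positive amount, which is exactly why no “closest number below 1” exists.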

The term “real numbers” is just a name for a “sandbox” of mathematical tools that have become standard because they’re useful. However! There are other systems where “very very very slightly less than 1”, or more precisely “less than one, but greater than every number that’s less than one”, makes mathematical sense. These systems aren’t invalid or wrong, they’re just… not as pretty and fluid as the (as simple as it reasonably can be), solid, dull-as-dishwater real number system.

In the set of “real numbers” (as used today) a number can be *defined* by the sequence of its decimal expansion taken one digit at a time. For example, the number “2” is {2, 2.0, 2.00, 2.000, …}. The “square root of 2” is {1, 1.4, 1.41, 1.414, 1.4142, …}. The number, and everything you might ever want to do with it (as a real number), can be done with this sequence of ever-longer decimals (although, in practice, there are usually more sophisticated methods).
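As a sketch (this uses Python’s `decimal` module, not the formal construction), here is that sequence of ever-longer truncations for the square root of 2:

```python
from decimal import Decimal, ROUND_DOWN, getcontext

getcontext().prec = 30          # plenty of working digits
root2 = Decimal(2).sqrt()

def truncations(x, n):
    """The first n terms of x's decimal expansion, one digit at a time."""
    return [x.quantize(Decimal(1).scaleb(-k), rounding=ROUND_DOWN)
            for k in range(n)]

print(truncations(root2, 5))    # 1, 1.4, 1.41, 1.414, 1.4142
```

Each term just cuts the expansion off one digit later; the sequence as a whole plays the role of the number.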

Two such sequences are “equivalent”, and describe the same number, if the difference between them gets arbitrarily close to zero; they don’t need to be identical to be equivalent. The sequences {1, 1.0, 1.00, 1.000, …} and {0, 0.9, 0.99, 0.999, …} get closer and closer to each other forever (after n digits they differ by exactly 10^(-n)), so they’re equivalent. In absolutely every way that counts (in terms of the real numbers), the number “0.99999…” and the number “1” or “1.0000…” are exactly the same.
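That “differ by exactly 10^(-n)” claim can be spot-checked with exact arithmetic (an illustration, not a proof):

```python
from fractions import Fraction

def nines(n):
    """0.99...9 with n nines, as an exact fraction: (10^n - 1) / 10^n."""
    return Fraction(10**n - 1, 10**n)

# The gap between 1 and the n-nines truncation is exactly 10^(-n),
# which drops below any positive bound once n is large enough.
gap_list = [1 - nines(n) for n in range(1, 6)]
print(gap_list)   # 1/10, 1/100, 1/1000, 1/10000, 1/100000
```

Since no positive number is smaller than *every* 10^(-n), the only real number the two sequences can disagree by is zero.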

It does seem very bizarre that two numbers that look different can be the same, but there it is. This is *basically* the only kind of exception: every terminating decimal has a second form that ends in repeating 9s. You can write things like “0.5 = 0.49999…”, but the same thing is going on.