I’ve recently seen a lot of demonstrations of why the decimal 0.999… equals 1.
These endlessly cycle the internet, largely because the simple explanations aren’t really compelling. You see smart people responding “can’t you just…”, or simply not convinced by bare assertion.
The truth is that dealing with these things is actually a lot more complex than a glib twitter answer. You should feel uneasy with those explanations. This same subject confused mathematicians of earlier centuries, leading to awkward theories like “infinitesimals”, which ultimately fell out of favour.
I’m going to take you through a proof that 0.999… = 1, with rigour. Rigour is a term used in maths for building from a solid foundation and proceeding from there in sufficiently small steps. Thus, the majority of the article is not the proof but the definitions. How can we talk about infinity in a way that makes sense? The trick, as we’ll see, is to only talk about finite things we already understand, and define infinity in terms of those.
This article is aimed at those with high school level maths. There’s a proof halfway down, but it’s skippable.
An Introduction to Infinity – the Red Jar
Let’s start with a simple scenario. It’s necessarily unrealistic, but maths was never concerned with actual reality, just what follows from the rules set out.
Suppose there is a large red jar. Every minute, another \(10\) litres of water is poured into the jar. The jar magically grows larger, and the source of water never stops or runs out. How much water is eventually in the jar?
Well, it’s basically a nonsense question. What does “eventually” mean? We haven’t defined that yet. But here are some true facts that I can say about the jar.
- After \(10\) minutes, there are \(100\) litres in the jar.
- After \(100\) minutes, there are \(1000\) litres in the jar.
- After \(T\) minutes, there are \(10T\) litres in the jar.
These are all facts involving finite amounts of time and finite amounts of water. They are straightforwardly provable with the tools we already have: basic arithmetic. That last statement is true for all \(T\), but resist the urge to think of it as an infinite set of statements – it’s one statement, with a variable, \(T\), in it. It, too, is provable with arithmetic.
Now we’re going to play a stupid two-player game. Here are the rules: on your turn, name a number, \(N\). Then on my turn, I name a time, \(T\). Finally, you name a time \(U\) that is greater than \(T\). If there are less than \(N\) litres of water in the jar at time \(U\), then you win. Otherwise, I win.
It should be clear that this game is rigged: I can always win. No matter which \(N\) you pick, all I have to do is pick \(T=N/10\). At that time, there will be \(N\) litres in the jar, and the amount only increases from there, so no choice of \(U\) works. I will win.
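To make that strategy concrete, here’s a minimal sketch in Python (the function names are my own invention, purely for illustration):

```python
def water_at(t):
    """Litres in the red jar after t minutes, at 10 litres per minute."""
    return 10 * t

def my_move(n):
    """My winning strategy: pick T = N/10, the moment the jar holds N litres."""
    return n / 10

# Whatever N you name, and whatever U > T you pick afterwards,
# the jar already holds at least N litres, so you lose.
n = 1_000_000
t = my_move(n)
for u in (t + 1, t + 100, t * 2):
    assert water_at(u) >= n
```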
In other words, there is no upper bound on the amount of water in the jar. We call any number \(N\) where you definitely win an upper bound; any number where I can win is not an upper bound. Any bound you might suggest will get shattered at some later point.
We’ll say any such process that increases without limit “tends to infinity”. Infinity, in this definition, is not a number at all; it’s just a description of a sequence of numbers. And the description itself only involves finite numbers, so our definition is solid.
Clearly, not all sequences “tend to infinity”. If we stopped filling the jar after an hour, it would never hold more than \(600\) litres of water, so it has an upper bound. You could win the game by naming \(601\) litres.
Let’s look at another jar, with another process for filling it.
The Blue Jar
This time, there’s a large blue jar. It’s still being filled with water, but differently. The first minute, half a litre of water is added. The second minute, a quarter litre is added, and the third an eighth. Each subsequent minute, we pour in half the amount we poured in the previous minute.
Can we say that this sequence also “tends to infinity”? The answer is no, but we’ll have to do some maths to prove it.
We’ll use a sequence of variables \(x_1, x_2, \cdots\) to stand for the amount of water in the jar after minute 1, minute 2, etc. We can write \(x_i\) to stand for the amount of water in the jar after minute \(i\).
So
\[
\begin{align*}
x_1 &= \frac{1}{2}\\
x_2 &= \frac{1}{2} + \frac{1}{4}\\
x_3 &= \frac{1}{2} + \frac{1}{4} + \frac{1}{8}\\
\vdots &
\end{align*}
\]
We can summarize this as:
\[ x_i = \frac{1}{2} + \cdots + \frac{1}{2^i} \]
Or equivalently \( x_i = x_{i-1} + \frac{1}{2^i} \), which makes it clear that we are starting with the previous volume of water (\(x_{i-1}\)) in the jar, and adding to it.
But that doesn’t really tell us the precise value unless we want to do a lot of sums. Instead, I’m going to prove that \( x_i = 1 - \frac{1}{2^i} \), using a proof by induction. You can skip the proof if you are intuitively happy that each minute, we pour in water equal to half the remaining space in the jar, which is sized to fit one litre of water.
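Before diving into the proof, here’s a quick empirical check of that formula (a sanity check only, not a proof), using Python’s exact `fractions.Fraction` type:

```python
from fractions import Fraction

# Sanity check (not a proof!): x_i = 1 - 1/2^i for the first few i.
x = Fraction(0)
for i in range(1, 20):
    x += Fraction(1, 2**i)           # pour in 1/2^i litres at minute i
    assert x == 1 - Fraction(1, 2**i)
print(x)  # 524287/524288, ever closer to 1 but never reaching it
```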
A Proof by Induction
Theorem:
If \( x_1 = \frac{1}{2}\) and \(x_i=x_{i-1} + \frac{1}{2^i}\), then
\[ x_i = 1 - \frac{1}{2^i} \]
Proof:
Proof by induction works in two steps. First, we prove the base case, \(x_1\). It is clear that \(x_1 = \frac{1}{2} = 1 - \frac{1}{2^1}\). Second, we prove the inductive case: we assume it is true for \(i-1\), and then seek to prove that it is true for \(i\).
By assumption: \(x_{i-1} = 1-\frac{1}{2^{i-1}}\)
So we know that
\[
\begin{align*}
x_i &= x_{i-1} + \frac{1}{2^i} \\
&= 1-\frac{1}{2^{i-1}} + \frac{1}{2^i} \\
&= 1-\frac{2}{2^i} + \frac{1}{2^i} \\
&= 1-\frac{1}{2^i}
\end{align*}
\]
Which completes the inductive case.
So we know that \( x_i = 1 - \frac{1}{2^i}\) is true for \(i=1\) (the base case), and that if it’s true for \(i=1\), it’s true for \(i=2\) (the inductive case); then if it’s true for \(i=2\), it’s true for \(i=3\) (induction again), and so on. So we know it’s true for any value of \(i\), and the theorem is proved.
You may be thinking this proof is in some sense sneaking in an infinity. We jumped from knowing a fact about specific values of \(i\), to knowing a fact about all \(i\). Well, you’d be right. This process of induction is an “axiom”, something you just have to accept if you want to do productive maths. You don’t have to accept it, but then you end up with different, and usually more boring, conclusions. But remember, to prove the theorem for any given \(i\), you only need finitely many uses of the inductive case. So as with the red jar, we’ve dealt with the entire range of values by only considering finite work for each value.
Back to the Blue Jar
End of proof, start reading here if you are skipping the maths.
So we’ve established that the amount of water in the blue jar gets closer and closer to 1 litre as time goes on. We could say that 1 litre is an upper bound. If we played that stupid game from before, you could name 1 litre, and I would be stumped: there’s no time at which the water level reaches that. So you’d win.
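That’s immediate from the closed form we just proved:
\[ x_i = 1 - \frac{1}{2^i} < 1 \quad \text{for every } i, \]
so the water level is always strictly below 1 litre.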
Thus, the blue jar sequence does not “tend to infinity”. This is despite constantly pouring in more water!
Instead, we’ll play another stupid game. In this game, you’ll pick a number \( \varepsilon \) (\(\varepsilon\) is the Greek letter epsilon, traditionally used in maths for small quantities like this). Then I’ll pick a time \(T\). Then, you pick a time \(U\) which is greater than \(T\). If at time \(U\), the distance between the amount of water and 1 litre is less than epsilon, then I win. Otherwise, you win.
Again, this game is rigged: there’s no value of \(\varepsilon\) that wins it for you. I can always pick \(T = \log_2(\frac{1}{\varepsilon}) + 1\). At that time, the amount of water will be \(1-\frac{1}{2^T}\), which works out as \(1 - \varepsilon/2\). The distance from that to 1 is \(\varepsilon/2\), which is less than \(\varepsilon\). At later times \(U\), the water level is even closer to 1.
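If you want to check my strategy numerically, here’s a minimal sketch (the helper name is mine, and I treat \(T\) as a real-valued time, as the text does):

```python
import math

def my_time(eps):
    """My winning pick from the text: T = log2(1/eps) + 1."""
    return math.log2(1 / eps) + 1

for eps in (0.5, 0.01, 1e-9):
    t = my_time(eps)
    water = 1 - 2 ** -t           # blue jar level at time T
    assert abs(water - 1) < eps   # already within epsilon of 1 litre
```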
When I can always win this game, we say the sequence “tends to 1”. Again, this is a description of a sequence of numbers, perfectly valid in a finite-only world.
Decimal Sequences
Ok, I think we’re finally ready to tackle decimal numbers. Imagine we live in a world where only fractions have been discovered. There is decimal notation, like \(0.123\), but that is understood as just a shorthand for \(\frac{123}{1000}\). There are no recurring decimals, or decimals with an unending amount of digits.
But what we do have, is sequences. Here’s one sequence:
\[
\begin{align*}
x_1 &= 0.3 & =& \frac{3}{10}\\
x_2 &= 0.33 & =& \frac{33}{100}\\
x_3 &= 0.333 & =& \frac{333}{1000}\\
\vdots
\end{align*}
\]
It doesn’t take long to recognize that this sequence works very similarly to the blue jar sequence from above, and satisfies
\[ x_i = x_{i-1} + \frac{3}{10^i} \]
We can adapt the proof we used above, and show that this new sequence tends to \(\frac{1}{3}\). That is, we can play the epsilon game, this time comparing distances of the sequence to the value \(\frac{1}{3}\). I’ll always win, as the sequence gets arbitrarily close.
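In fact, the same style of induction gives a closed form here too:
\[ x_i = \frac{3}{10} + \cdots + \frac{3}{10^i} = \frac{1}{3}\left(1 - \frac{1}{10^i}\right), \]
so the distance from \(\frac{1}{3}\) is exactly \(\frac{1}{3 \cdot 10^i}\), which drops below any \(\varepsilon\) you name.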
We’ll call this sequence “\(0.\dot{3}\)”, pronounced “zero point three recurring”.
There are lots of other sequences. Here’s one: \(x_1=5, x_2=5, x_3=5, x_4=5\cdots\) I shouldn’t need to tell you that this sequence tends to \(5\).
Here’s another sequence: \(x_1=3, x_2=3.1, x_3=3.14, x_4=3.141, x_5=3.1415\cdots\) You’ll have to take my word that there’s a straightforward, if long, definition for the elements of this sequence. Unlike my other examples, there’s no fraction that this process tends to, so we’ll just call the sequence \(\pi\).
There are also sequences that don’t settle down, like \(x_1=1, x_2=0, x_3=1, x_4=0, x_5=1\cdots\). We’ll ignore these for now. There’s another epsilon game we can play that can be used to filter out sequences like this, but I won’t go into it.
Real Numbers
It turns out there’s a lot we can do with sequences. If we’ve got two sequences \(x_i\) and \(y_i\), we can define a new sequence \(z_i = x_i + y_i\). It turns out that if \(x_i\) tends to a fraction \(A\), and \(y_i\) tends to a fraction \(B\), then \(z_i\) will tend to \(A + B\). You can try and prove this yourself as an exercise, but it can be found in plenty of textbooks.
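As a numerical illustration of that fact (an illustration, not a proof), take the \(0.\dot{3}\) sequence, which tends to \(\frac{1}{3}\), and the constant sequence \(5\) from above:

```python
from fractions import Fraction

def x(i):
    """0.3, 0.33, 0.333, ... as exact fractions: (10^i - 1) / (3 * 10^i)."""
    return Fraction(10**i - 1, 3 * 10**i)

def y(i):
    """The constant sequence 5, 5, 5, ..."""
    return Fraction(5)

target = Fraction(1, 3) + 5          # A + B = 16/3
for i in (1, 5, 20):
    z = x(i) + y(i)                  # z_i = x_i + y_i
    print(i, float(abs(z - target))) # distance to 16/3 shrinks towards 0
```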
You can make equivalent operations for all the basic arithmetic operations, like subtraction and division. In every case, if we work with sequences that tend to some fractions, and apply these operations to the sequences, the result will tend to the fraction you’d get if you did the same operations on the starting fractions. So in some sense, these sequences behave identically to their fraction counterparts.
So these sequences behave just like numbers, for the most part. Maybe we should just call them numbers? Mathematicians use the term real numbers, to distinguish them from fractions, integers, and other sorts of numbers that mathematicians work with.
But there is a catch. Here are two sequences:
\[
\begin{align*}
x_1&=0.3, & x_2&=0.33, &x_3&=0.333,&\cdots\\
y_1&=0.3, & y_2&=0.333, &y_3&=0.33333,&\cdots\\
\end{align*}
\]
These are two different sequences, but they both tend to \(\frac{1}{3}\). For various reasons, it’s not useful to have duplicate numbers that behave identically. So when defining the real numbers, we’ll consider certain sequences as equivalent.
Two sequences correspond to the same real number if the difference of the sequences tends to zero.
So \(x_i\) and \(y_i\) are equivalent, as \((y_i-x_i)\) tends to 0.
In practice, it gets extremely wordy to always be talking about sequences of things, so real numbers are usually referred to in a shorthand. We’ve seen how “\(0.\dot{3}\)” and “\(5\)” are examples of such shorthand. And it gets tedious to say “sequences are equivalent”. We’ll just say that the real numbers are equal instead, and start using an equals sign. While we’re being liberal with equals signs, we might as well say that a real number that tends to a fraction “equals” that fraction. They behave identically, so why not? Whether they are “really” equal is something we leave to the philosophers.
This sort of thing is called “abuse of notation” in maths. Familiar symbols are repurposed for new meanings that are roughly similar to the old ones. You can’t accomplish anything in maths without shorthand notations, then shorthands of shorthands, lest things become too verbose. But a mathematician should always be capable of translating back to raw elements if needed. At least until expert research-level mathematics, where you become such a pro at reasoning that you drift away from such low-level thinking entirely.
What is 0.999…?
So let’s practise that unpacking process now. When we write \(0.999\cdots = 1\), what are we actually saying?
The left-hand side is an infinite decimal. We know that is shorthand for the real number represented by the sequence \(x_1=0.9, x_2=0.99, x_3=0.999, \cdots\)
And likewise, the right-hand side is the real number represented by the sequence \(y_1=1, y_2=1, y_3=1,\cdots\)
And the equals sign is asserting that these two real numbers are the same, which means that the difference of the sequences tends to zero. Is that something we can prove?
Well, \(y_1-x_1=0.1, y_2-x_2=0.01,\cdots\) It’s fairly easy to prove that \(y_i-x_i = \frac{1}{10^i}\). This sequence tends to zero: pick any \(\varepsilon\), no matter how small, and I can show that the sequence eventually gets smaller than it.
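Spelled out as an epsilon game: given any \(\varepsilon > 0\), I pick \(T\) large enough that \(10^T > \frac{1}{\varepsilon}\); then for any \(U > T\),
\[ y_U - x_U = \frac{1}{10^U} < \frac{1}{10^T} < \varepsilon. \]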
Thus the sequences are equivalent, which means the real numbers are equal, so yes, it is true that \(0.999\cdots = 1\).
Phew!
We’ll finish with some concluding thoughts, but well done, we got there.
I think the above also explains why people intuitively feel that 0.999… and one are different. In terms of raw sequences, they are different. In terms of ink on the page, they are different. But the only sensible way to work with real numbers is to consider two sequences equivalent if they tend to the same thing. And these two sequences do exactly that.
Appendix
What I didn’t do
So this article is a lot longer than any of the explanations you’ve seen elsewhere. The problem with shorter explanations is that they involve doing algebra on infinite sequences of operations. The use of numerical sequences above very carefully avoids that. All the maths is done in finitely many steps, and we use the epsilon game to reason about long-term behaviour while still talking about finite things.
Algebra can be done on infinite sequences, but it is not intuitive. Certain common-sense things no longer apply, and whatever can safely be done needs to be proved. The mechanisms above are one such way of proving things.
As an example of the sorts of dangers, consider this infinite operation:
\[ 1-1+1-1+1\cdots \]
Depending on where you insert the brackets, that could be interpreted as
\[ (1-1)+(1-1)+(1-1)+\cdots = 0+0+0+\cdots =0 \]
or
\[ 1 + (-1 + 1) + (-1 + 1)+\cdots = 1 + 0 + 0 +\cdots = 1 \]
In other words, nonsense. Something as innocuous as putting in brackets is totally invalid!
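You can watch the trouble directly by computing partial sums, which oscillate forever instead of tending to anything:

```python
# Partial sums of 1 - 1 + 1 - 1 + ... flip between 1 and 0,
# so they don't tend to any single value.
total = 0
sums = []
for k in range(8):
    total += (-1) ** k   # +1, -1, +1, -1, ...
    sums.append(total)
print(sums)              # [1, 0, 1, 0, 1, 0, 1, 0]
```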
Another thing I didn’t do is use \(\infty\), the infinity symbol, at all. While it’s ok to talk about infinity as a concept, or as a symbol, I certainly didn’t treat it as a number. Again, you can treat it as a number in some cases, but not all the rules work with it, so you need to prove everything over again and forget your intuitions.
Construction
What we did in this article was essentially define real numbers, starting from nothing more than fractions and sequences. By defining something complex in terms of simpler building blocks, we make a solid foundation. If we ever want to know a fact about real numbers, and we can’t prove it in terms of real number facts already known, then we can resort to thinking about real numbers as sequences, and prove things there.
This general idea is called construction. This is not the only way to construct real numbers, there are other formulations that are equivalent. And reals are not the only things that are constructed – most mathematical objects have a definition that works this way. In my first year at university, we proceeded in order – first, we defined natural numbers (counting numbers from 0). Then we learnt a construction for integers (whole numbers positive and negative) and proved that it was equivalent to natural numbers where they overlapped. Then we constructed fractions from integers, real numbers from fractions, and so on.
At this point, maths starts becoming a much more cohesive subject. You begin to understand that maths isn’t a whole bunch of different rules, tricks and techniques, but instead a vast tree, or web, of knowledge. So vast, that we must use different notation, terminology and ideas to tackle different parts. But there are more commonalities than you’d believe.
Real Catastrophe
In my haste to get to the 0.999… case, I glossed over a lot of the complexities of working with real numbers. There are lots of interesting ideas here that you’ll have to read about elsewhere.
- There are many sequences that don’t actually correspond to any real number. We really only want to consider convergent sequences, which is another variant on the epsilon-delta game.
- Some real numbers, such as \(\sqrt{2}\) or \(\pi\), do not have a fraction that they tend to at all.
- In some sense, there are “more” real numbers than there are fractions, despite there being an infinite amount of both.
- Not all sequences can be easily described on paper. Some cannot be computed at all.
The Epsilon Delta Game
In the article, I introduced several games. These are all variants on the epsilon-delta game, which is the key definition for limits. What I was calling “tends to” is better known as the “limit of a sequence”. It is an extremely powerful technique for dealing with all sorts of things, and features heavily in the definitions of calculus.