Subsection 2.8.1 The doubling map
Let \(H\) denote the half-open unit interval:
\begin{equation*}
H = [0,1)\text{.}
\end{equation*}
The doubling map \(d\) is the function \(d:H\to H\) defined by
\begin{equation*}
d(x) = 2x \bmod 1\text{.}
\end{equation*}
A graph of the doubling map together with a typical cobweb plot starting at an irrational number is shown in Figure 2.8.1.
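If you'd like to experiment numerically, here is a minimal Python sketch (not part of the text's figures) that iterates the doubling map from a chosen starting point; the name double and the starting value 0.3 are illustrative choices. Keep in mind that floating-point numbers carry only about 53 bits, so long orbits computed this way quickly lose accuracy.

def double(x):
    """One step of the doubling map d(x) = 2x mod 1 on [0, 1)."""
    return (2 * x) % 1

# Iterate a starting point a few times to see the beginning of its orbit.
x = 0.3
orbit = [x]
for _ in range(5):
    x = double(x)
    orbit.append(x)
print(orbit)  # approximately [0.3, 0.6, 0.2, 0.4, 0.8, 0.6], up to roundoff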
As it turns out, the doubling map is particularly easy to analyze if we consider its effect on the binary representation of a number. Suppose that \(x\in H\) has binary representation
\begin{equation*}
x = 0.b_1b_2b_3\ldots = \sum_{i=1}^{\infty} \frac{b_i}{2^i}\text{,}
\end{equation*}
where each \(b_i\) is a zero or a one. (In computer parlance, the \(b_i\)s are called the bits of the number.) Of course, some numbers have multiple binary representations. For example,
\begin{equation*}
\frac{1}{2} = 0.1000\ldots = 0.0111\ldots\text{.}
\end{equation*}
To ensure uniqueness, we agree to use the terminating binary expansion whenever a number has one. Thus, representations that end with a repeating 1 are excluded.
Now, the effect of \(d\) on the binary representation of \(x\) is simple:
\begin{equation*}
d(0.b_1b_2b_3b_4\ldots) = 0.b_2b_3b_4\ldots\text{.}
\end{equation*}
That is, the effect of \(d\) is simply to shift the bits of \(x\) to the left, discarding the bit that shifts into the ones place. This observation makes it very easy to find orbits with specific properties. Suppose, for example, we want an orbit of period 3. Simply pick (almost) any number of the form
\begin{equation*}
x = 0.\overline{b_1b_2b_3} = 0.b_1b_2b_3\,b_1b_2b_3\,b_1b_2b_3\ldots\text{.}
\end{equation*}
The only caveat is that we can't have all the \(b_i\)s the same, for that would lead to either zero (which is fixed) or one (which is not in \(H\)). As a concrete example,
\begin{equation*}
x = 0.\overline{001} = \frac{1}{7}
\end{equation*}
has period 3. In fact, it's easy to verify that
\begin{equation*}
\frac{1}{7} \to \frac{2}{7} \to \frac{4}{7} \to \frac{1}{7}
\end{equation*}
under the doubling map.
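As a quick sanity check, here is a small Python sketch that verifies this orbit using exact rational arithmetic; the starting point 1/7 matches the example above, and double is the same illustrative helper as before.

from fractions import Fraction

def double(x):
    """One step of the doubling map, computed exactly on rationals."""
    return (2 * x) % 1

x = Fraction(1, 7)              # binary expansion 0.001001001...
orbit = [x]
for _ in range(3):
    orbit.append(double(orbit[-1]))
print([str(f) for f in orbit])  # ['1/7', '2/7', '4/7', '1/7'] -- back after 3 steps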
Another nice feature of this representation is that there is a simple correspondence between the binary expansion of a number and its position in the unit interval. Every number with a binary expansion starting with a zero lies in the left half of the unit interval, while every number starting with a one lies in the right half. The first two bits of a number specify in which quarter of the interval the number lies; the first three bits specify in which eighth of the unit interval the number lies, as shown in Figure 2.8.2.
More generally, given \(n\in\mathbb N\text{,}\) we can break the unit interval up into \(2^n\) pieces with length \(1/2^n\) and endpoints \(i/2^n\) for \(i=0,1,\ldots,2^n\text{.}\) These are called dyadic intervals and their endpoints (numbers of the form \(i/2^n\)) are called dyadic rationals. The first \(n\) bits of a number specify in which \(n^{\text{th}}\) level dyadic interval that number lies. In fact, the left hand endpoint of a dyadic interval has a terminating binary expansion which tells you exactly the first \(n\) bits of all the points in that interval.
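The following Python sketch illustrates this correspondence; the helper names first_bits and dyadic_interval are made up for the illustration, and the bits are simply peeled off by repeated doubling.

from fractions import Fraction

def first_bits(x, n):
    """First n binary digits of x in [0, 1), extracted by repeated doubling."""
    bits = []
    for _ in range(n):
        x = 2 * x
        bit = int(x)          # the digit that lands in the ones place
        bits.append(bit)
        x = x - bit
    return bits

def dyadic_interval(x, n):
    """The level-n dyadic interval [i/2^n, (i+1)/2^n) containing x."""
    i = sum(b * 2**(n - k - 1) for k, b in enumerate(first_bits(x, n)))
    return Fraction(i, 2**n), Fraction(i + 1, 2**n)

print(first_bits(Fraction(1, 7), 3))       # [0, 0, 1]
print(dyadic_interval(Fraction(1, 7), 3))  # (Fraction(1, 8), Fraction(1, 4))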
Now, suppose that
\begin{equation*}
x_1 = 0.b_1b_2\ldots b_n a_{n+1}a_{n+2}\ldots \quad \text{and} \quad x_2 = 0.b_1b_2\ldots b_n c_{n+1}c_{n+2}\ldots\text{.}
\end{equation*}
Thus, the binary expansions of \(x_1\) and \(x_2\) agree up to at least the \(n^{\text{th}}\) spot but potentially disagree after that. Then, our geometric understanding of dyadic intervals allows us to easily see that
\begin{equation*}
|x_1 - x_2| \leq \frac{1}{2^n}\text{.}
\end{equation*}
Of course, there's also a simple algebraic proof of this fact: since the bits cancel for \(k\leq n\text{,}\) we have
\begin{equation*}
|x_1 - x_2| = \left|\sum_{k=n+1}^{\infty} \frac{a_k - c_k}{2^k}\right| \leq \sum_{k=n+1}^{\infty} \frac{1}{2^k} = \frac{1}{2^n}\text{.}
\end{equation*}
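Here is a tiny numerical illustration of the estimate; the two numbers are made up for the example and share their first four bits, so their difference is at most \(1/2^4\text{.}\)

from fractions import Fraction

# Two numbers whose binary expansions share their first 4 bits (0.0110...):
x1 = Fraction(0b0110, 2**4) + Fraction(1, 2**7)   # 0.0110001 in binary
x2 = Fraction(0b0110, 2**4) + Fraction(5, 2**7)   # 0.0110101 in binary
print(abs(x1 - x2) <= Fraction(1, 2**4))          # True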