
Section 25.3 Toward the Riemann Hypothesis

Riemann, though, was after bigger fish. He didn't just want an error term. He wanted an exact formula for \(\pi(x)\), one that could be computed. Computed by hand, or by machine, if such a machine came along, as close as one pleased. And this is where \(\zeta(s)\) becomes important, because of the Euler product formula: \begin{equation*}\sum_{n=1}^{\infty} \frac{1}{n^s}=\prod_{p}\frac{1}{1-p^{-s}}\end{equation*}
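We can get a feel for the Euler product numerically. Here is a quick sanity check (not a proof) comparing a partial sum of the series with a partial product over primes at \(s=2\), where both sides should approach \(\zeta(2)=\pi^2/6\); the cutoff \(N\) is an arbitrary truncation point chosen for illustration.

```python
# Compare the series and Euler product for zeta(2); both should be
# close to pi^2/6.  N is an arbitrary truncation point.
import math

N = 10**5

# Left side: partial sum of the series for zeta(2).
series = sum(1 / n**2 for n in range(1, N + 1))

# Right side: product of 1/(1 - p^(-2)) over primes p <= N, with the
# primes found by a simple sieve of Eratosthenes.
is_prime = [True] * (N + 1)
is_prime[0] = is_prime[1] = False
for i in range(2, int(N**0.5) + 1):
    if is_prime[i]:
        for j in range(i * i, N + 1, i):
            is_prime[j] = False

product = 1.0
for p in range(2, N + 1):
    if is_prime[p]:
        product *= 1 / (1 - p**-2)

print(series, product, math.pi**2 / 6)
```

Both partial computations land within about \(10^{-5}\) of \(\pi^2/6\), which is at least consistent with the identity.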

Somehow \(\zeta\) does encode everything we want to know about prime numbers. And Riemann's paper, “On the Number of Primes Less Than a Given Magnitude”, is the place where this magic really does happen. (The paper is also available in translation in the appendix of [C.3.4].) Seeing just how it happens is our goal to close the book.

We'll begin by plotting \(\zeta\), to see what's going on. As you can see, \(\zeta(s)\) doesn't seem to hit zero very often. Maybe for negative \(s\) …

Subsection 25.3.1 Zeta beyond the series

Wait a minute! What was that plot? Shouldn't \(\zeta\) diverge if you put negative numbers in for \(s\)? (Recall our definition in Definition 24.2.1.) After all, for \(s=-1\) we'd get things like \begin{equation*}\sum_{n=1}^\infty n\end{equation*} and somehow I don't think that converges.

But it turns out that we can evaluate \(\zeta(s)\) for nearly any complex number \(s\) we desire. The first graphic below color-codes where each complex number lands by matching it to the color in the second graphic.

The important point isn't the picture itself, but that there is a picture. To wit, \(\zeta\) can be defined for (nearly) any complex number as input. Why would that be the case? One way to see that we could define this function for complex values comes by trying to define each term \(\frac{1}{n^s}\) in \(\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}\) more precisely.

Suppose we let \(s\) be a complex number, using the long-standing notational convention \begin{equation*}s=\sigma+it\end{equation*} Then we can rewrite this term as \begin{equation*}\frac{1}{n^s}=n^{-s}=e^{-s\log(n)}=e^{-(\sigma+it)\log(n)}=e^{-\sigma\log(n)}e^{-it\log(n)}\end{equation*} Now we use a fact you may remember from calculus, which is very easy to prove with Taylor series (see Exercise 25.9.1): \begin{equation*}e^{ix}=\cos(x)+i\sin(x)\end{equation*} Applying this, we get \begin{equation*}\frac{1}{n^s}=e^{-\sigma\log(n)}e^{-it\log(n)}=n^{-\sigma}\left(\cos(t\log(n))-i\sin(t\log(n))\right)\end{equation*}
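Both facts are easy to check numerically. The following sketch confirms Euler's formula at a sample point and checks that the decomposition of \(1/n^s\) agrees with the complex power Python computes directly; the sample values of \(x\), \(\sigma\), \(t\), and \(n\) are arbitrary choices for illustration.

```python
# Numerical check (not a proof) of e^{ix} = cos(x) + i sin(x) and of
# the decomposition 1/n^s = n^{-sigma}(cos(t log n) - i sin(t log n)).
import cmath
import math

x = 0.7  # arbitrary sample point
euler_lhs = cmath.exp(1j * x)
euler_rhs = math.cos(x) + 1j * math.sin(x)

sigma, t, n = 1.5, 2.0, 7  # arbitrary sample values
s = sigma + 1j * t

direct = 1 / n**s  # Python computes the complex power directly
decomposed = n**(-sigma) * (math.cos(t * math.log(n))
                            - 1j * math.sin(t * math.log(n)))
print(euler_lhs, euler_rhs)
print(direct, decomposed)
```

The two pairs agree to machine precision, as they should.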

Using this analysis, if \(\sigma>1\), then since \(\cos\) and \(\sin\) always have absolute value at most one, each term is bounded in size by \(n^{-\sigma}\), so the series converges absolutely just as in the real case. So if we take the real and imaginary parts separately, we can rewrite \begin{equation*}\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}=\sum_{n=1}^\infty\frac{\cos(t\log(n))}{n^\sigma}-i\sum_{n=1}^\infty\frac{\sin(t\log(n))}{n^\sigma}\end{equation*}
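Here is a sketch summing the two real series separately for a sample \(s=\sigma+it\) with \(\sigma>1\) and comparing with the direct complex partial sum; the values \(\sigma=2\), \(t=3\), and the cutoff \(N\) are arbitrary choices for illustration.

```python
# Sum the real and imaginary series separately and compare with the
# direct complex partial sum; sigma, t, N are arbitrary sample values.
import math

sigma, t = 2.0, 3.0
N = 10**5

real_part = sum(math.cos(t * math.log(n)) / n**sigma
                for n in range(1, N + 1))
imag_part = sum(math.sin(t * math.log(n)) / n**sigma
                for n in range(1, N + 1))
split = real_part - 1j * imag_part

direct = sum(1 / n**(sigma + 1j * t) for n in range(1, N + 1))
print(split, direct)
```

The two partial sums agree (up to floating-point noise), confirming the signs in the decomposition.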

That doesn't explain the part of the complex plane to the left of \(\sigma=1\) in the picture above. All I will say is that it is possible to extend \(\zeta\) there, and Riemann did it. (In fact, Riemann is largely responsible for advanced complex analysis.) As an example, \(\zeta(-1) = -\frac{1}{12}\), which is very close to saying that \begin{equation*}\zeta(-1) = 1+2+3+4+5+6+7+8+9+10+\cdots=-\frac{1}{12}\end{equation*}
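This is not Riemann's construction, but one common regularized reading goes through the alternating series \(\eta(s)=\sum_{n=1}^\infty (-1)^{n-1}n^{-s}\), which satisfies the standard identity \(\zeta(s)=\eta(s)/(1-2^{1-s})\) (assumed here, not derived in the text). Abel summation assigns \(1-2+3-4+\cdots\) the value \(1/4\) by evaluating \(\sum (-1)^{n-1}n\,x^n = x/(1+x)^2\) as \(x\to 1\), and then \((1/4)/(1-2^2)=-\frac{1}{12}\). A sketch, with \(x\) and \(N\) arbitrary:

```python
# Abel-summation reading of 1 - 2 + 3 - 4 + ... = 1/4, combined with
# zeta(s) = eta(s)/(1 - 2^(1-s)) at s = -1 (a standard identity
# assumed here).  x close to 1 and the cutoff N are arbitrary.
x = 0.999
N = 50_000  # large enough that the tail n * x^n is negligible

abel = sum((-1)**(n - 1) * n * x**n for n in range(1, N + 1))
print(abel)                # close to 1/4
print(abel / (1 - 2**2))   # close to -1/12
```

Whether this manipulation "means" anything is exactly what Exercise 25.9.2 asks you to think about.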

Investigate further whether this has any meaning in Exercise 25.9.2.

Subsection 25.3.2 Zeta on some lines

Let's get a sense for what the \(\zeta\) function looks like. First, observe a three-dimensional plot of its absolute value for \(\sigma\) between 0 and 1 (which will turn out to be all that is important for our purposes).

To get a better idea of what happens, we compare two plots. One is a one-dimensional plot of \(\left|\zeta\right|\) for different inputs with the same \(\sigma\). On the other side is the two-dimensional colored complex plot of \(\zeta(\sigma+it)\), where \(\sigma\) is the real part, chosen by you, and then we plot \(t\) out as far as requested. The line which we are viewing on the complex plane in the first graphic is dashed in the second one.

Remark 25.3.1

It is not really possible to fully visualize a complex function of complex input. So we often pick some line in the complex plane, such as where the real part equals 1 (sort of like \(x=1\)) or where the imaginary part equals 1 (sort of like \(y=1\)). Then we either treat this line as input to a parametric curve, or we look at the output along it, reduce it in one way or another to a single real number, and plot that as normal.

You'll notice that the only places the function has absolute value zero (which means the only places it hits zero) are when \(\sigma=1/2\).
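This observation can be checked numerically. Inside the critical strip the original series diverges, but the alternating series \(\eta(s)=\sum (-1)^{n-1}n^{-s}\) converges for \(\sigma>0\), and \(\zeta(s)=\eta(s)/(1-2^{1-s})\); this standard identity is assumed here rather than derived in the text, and averaging two consecutive partial sums is a simple trick to speed up the alternating convergence. The first zero is known to lie near \(t\approx 14.1347\).

```python
# Evaluate zeta in the critical strip via the alternating (eta)
# series, using zeta(s) = eta(s)/(1 - 2^(1-s)) -- a standard identity
# assumed here.  Averaging S_N and S_{N+1} accelerates convergence.
def zeta_strip(s, N=100_000):
    partial = 0
    for n in range(1, N + 1):
        partial += (-1)**(n - 1) * n**(-s)
    next_partial = partial + (-1)**N * (N + 1)**(-s)
    eta = (partial + next_partial) / 2
    return eta / (1 - 2**(1 - s))

# |zeta| is tiny at the known first zero t ~ 14.1347 on sigma = 1/2,
# but of ordinary size at a nearby non-zero point.
at_zero = abs(zeta_strip(0.5 + 14.134725j))
nearby = abs(zeta_strip(0.5 + 17j))
print(at_zero, nearby)
```

The value at the first zero comes out many orders of magnitude smaller than the value a little further up the line, matching what the plots suggest.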

Another useful way to visualize \(\zeta\) is with the parametric graph of each vertical line in the complex plane as mapped to the complex plane. You can think of this as where an infinitely thin slice of the complex plane is “wrapped” to.

This image is reasonably famous, because the only time the curve seems to hit the origin at all is precisely at \(\sigma=1/2\), and at \(\sigma=1/2\) the curve seems to hit the origin lots of times. For any other \(\sigma\) the curve just misses the origin, somehow.

Now it's true that \(\zeta\) is also zero at negative even integer input, but those zeros are well understood. The pictures demonstrate the mysterious part. And so we have the following crucial question: where is \(\zeta(s)=0\)?

The importance of this problem is evidenced by its having been selected as one of the seven Millennium Prize problems by the Clay Mathematics Institute (each carrying a million-dollar award), as well as by the many recent popular books devoted to it. In what follows we will loosely follow the very interesting exposition of Prime Obsession by John Derbyshire, [C.3.1].