
Section 7.2 From Linear to General

In this section, we will take two ideas we already used with linear congruences, and see how they can be modified to apply in any polynomial situation.

Subsection 7.2.1 Combining solutions

One of the most important things we can do is study congruences with prime (power) moduli, because by combining the Chinese Remainder Theorem with the Fundamental Theorem of Arithmetic (recall Proposition 6.5.1) we can assemble their solutions into solutions for any modulus. Even more interestingly, we can combine the numbers of solutions as well.

Informally, if you want to get the total number of solutions of a polynomial congruence, just write the modulus as a product of prime powers \(n=\prod_{i=1}^k p_i^{e_i}\), find out how many solutions the congruence has with each prime power modulus, then multiply those numbers for the total number of solutions.

Example 7.2.1

For instance, if \(f(x)\equiv 0\) has 2 solutions modulo 3, 1 solution modulo 5, and 3 solutions modulo 7, it would have \(2\cdot 1\cdot 3=6\) solutions modulo \(105=3\cdot 5\cdot 7\).
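If you would like to experiment with this, the following short brute-force check is a sketch of ours in Python (not part of the text's development), using the sample polynomial \(f(x)=x^2-1\) and \(n=105\); it counts solutions modulo each prime power factor and modulo \(n\) itself, and confirms that the counts multiply as claimed.

    # Brute-force check of the counting rule for a sample polynomial.
    def count_solutions(f, m):
        """Count residues x in 0..m-1 with f(x) congruent to 0 modulo m."""
        return sum(1 for x in range(m) if f(x) % m == 0)

    f = lambda x: x**2 - 1                 # a sample polynomial
    prime_powers = [3, 5, 7]               # the prime power factorization of n = 105
    n = 3 * 5 * 7

    counts = [count_solutions(f, q) for q in prime_powers]
    product = 1
    for c in counts:
        product *= c

    print(counts, product, count_solutions(f, n))    # [2, 2, 2] 8 8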

We will state this for the general case of a coprime factorization of \(n\), though again the prime power factorization is usually the most useful: if \(n=\prod_{i=1}^k n_i\) with the \(n_i\) pairwise coprime, then the number of solutions of \(f(x)\equiv 0\text{ (mod }n)\) is the product of the numbers of solutions of the congruences \(f(x)\equiv 0\text{ (mod }n_i)\).

Subsection 7.2.2 Prime power congruences

We have already discussed prime power congruences in Subsection 6.5.2. Recall that in Examples 6.5.3 and 6.5.4 we took the (obvious) solution of \(2x\equiv 3\text{ (mod }5)\) (namely, \(x=[4]\)), and from it relatively easily got solutions (mod \(25\)) and even (mod \(125\)).

But that is essentially the same as asking for solutions to \(2x-3\equiv 0\), a linear congruence. Let's see whether we can extend the same idea to more general polynomial congruences.

The key was taking the already known fact \(5\mid 3-2\cdot 4\) and then cancelling out \(5\) from the entire congruence to get \begin{equation*}\frac{3-2\cdot 4}{5}\equiv 2k\text{ (mod }5\text{)}\; .\end{equation*} We were able to solve the resulting congruence \(-1\equiv 2k\) (mod \(5\)), which had solution \(k\equiv 2\) (mod \(5\)). Finally, we plugged that back in to get the solution to \(2x\equiv 3\) (mod \(5^2\)), namely \begin{equation*}x=4+5k=4+5\cdot 2=14\text{ (mod }5^2\text{)}\; .\end{equation*}
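As a quick sanity check on this computation, here is a minimal sketch of ours in Python; it finds \(k\) by brute force from the congruence above and confirms the lifted solution.

    # Find k with (3 - 2*4)/5 congruent to 2k (mod 5), then check
    # that x = 4 + 5k really solves 2x congruent to 3 (mod 25).
    lhs = (3 - 2 * 4) // 5                       # exact division; this is -1
    k = next(k for k in range(5) if (2 * k - lhs) % 5 == 0)
    x = 4 + 5 * k
    print(k, x, (2 * x - 3) % 25)                # 2 14 0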

But can we use this to get solutions to more advanced congruences as well, like the simple quadratics we've started exploring in this chapter? The answer is yes, with a minor caveat. The preceding discussion was just a basic form of the following.

If \(x_{e-1}\) is a solution to \(f(x)\equiv 0\text{ (mod }p^{e-1})\) and \(\gcd\left(p,f'(x_{e-1})\right)=1\), then \(x_e=x_{e-1}+kp^{e-1}\) is a solution to \(f(x)\equiv 0\text{ (mod }p^{e})\), where \(k\) is a solution of the linear congruence \begin{equation*}\frac{f(x_{e-1})}{p^{e-1}}+kf'(x_{e-1})\equiv 0\text{ (mod }p\text{)}\; .\end{equation*}
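If it helps to see this as an algorithm, here is a minimal sketch of one lifting step (in Python; the helper name lift_one_step and the sample polynomial are ours, not the text's).

    # One lifting step: given a solution x of f(x) congruent to 0 (mod p**(e-1))
    # with gcd(p, f'(x)) = 1, return the solution x + k * p**(e-1) modulo p**e.
    def lift_one_step(f, df, p, e, x):
        q = p ** (e - 1)
        k = next(k for k in range(p) if (f(x) // q + k * df(x)) % p == 0)
        return x + k * q

    # Sample: lift the solution x = 3 of x^2 - 2 congruent to 0 (mod 7) up to mod 49.
    print(lift_one_step(lambda x: x*x - 2, lambda x: 2*x, 7, 2, 3))    # 10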
Example 7.2.4

Let's use this to take solutions to \(x^2+1\equiv 0\text{ (mod }5)\) and get solutions modulo \(25\) and \(125\).

First, by inspection the solutions modulo \(5\) are \([2],[3]\) (or \([\pm 2]\)). So solutions modulo 25 will look like \(3+k\cdot 5\) or \(2+k\cdot 5\). Further, \(f'(x)=2x\), so for either solution modulo 5 the technical derivative condition is met.

Let \(x_1=3\). Then the condition for \(k\) is \begin{equation*}\frac{f(x_1)}{5}+k\cdot (2x_1)\equiv 0 \text{ (mod }5)\end{equation*} which simplifies to \(2+6k\equiv 0\) and has solution \(k\equiv -2\equiv 3\). Then our solution to the congruence modulo \(25\) is \begin{equation*}x_2=x_1+3\cdot 5\equiv 18\text{ (mod }25)\; .\end{equation*}

And indeed \(18^2+1=325\) is divisible by \(25\). Try the same procedure with \(x_1=2\) to get the solution \(x_2\equiv 7\). (See also Example 16.1.4.)
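For readers following along at the keyboard, this computation can be checked directly (a sketch of ours in Python).

    # Verify the lift of x1 = 3: find k with f(x1)/5 + k*(2*x1) congruent to 0 (mod 5).
    x1 = 3
    k = next(k for k in range(5) if ((x1**2 + 1) // 5 + k * (2 * x1)) % 5 == 0)
    x2 = x1 + 5 * k
    print(k, x2, (x2**2 + 1) % 25)               # 3 18 0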

Example 7.2.5

The same process with \(e=3\) and \(x_2\equiv 7\) yields, as a condition for \(k\), \begin{equation*}\frac{7^2+1}{25}+14k\equiv 0\text{ (mod }5)\end{equation*} This gives \(k=2\), and indeed \begin{equation*}x_3 = x_2+2\cdot 5^2 = 57\end{equation*} yields \begin{equation*}57^2+1=3250\equiv 0 \text{ (mod }125)\end{equation*}
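Again, a quick check of this step (ours, in Python):

    # Verify the lift of x2 = 7 from modulus 25 to modulus 125.
    x2 = 7
    k = next(k for k in range(5) if ((x2**2 + 1) // 25 + k * (2 * x2)) % 5 == 0)
    x3 = x2 + 25 * k
    print(k, x3, (x3**2 + 1) % 125)              # 2 57 0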

This is a very powerful technique. What is most interesting is that this is even interpretable as Newton's method in calculus. How? Note that the result above can be rearranged as \begin{equation*}x_{e}=x_{e-1}-\frac{f(x_{e-1})}{f'(x_{e-1})}\end{equation*} since \(p^{e-1}\mid f(x_{e-1})\) and the technical condition is tantamount to saying \(f'(x_{e-1})\) has an inverse. (Unlike in the Newton case, it is also possible for there to be solutions here if \(\gcd(p,f'(x_{e-1}))\neq 1\), but only if \(\frac{f(x_{e-1})}{p^{e-1}}\) itself is also divisible by \(p\); we omit details of this case.)
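To emphasize the analogy, here is a minimal sketch of ours (in Python 3.8+, where pow(a, -1, m) computes a modular inverse) that carries out the Newton-style update for \(f(x)=x^2+1\), starting from the solution \(x=2\) modulo \(5\); the division in the formula is interpreted as multiplication by an inverse of \(f'(x_{e-1})\) modulo \(p^e\).

    # Newton-style lifting: x_e = x_{e-1} - f(x_{e-1}) * f'(x_{e-1})^(-1), computed mod p**e.
    f  = lambda x: x**2 + 1
    df = lambda x: 2 * x

    p, x = 5, 2
    for e in range(2, 4):
        x = (x - f(x) * pow(df(x), -1, p**e)) % p**e
        print(e, x, f(x) % p**e)                 # prints 2 7 0, then 3 57 0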

If you didn't notice this, don't feel bad! In the linear case, where \(f(x)=2x-3\), the derivative was just \(f'(x)=2\) and it was not at all obvious that anything more than a trick was involved. Still, it's another fascinating place where ideas from calculus can invade the world of number theory.