Now, these methods are the beginnings of how people *really* factor big numbers. Typically, one does trial division up to a certain size (maybe the first few hundred or thousand primes), then perhaps some modification of Fermat to make sure that there aren’t any factors close to the square root if you are attacking something like RSA where that would otherwise be advantageous.
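In code, this layered approach might look roughly like the following sketch (the helper names `first_small_factor` and `fermat_factor` are my own, and the bounds are arbitrary illustrative choices):

```python
from math import isqrt

def first_small_factor(n, bound=1000):
    """Trial division: return the smallest factor of n up to bound, or None."""
    for p in range(2, bound + 1):
        if n % p == 0:
            return p
    return None

def fermat_factor(n, max_steps=10**6):
    """Fermat's method: look for a with a^2 - n a perfect square,
    which succeeds quickly when n has a divisor near sqrt(n)."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(max_steps):
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return (a - b, a + b)  # n = (a - b)(a + b)
        a += 1
    return None
```

For example, `first_small_factor(1001)` finds the factor \(7\) immediately, while `fermat_factor(5959)` finds the nearby factors \(59\cdot 101\) after only three steps.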

Then what?

Another important modern technique that is beginning to show up in introductory textbooks is Lenstra’s elliptic curve method; see once again either [E.4.19] or [E.2.10, Chapter 18.7] for details at the level of this text.

A rather less elementary, but potentially extremely important, technique for factoring is *Shor’s algorithm*. Explaining this quantum computational algorithm in any detail goes *well* beyond this text; its main significance is that *if* a sizable quantum computer implementing this algorithm could be built, it could perform the specific task of factoring large RSA moduli in polynomial (rather than exponential) time. As of this writing, we’re still a long way from reliable physical implementations of any size, but they are actively being pursued. See [E.6.12, Section 7.3] for details and even worked-out factorization samples.

We won’t say more about those topics, but Algorithm 12.4.7 brought up a concept important in factoring, not just in finding primes. Namely, we can devise *probabilistic/random* methods. That’s right, we are going to try to find a factor *randomly*!

### Subsection 12.6.1 The Pollard Rho algorithm

Here is the essence of this random (or ‘Monte Carlo’) approach; it is highly recursive, like many good algorithms.

#### Algorithm 12.6.1. Generic routine for “random” factoring.

Follow these steps.

1. Pick some polynomial that will be easy to compute mod \(n\text{.}\)

2. Plug in an essentially random seed value. (Often the seed is \(2\) or \(3\text{.}\))

3. Compute the polynomial’s value at the seed.

4. If that value has a non-trivial gcd with \(n\text{,}\) we have a factor. Otherwise, plug the new value *back into* the polynomial, and repeat (and hope it eventually succeeds).
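A minimal Python sketch of this generic routine, assuming the typical choices \(f(x)=x^2+1\) and seed \(2\) (the name `generic_random_factor` is hypothetical, not a standard function):

```python
from math import gcd

def generic_random_factor(n, seed=2, max_steps=10**4):
    """Iterate x -> x^2 + 1 (mod n), checking each value
    for a non-trivial gcd with n."""
    x = seed
    for _ in range(max_steps):
        x = (x * x + 1) % n
        d = gcd(x, n)
        if 1 < d < n:
            return d
    return None  # this seed/polynomial never hit a factor
```

For instance, with \(n=91\) the sequence reaches \(26\) at the second step, and \(\gcd(26,91)=13\) is a factor; for many other \(n\) this naive version simply cycles without success, which motivates the refinements below.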

Below is code for the method we’ll discuss in this section. It has a modification to the generic algorithm which I will discuss below.

The essence of the method is that by plugging in the values of the polynomial modulo \(n\text{,}\) we are generating a ‘pseudo-random’ sequence of numbers.

\(x_0\equiv 2\) (mod \(n\))

\(x_1\equiv f(x_0)\) (mod \(n\))

\(x_2\equiv f(x_1)\) (mod \(n\))

\(x_{i+1}\equiv f(x_i)\text{,}\) all (mod \(n\)).

Such a ‘pseudo-random’ sequence might be *better* than the sequences we used for trial division or Fermat factorization, precisely because it will likely hit some small(ish) factors and some fairly large factors, a good mix. It might also be good that it could give us numbers which, although not a factor of \(n\text{,}\) might at least *share* a factor with \(n\text{.}\)

A first choice of seed and polynomial might be \(x_0=2\) and \(f(x)=x^2+1\text{.}\) These choices could be different, but they are typical; John Pollard’s original paper used \(f(x)=x^2-1\text{,}\) for instance.

#### Example 12.6.2.

Let’s try computing what we get for some specific numbers. Picking \(n=8051\) as in [E.2.4, Example 3.25], we get results as in the following interact.

Notice that for \(n=8051\text{,}\) the term \(x_{4}\equiv x_{19}\) (mod \(n\)), so the sequence, while seeming fairly random, will not come up with every possible divisor. With seed \(3\text{,}\) \(x_{13}\equiv x_{28}\) is the first repeat; if instead we change the modulus to \(8053\text{,}\) there is no repeat through \(x_{50}\text{.}\) So you can see the output will be hard to predict.
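The repetition can be verified with a few lines of Python (a stand-in for the interact above; `first_repeat` is a made-up helper name):

```python
def first_repeat(n, seed=2, limit=60):
    """Iterate x -> x^2 + 1 (mod n) and return the first pair of
    indices (i, j) with x_i = x_j, or None if none occurs in range."""
    seq = [seed]
    for _ in range(limit):
        seq.append((seq[-1] ** 2 + 1) % n)
    seen = {}
    for j, value in enumerate(seq):
        if value in seen:
            return (seen[value], j)
        seen[value] = j
    return None
```

Here `first_repeat(8051)` returns \((4, 19)\text{,}\) matching the repetition noted above.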

Although the outputs of \(x_{i+1}=f(x_i)\) might already seem to be fairly random, we will not actually try to find common divisors of these numbers themselves with \(n\text{.}\) Instead, we will check whether the *differences* \(x_i-x_j\) share a common factor with \(n\text{,}\) using the (highly efficient to compute) greatest common divisor. That gives many more opportunities to find a common factor than just checking the \(x_i\) themselves! And hopefully it’s just as (or more) ‘random’, and just as effective at finding factors.

However, having all possible differences to check might slow things down too much. So instead there is a final modification to the algorithm.

First, since there are finitely many possible outcomes modulo any particular modulus \(d\text{,}\) the sequence of results will eventually repeat not just modulo \(n\text{,}\) but modulo any \(d\text{.}\) In particular, suppose \(d\) is a divisor of \(n\) such that \(\gcd(x_i-x_j,n)=d\) for a specific pair \(x_i\) and \(x_j\) with \(j>i\text{.}\)

Now consider the values of the sequence of \(x_\ell\) modulo \(d\text{.}\) Because polynomials are well-defined in modular arithmetic, we have that

\begin{equation*}
x_i\equiv x_j\text{ (mod }d)
\end{equation*}

implies that, for any \(m\) we have

\begin{equation*}
x_{m+i}\equiv f^m(x_i)\equiv f^m(x_j)\equiv x_{m+j}\text{ (mod }d)
\end{equation*}

as well. When we let \(m=j-i\text{,}\) this becomes

\begin{equation*}
x_j\equiv x_{2j-i}\text{ (mod }d)
\end{equation*}

which means the sequence (modulo \(d\text{,}\) the common divisor, not \(n\)) repeats itself every \(j-i\) terms (after \(i\text{,}\) of course, if \(j-i\lt i\)).

Now let \(s\) be an integer such that \(s(j-i)\geq i\text{.}\) Then \(x_{s(j-i)}\) appears after the periodic behavior (modulo \(d\)) begins, so

\begin{equation*}
x_{s(j-i)}\equiv x_{s(j-i)+(j-i)}\equiv \cdots \equiv x_{s(j-i)+s(j-i)}\text{ (mod }d)\text{.}
\end{equation*}

If we now let \(k=s(j-i)\text{,}\) this congruence means

\begin{equation*}
x_k\equiv x_{2k}\text{ (mod }d)
\end{equation*}

so \(d\) is a divisor of \(x_{2k}-x_k\) specifically, not just \(x_i-x_j\text{.}\)

Finally, this means instead of checking all possible differences for a common divisor, we only have to check \(\gcd(x_{2k}-x_k,n)\) for all \(k\) in the algorithm.
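We can check this numerically for a concrete divisor, say \(d=97\) (which divides the \(n=8051\) of Example 12.6.2), by searching for the smallest \(k\geq 1\) with \(x_k\equiv x_{2k}\) (mod \(d\)). A quick sketch (`first_matching_k` is a hypothetical helper):

```python
def first_matching_k(d, seed=2, limit=100):
    """Return the smallest k >= 1 with x_k = x_{2k} (mod d),
    where x_{i+1} = x_i^2 + 1 (mod d)."""
    seq = [seed % d]
    for _ in range(2 * limit):
        seq.append((seq[-1] ** 2 + 1) % d)
    for k in range(1, limit + 1):
        if seq[k] == seq[2 * k]:
            return k
    return None
```

Modulo \(97\) the sequence runs \(2, 5, 26, 95, 5, 26, 95, \dots\text{:}\) a tail of length one and then a cycle of length three, so the first match is at \(k=3\text{.}\)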

#### Algorithm 12.6.3. Pollard Rho factoring algorithm.

Follow these steps.

1. Pick some polynomial \(f(x)\) that will be easy to compute (mod \(n\)) (such as \(x^2+1\text{,}\) though other quadratics might be used).

2. Plug in an essentially random seed value \(x_0\text{.}\) (Often the seed is \(2\) or \(3\text{.}\))

3. Compute the polynomial’s value at the seed, \(f(x_0)=x_1\text{.}\)

4. Continue plugging in \(f(x_i)=x_{i+1}\text{,}\) modulo \(n\text{.}\)

5. For each \(k\text{,}\) check whether
\begin{equation*}
1<\gcd(x_{2k}-x_k,n)=d<n\text{;}
\end{equation*}
if so, \(d\) is a non-trivial factor of \(n\text{.}\)
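These steps are usually implemented with Floyd’s tortoise-and-hare trick, advancing one variable one step and another two steps per round so that \(x_k\) and \(x_{2k}\) are always in hand. A sketch under the usual choices (quadratic \(x^2+1\text{,}\) seed \(2\)):

```python
from math import gcd

def pollard_rho(n, seed=2, max_steps=10**5):
    """Pollard rho with Floyd cycle detection: at round k,
    x holds x_k and y holds x_{2k} (mod n)."""
    f = lambda t: (t * t + 1) % n
    x = y = seed
    for _ in range(max_steps):
        x = f(x)        # advance one step
        y = f(f(y))     # advance two steps
        d = gcd(x - y, n)
        if d == n:
            return None  # sequence collapsed; retry with another seed
        if d > 1:
            return d
    return None
```

Both \(8051\) and \(9991\) give up the factor \(97\) at \(k=3\) this way.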

Since the algorithm doesn’t always find a factor for any given combination of seed, number, and polynomial, there is nothing to prove per se. However, probabilistically (just as with Miller-Rabin) it should succeed for \(k\) in the neighborhood of the square root of the smallest prime factor of \(n\text{.}\) (This heuristic is also the justification given in the original papers introducing this method and its variants.) So if \(n\) has a big, but not too big, divisor, this test should help us find that divisor.

#### Example 12.6.4.

Let’s try this with \(n=9991\text{.}\) Keeping \(x^2+1\) and seed \(2\text{,}\) the numbers we would get for the first ten iterations are

\begin{equation*}
x_0=2,x_1=5,x_2=26,x_3=677,x_4=8735,x_5=8950
\end{equation*}

\begin{equation*}
x_6=4654,x_7=9220,x_8=4973,x_9=3005,x_{10}=8153\text{.}
\end{equation*}

This gives differences as follows:

\(\displaystyle x_2-x_1=26-5=21\)

\(\displaystyle x_4-x_2=8735-26=8709\)

\(\displaystyle x_6-x_3=4654-677=3977\)

\(\displaystyle x_8-x_4=4973-8735=-3762\)

\(\displaystyle x_{10}-x_5=8153-8950=-797\)

These factor as follows:

\begin{equation*}
21=3\cdot 7,\quad 8709=3\cdot 2903,\quad 3977=41\cdot 97,\quad 3762=2\cdot 3^2\cdot 11\cdot 19,\quad 797\text{ is prime.}
\end{equation*}

That is an impressive list of eight different prime factors that could potentially be shared with \(9991\) in just five iterations. These differences have the following gcds with \(9991\text{:}\)

\begin{equation*}
\gcd(21,9991)=1,\quad \gcd(8709,9991)=1,\quad \gcd(3977,9991)=97,\quad \gcd(3762,9991)=1,\quad \gcd(797,9991)=1\text{.}
\end{equation*}

Indeed the *third* one already caught a *common* divisor with \(9991\text{;}\) since \(9991=97\cdot 103\text{,}\) we have found the factor \(97\text{.}\)
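These computations are easy to reproduce (a sketch; `rho_differences` is a hypothetical helper, and `math.gcd` happily accepts negative arguments):

```python
from math import gcd

def rho_differences(n, seed=2, rounds=5):
    """Return the differences x_{2k} - x_k for k = 1..rounds and
    their gcds with n, where x_{i+1} = x_i^2 + 1 (mod n)."""
    seq = [seed]
    for _ in range(2 * rounds):
        seq.append((seq[-1] ** 2 + 1) % n)
    diffs = [seq[2 * k] - seq[k] for k in range(1, rounds + 1)]
    return diffs, [gcd(d, n) for d in diffs]

diffs, gcds = rho_differences(9991)
# diffs: [21, 8709, 3977, -3762, -797]; gcds: [1, 1, 97, 1, 1]
```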

### Subsection 12.6.2 More factorization

In general, there are other such probabilistic algorithms, and they are quite successful with factoring numbers which might have reasonably sized but not gigantic factors.

Things don’t automatically work quickly even with today’s far more powerful hardware.

Hmm, what now? Let’s change the seed.

No one method will factor every number quickly. Luckily, we have bigger guns at our disposal in Sage (especially in the component program Pari) which polish things off rather more quickly.

That is a little better than two hours on a mainframe, or even on your computer, I hope you’ll agree.

Real factorization algorithms use several different methods to attack different types of factors. We can try to simulate this in a basic way by creating a Sage interact. Evaluate the first cell to define things (don’t forget to keep the rho method defined); then you can evaluate the second cell, which is the interact.
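In the same spirit, here is a toy Python version of such a multi-method factorization (not the book’s interact: the bound, seeds, and helper names are arbitrary choices for illustration):

```python
from math import gcd, isqrt

def is_prime(n):
    """Trial-division primality check (fine for modest n)."""
    if n < 2:
        return False
    return all(n % p for p in range(2, isqrt(n) + 1))

def rho(n, seed=2):
    """Pollard rho (Floyd variant); retries with a new seed if it fails."""
    x = y = seed
    while True:
        x = (x * x + 1) % n
        y = (y * y + 1) % n
        y = (y * y + 1) % n
        d = gcd(x - y, n)
        if d == n:
            return rho(n, seed + 1)
        if d > 1:
            return d

def factor(n, small_bound=10):
    """Strip tiny prime factors by trial division, then let
    Pollard rho (recursively) handle what remains."""
    factors = []
    for p in range(2, small_bound + 1):
        while n % p == 0:
            factors.append(p)
            n //= p
    if n > 1:
        if is_prime(n):
            factors.append(n)
        else:
            d = rho(n)
            factors += factor(d) + factor(n // d)
    return sorted(factors)
```

For example, `factor(8 * 9991)` returns `[2, 2, 2, 97, 103]`: trial division strips the three factors of \(2\text{,}\) and rho splits \(9991\text{.}\)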

#### Sage note 12.6.8. Building interacts.

An interact is just a Sage/Python function, except with `@interact` before it. There are many different input widgets you can use; this one demonstrates using a list and an input box which takes any input. See the interact documentation or Quickstart for many examples and more details.