Earlier, my essay on the mathematical problem P vs NP was published on Paul’s blog.
A short memory refresher:
The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
In that essay, some examples of this question were mentioned, such as the Traveling Salesman problem and the Subset Sum problem.
As a demonstration of the issue, I chose the difficulty of factoring large numbers, and gave some examples.
My expectation of the time needed for factoring a number as large as 10^18 (or confirming its primality) turned out to be somewhat pessimistic. I expected it to take hours, but it did not. This pessimism was undoubtedly based on experiences with earlier computing platforms, like my Texas Instruments 58C programmable calculator in the eighties.
My MacBook Air 2016 with an Intel i7 2,24 GHz processor – which is getting rapidly outdated itself – had little trouble finding the two factors of such a number. Its C# program output:
The factors of 1.000.000.054.000.000.693 are 1.000.000.021 and 1.000.000.033.
Elapsed time 6,1 sec.
For this calculation, I used the C# type ulong, which has a maximum value of 2^64 − 1.
Calculations with this type are performed in the 64-bit memory registers of a modern computer.
For larger numbers, C# has the type BigInteger, which stores a number as an array of words instead of in a single register. It will be obvious that this slows down the calculations significantly.
The short computation time may seem impressive, but the point is that the test numbers I chose were not very large, even for the primitive algorithm I used, called trial division.
This method works just as the name suggests: start dividing the number by 3 (after first dividing out any factors of 2), and if the division does not yield an integer, raise the divisor by two. Keep doing this until a factor is found or until the square root of the test number is reached, whichever comes first. Reaching the square root without finding a factor means the number is prime (or a perfect square, if the root itself divides evenly).
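The method described above can be sketched in a few lines. The author's program is C#; this is an illustrative Python translation, not his code:

```python
def trial_division(n: int) -> list[int]:
    """Return the prime factors of n in ascending order, by trial division."""
    factors = []
    # Divide out factors of 2 first, so we can step by 2 afterwards.
    while n % 2 == 0:
        factors.append(2)
        n //= 2
    d = 3
    while d * d <= n:          # stop at the square root
        if n % d == 0:
            factors.append(d)
            n //= d
        else:
            d += 2             # skip even divisors
    if n > 1:
        factors.append(n)      # the remaining quotient is prime
    return factors

print(trial_division(62_773_913))  # → [7919, 7927]
```

Any divisor found this way is automatically prime, because all smaller primes have already been divided out.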
Much better factorization algorithms than trial division have been developed and used in mathematics, like Pollard-Rho and the sophisticated Quadratic Sieve and the General Number Field Sieve (GNFS) methods.
However advanced these algorithms are, they are still unable to crack the really large numbers as used in modern data encryption. See my essay The secrets of RSA encryption for more on this.
And don’t forget we are talking regular supercomputers here! Or large clusters of parallel working desktop computers.
But what about quantum computing? These devices can perform impossible calculations in a breeze, right? For more on this, read my articles Shor’s Algorithm and Shor’s Algorithm, the quantum part.
Besides, keep in mind that regular computers are deterministic, and quantum computers are probabilistic.
That’s why we will use quantum computers to make an educated guess of the result and regular computers to check this.
Even so, for now this is mostly theory; a fully functional quantum computer still lies in the future.
Anyway, from this, the question arises: what can a modest laptop achieve in this field?
And what happens when we replace the simple trial division method with a more powerful algorithm like Pollard-Rho?
The sciencedirect.com website explains:
Pollard’s “rho” method for integer factorization iterates a simple polynomial map and produces a nontrivial divisor of n when two such iterates agree modulo this divisor. Experience and heuristic arguments suggest that a prime divisor p should be detected in O(√p) steps.
The last phrase of this explanation means that, since the number of steps grows with the size of the prime divisor p, small factors can be expected to be found rapidly, after which the quotient can be calculated.
It also means that Pollard-Rho is not great as a primality tester.
But for that, we have Miller-Rabin of course.
Miller-Rabin is a lightning-fast primality tester. Although probabilistic, after a number of iterations the chance of a wrong verdict becomes vanishingly small.
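A minimal Python sketch of the Miller-Rabin test (again illustrative; the author's implementation is C#):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test.
    Each round lets a composite slip through with probability < 1/4,
    so 20 rounds leave an error chance below 4**-20."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                 # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True                          # prime with overwhelming probability
```

The speed comes from modular exponentiation, which needs only O(log n) multiplications per round.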
Abundant code examples of Pollard-Rho and Miller-Rabin can be found on the internet, and I wrote my C# factor-finding function around them. It works like this:
- The number N is checked by Miller-Rabin for being prime.
- If not, Pollard-Rho finds the first (lowest value) factor f1.
- The quotient N/f1 is checked by Miller-Rabin for being prime.
- If not, the quotient is recursively offered to Pollard-Rho, which finds the next factor f2.
- This continues until the last quotient, fx, is prime.
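The steps above can be sketched as follows. This is an illustrative Python stand-in for the author's C# function, using Floyd's cycle-finding variant of Pollard-Rho:

```python
import math
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n: int) -> int:
    """Return one nontrivial factor of composite n, via Floyd's
    tortoise-and-hare cycle detection on x -> x^2 + c (mod n)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        tortoise = hare = random.randrange(2, n)
        d = 1
        while d == 1:
            tortoise = f(tortoise)
            hare = f(f(hare))
            d = math.gcd(abs(tortoise - hare), n)
        if d != n:        # d == n means the cycle failed: retry with new c
            return d

def factorize(n: int) -> list[int]:
    """Recursive factorization along the steps listed above."""
    if n == 1:
        return []
    if is_probable_prime(n):      # Miller-Rabin primality check
        return [n]
    f = pollard_rho(n)            # split off one factor
    return sorted(factorize(f) + factorize(n // f))
```

Note the division of labour: the probabilistic tests make the guesses, and an ordinary multiplication of the returned factors verifies the result deterministically.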
A list of 1000 numbers was batch-processed in parallel on my 4-core i7 Mac; the computer did this in about 5 seconds.
Did I say lightning fast?
Below is a short excerpt of the output list:
…
Factors of 15.212.333.333.505.666.665.243: 15212333333505666665243 (prime)
Factors of 15.236.308.642.147.913.578.829: 11423, 24012407, 55547419589
Factors of 15.260.283.950.790.160.492.407: 3, 3, 17761, 820873, 116299197991
Factors of 15.284.259.259.432.407.405.993: 3, 13, 2339, 54139, 3094848094847
Factors of 15.308.234.568.074.654.319.563: 23, 665575416003245839981
Factors of 15.332.209.876.716.901.233.151: 43, 827, 388081, 450971, 2463541
Factors of 15.356.185.185.359.148.146.711: 15356185185359148146711 (prime)
Factors of 15.380.160.494.001.395.060.299: 73, 210687130054813630963
Factors of 15.404.135.802.643.641.973.887: 3, 3, 11, 31, 4877, 6910747, 148923317
Factors of 15.428.111.111.285.888.887.473: 3, 7, 581869549, 1262605955137
Factors of 15.452.086.419.928.135.801.033: 7, 1201, 1838002428919725919
Factors of 15.476.061.728.570.382.714.617: 19, 19, 8009, 16349, 327403836317
Factors of 15.500.037.037.212.629.628.203: 881, 159796823, 110100346781
Factors of 15.524.012.345.854.876.541.787: 3, 143551, 257407, 140041305097
Factors of 15.547.987.654.497.123.455.353: 17, 41, 127363, 175145155318123
…
Compared with the earlier example with two large factors, the response time of this list seems surprisingly short. This is just another indication that Pollard-Rho is particularly fast at finding smaller factors.
This success made me wonder how far I could go in tackling larger numbers on this simple platform, armed with Pollard-Rho and Miller-Rabin. Because a high number of factors isn’t required to demonstrate the heavy workload, I started to work on RSA numbers.
==========================
Wikipedia:
RSA numbers are a balanced set of large semiprimes (numbers with exactly two prime factors) that were part of the RSA Factoring Challenge. The challenge was to find the prime factors of each number. It was created by RSA Laboratories in March 1991 to encourage research into computational number theory and the practical difficulty of factoring large integers. The challenge was ended in 2007.
The numbers are ordered and named after their number of decimal digits (later after the number of binary digits), like RSA-100 and RSA-288.
==========================
The RSA challenge is based on a balanced pair of factors, thus assuring the maximum difficulty of finding them.
My personal supercomputer WolframAlpha gave me the desired pairs of primes of about equal size when I asked, for example: what is the nearest prime to 1e12 (999999999989) and what is the nearest prime to 1e12 + 20 (1000000000039).
Multiplying these numbers and entering the result into my computer program made Pollard-Rho find the first factor, after which Miller-Rabin evaluated the quotient to be prime and so delivered the second factor.
The same procedure with a few more pairs created this overview:
- 1.000.000.000.027.999.999.999.571 ≈ 1e24
  Factors 999.999.999.989 and 1.000.000.000.039
  Elapsed time 1.8 s
- 170.299.999.999.582.900.000.000.231 ≈ 170e24
  Factors 12.999.999.999.989 and 13.099.999.999.979
  Elapsed time 23 s
- 4.400.000.000.000.060.200.000.000.000.147 ≈ 4,4e30
  Factors 2.200.000.000.000.007 and 2.000.000.000.000.021
  Elapsed time 89 s
- 100.000.000.000.001.600.000.000.000.006.039 ≈ 1e32
  Factors 10.000.000.000.000.061 and 10.000.000.000.000.099
  Elapsed time 172 s
- 1.000.000.000.000.000.180.000.000.000.000.000.531 ≈ 1e36
  Factors 1.000.000.000.000.000.003 and 1.000.000.000.000.000.177
  Elapsed time 637 s
=======================
Unfortunately, the calculation times are not particularly reproducible; they can vary by 100, 200, or even 300 percent.
The reason for this can be found in the heuristic nature of Pollard-Rho.
In probability theory, the birthday problem asks for the probability that, in a set of n randomly chosen people, at least two will share a birthday. The surprisingly high answer is known as the birthday paradox: matching pairs are much easier to find than one specific value.
Pollard-Rho follows this strategy. Instead of searching for a factor directly, it generates pseudo-random values and looks for a pair whose difference shares a nontrivial GCD with the number. Small factors are rapidly found, with the worst case requiring on the order of n^(1/4) iteration steps.
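The birthday effect is easy to verify numerically; a short sketch:

```python
def shared_birthday_probability(n: int) -> float:
    """Probability that at least two of n people share a birthday
    (assuming 365 equally likely days)."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(round(shared_birthday_probability(23), 3))  # → 0.507
```

With only 23 people the probability of a matching pair already exceeds 50 percent, which is the same effect that lets Pollard-Rho find a collision modulo a factor so quickly.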
So far, we have been dealing with numbers up to RSA-36. We have come a long way from the simple Texas Instruments 58C programmable calculator and the equally primitive method of trial division for determining the factors of a number.
And it is obvious that even if the Mac keeps calculating undisturbed for a whole year, we can raise the result to RSA-40, but not much higher.
We can improve our program by having the Mac use its multicore capabilities. The search for matching pairs will speed up, of course.
And what about using another smart factoring algorithm?
Elliptic curves, Quadratic Sieve perhaps?
This justifies further research.
Eef, January 2026
Sources
- Wikipedia
- reddit.com (mr_bitshift)
- mathworld.wolfram.com
- math.umd.edu
Editorial comment:
Eef closes the argument above with ‘This justifies further research‘. This is not so much a conclusion as a promise to dig even deeper. So it is safe to assume that Eef’s last words on RSA and P vs NP haven’t been written yet.
If you leave your email address in the ‘Wish to receive our newsletter‘ panel below, you’ll be notified whenever a new post is published, and that includes Eef’s articles.
Paul
2026-02
#arty, #chatGPT, #internet, #math, #mathematics, #nvsp, #primenumber, #primes, #rivestshamiradleman, #rsa, #rsaencryption
