P vs NP – A promise fulfilled

P versus NP – An early encounter

My first programming experience was in the early 80s. I had bought myself a Texas Instruments 58-C programmable calculator, and I was curious what could be done with such a device. That turned out to be very modest.
The calculator was capable of 480 program steps but then held zero memories for data storage. Each memory used for data consumed 8 program steps, so storing e.g. 10 integers diminished the programming space by 80 steps.
The TI-58C's memory was retained even after switching off the calculator, but before creating another program, the memory first had to be erased.


Note:
This article is part of a series of articles. Please check here for the other articles in this series and the best order in which to read them.


My TI-58C was a cut-down version of the more sophisticated TI-59. The 59 had twice the memory space, and it had a magnetic card reader for saving one's precious programs. Of course I would have preferred these fine qualities, but unfortunately my budget was insufficient.

Anyway, my newly acquired calculator was a nice device, and with it came a manual with some programming examples. The first one I tried was a calculation of my biorhythm. I hadn't the slightest idea what someone's biorhythm represented, but this was nicely explained in the booklet as "…people's daily lives are significantly affected by rhythmic cycles with periods of exactly 23, 28 and 33 days, typically a 23-day physical cycle, a 28-day emotional cycle, and a 33-day intellectual cycle".
I typed in the full example, fired up the program, and promptly found that on that particular day, all three of my personal cycles coincided at their absolute lowest values.

This observation didn't discourage me from proceeding with the second example: a program for testing whether a given natural number is prime.

I studied the algorithm, which took a straightforward approach. The number to check (the dividend) was first tested for being odd, and from there on divided by every odd number starting with 3.
When the outcome of this division had a remainder, the divisor was raised by 2 and the loop continued with the next attempt. This iteration went on until the square root of the dividend was reached or a division without remainder was found, whichever came first.
When even division by the truncated root still left a remainder, the number under test was labeled prime.

Mind that this iteration does not have to go beyond the square root: if there existed a factor bigger than the root, its smaller co-factor would have been found earlier.
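
For the curious, here is that same approach as a minimal Python sketch. My calculator program worked along these lines, though of course in the TI's own key-press language:

```python
import math

def is_prime(n):
    """Trial division: try odd divisors from 3 up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2          # 2 is the only even prime
    d = 3
    while d <= math.isqrt(n):  # no factor can hide beyond the square root
        if n % d == 0:
            return False       # exact division found: n is composite
        d += 2                 # next odd candidate divisor
    return True
```
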
I tested some numbers, prime candidates, and I was not surprised to find that the time it took to confirm a prime grew with the size of the number.

Finally I tested the number 1.000.000.021, in my opinion another candidate.
This turned out to be correct, and it took my calculator no less than 20 hours to deliver the result!
I know, this isn't very impressive, but it still is a whole lot faster than I could have done it myself, armed with pencil and paper.

For the sake of confirmation I fed my calculator the next candidate, 1.000.000.033, and again it took the device 20 hours to label this number as prime.

Decades later I made a similar calculation on my MacBook Air. The answer appeared so fast that I couldn't even blink an eye. So I decided to instruct the program to make a list of all primes less than 1.000.000.022. The Mac did this in 4 hours: it found 50.847.537 primes in the range from 2 up to 1.000.000.022. Roughly estimated, the TI calculator would have needed about 60.000 years for this. And then again, a Mac is just a Mac, an abacus compared to a modern supercomputer. Speaking about progress…
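
I won't claim this is exactly what my Mac program did, but the sieve of Eratosthenes is the classic way to produce such a list. A minimal Python sketch (note that the full range up to 1.000.000.022 needs roughly a gigabyte of memory for the flags, so try a smaller limit first):

```python
def count_primes_below(limit):
    """Count the primes < limit with a simple sieve of Eratosthenes."""
    is_prime = bytearray([1]) * limit      # one flag per number
    is_prime[0:2] = b"\x00\x00"            # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting at p*p
            is_prime[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return sum(is_prime)

print(count_primes_below(1_000_000))       # 78498 primes below one million
```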

It occurred to me that multiplying my two primes (1.000.000.021 x 1.000.000.033) was a breeze.
Even done with pencil and paper. The answer is 1.000.000.054.000.000.693, slightly more than 10¹⁸. Testing this number for primality would of course take about the same time as testing its smallest factor; after all, the iteration would stop at the first factor detected.
But a prime in the proximity of 1.000.000.054.000.000.693 would take much, much more time to confirm than our number 1.000.000.021: roughly 30.000 times as long, since the iteration now runs up to the square root of 10¹⁸ (about 10⁹) instead of up to the square root of 10⁹ (about 31.623).
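
This asymmetry is easy to feel on any modern machine. A small Python sketch (the timings in the comments are indicative only):

```python
import time

p, q = 1_000_000_021, 1_000_000_033

start = time.perf_counter()
n = p * q                              # multiplying: essentially instantaneous
print(n, f"({(time.perf_counter() - start) * 1e6:.1f} microseconds)")

# Going back from n to p and q by trial division needs about half a
# billion loop iterations, which takes pure Python a few minutes:
start = time.perf_counter()
d = 3
while n % d != 0:
    d += 2
print(d, f"({time.perf_counter() - start:.0f} seconds)")
```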

For completeness I must also point out that our integer number system is characterized by unique factorization. A non-prime number has one unique collection of prime factors; there is no other way in which this non-prime can be factorized.

So what do we have here?
On the one side we have two prime numbers that we can multiply in less than no time into a number as big as 10¹⁸.
On the other side it takes a modern computer, using our simple method, many hours to check a prime near 10¹⁸ for being prime.
So it is extremely hard to factorize a big number, but once that is done, it is very simple to obtain the original number back from its factors.

This was my first encounter with the P vs NP problem.

P vs NP

There is a movie that has the P vs NP problem as its main theme: Travelling Salesman. I haven't seen this movie, but I assume that in the plot not much time is reserved for going into depth about the matter. Also in The Imitation Game, a film about Alan Turing, the problem may get a brief mention.

Wikipedia explains the rather mysterious subject as:
The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

Then Wikipedia puts this somewhat into perspective:
The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time).

Publications about this problem often speak of P = NP or P ≠ NP.

There still is no proof of one or the other, so it is safer to write P vs NP.

Now what do this P and this NP actually mean?
Both concepts are defined in computer science, where they represent complexity classes of the problems to be solved.

P is the class of problems that can be solved in polynomial time, and let us assume for now that this means "quickly solved". I will come back to this later.
NP is the class of problems for which the result of a solution can be checked relatively fast.

Of course this means that every problem in P is also in NP. When a solution can be found quickly, it can also be checked quickly.

So P stands for Polynomial time, which isn't too bad in terms of complexity of the solution. kn² + 4n is an example of a polynomial. Solution time rises when the input (n) of a problem increases, but it doesn't rise dramatically. The coefficient k is some fixed number; it expresses the overhead and efficiency of the algorithm.

Problems that are solved in exponential time are much worse off. Here we find the input n as an exponent in the complexity expression, like kⁿ + 4n. Imagine what this means when n = 1.000 or n = 1.000.000: even with k = 2, kⁿ for n = 1.000 is already a number of more than 300 digits.

NP stands for Nondeterministic Polynomial time (not, as the abbreviation might suggest, "Non-Polynomial" time).

Wikipedia says this about nondeterministic polynomial time:
A nondeterministic Turing machine can move to a state that is not determined by the previous state. Such a machine could solve an NP problem in polynomial time by falling into the correct answer state (by luck), then conventionally verifying it. Such machines are not practical for solving realistic problems but can be used as theoretical models.

Now that is not much of an explanation.
It is as if the mathematicians did their best to make the stuff as incomprehensible as possible. Searching further on the internet I found a better explanation. Here it is in my own words:

Think of a huge maze where you are in the center and must find the way out.
The maze is well structured, and you have an instruction set that describes how to proceed. This is the set: go straight until the third intersection, then turn right. After that, turn left at the second intersection. Then repeat these steps until you are out of the maze.
This maze problem clearly belongs in the P class, since it can be solved "quickly", in polynomial time.

Then we have a maze that seems totally without structure. Every turn and intersection is absolutely random, and many corridors are dead ends. Now we are dealing with a nondeterministic polynomial time problem: it is continuous trial and error, and we need luck in our choices to get out. In both mazes we can "check" a found route quickly once out, by leaving marks behind during our outward tour or by unwinding a coil of wire while advancing. So this one belongs in the NP class.

The NP classification doesn't stop here. We can read further on the topic:
NP-Complete.
Mathematicians can prove that some NP problems are NP-Complete. An NP-Complete problem is at least as difficult to solve as any other NP problem. This means that if someone found a method to solve any NP-Complete problem quickly, they could use that same method to solve every NP problem quickly. The Traveling Salesman problem is NP-Complete: once a simple solution is found for it, every other NP-Complete problem can be easily solved along the same lines.

NP-hardness.
This is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem: given a set of integers, is there a non-empty subset that sums to a given target?
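
The subset sum problem nicely shows the verify-versus-solve gap. A minimal Python sketch (the numbers and target are just an example): checking a proposed subset takes one pass over it, while the brute-force search may have to inspect up to 2ⁿ subsets.

```python
from itertools import combinations

def verify(subset, target):
    """Checking a proposed answer: one addition per element (fast)."""
    return sum(subset) == target

def solve(numbers, target):
    """Brute force: may try up to 2^len(numbers) subsets (slow)."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None

print(solve([3, 34, 4, 12, 5, 2], 9))   # finds e.g. (4, 5)
```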

The discussion about these topics becomes very technical and this is not the place to cover all the sophisticated science.

Instead I want to proceed with the problem I bumped into with my TI-58C programmable calculator: splitting up a large integer into its prime factors.

Integer factorization

I have already shown that testing a large number N for being prime, or factorizing it into its prime factors, is no mean feat.
My simple algorithm did this by trying to divide the number by every odd number smaller than the square root of N. This method is called Trial Division.
Mathematicians say the Time Complexity of this algorithm is O(√N).
The O stands for Big O notation, which gives an upper bound on the growth of a given function.

Of course we can easily enhance the Trial Division somewhat. For example, divisors that have 5 as their last digit can be skipped. And when a division does not deliver a factor, all multiples of this divisor up to √N can be sieved out.
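
A common refinement in the same spirit, though not literally the enhancements just described, is to test 2 and 3 and then only candidates of the form 6k − 1 and 6k + 1, which automatically skips all multiples of 2 and 3. A minimal sketch:

```python
def smallest_factor(n):
    """Return the smallest prime factor of n (or n itself when n is prime),
    trying 2 and 3 and then only candidates of the form 6k - 1 and 6k + 1."""
    for p in (2, 3):
        if n % p == 0:
            return p
    d = 5                       # first number of the form 6k - 1
    while d * d <= n:
        if n % d == 0:
            return d            # the 6k - 1 candidate divides n
        if n % (d + 2) == 0:
            return d + 2        # the 6k + 1 candidate divides n
        d += 6
    return n                    # no divisor up to sqrt(n): n is prime
```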

Pollard's Rho algorithm has a Time Complexity of O(∜N), which is much better than (even our enhanced) trial division.
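
For the curious, a minimal Python sketch of Pollard's Rho with Floyd's tortoise-and-hare cycle detection. It is a probabilistic method: a run can fail and must then retry with a fresh constant.

```python
import math
import random

def pollard_rho(n):
    """Find some non-trivial factor of an odd composite n."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)       # constant in the polynomial x^2 + c
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means this run failed; retry
            return d

print(pollard_rho(1_000_000_021 * 1_000_000_033))   # one of the two primes
```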

Currently, the General Number Field Sieve (GNFS) is the fastest known algorithm for factoring large numbers.
Its Time Complexity: O(e^((∛(64/9) + o(1)) · (ln N)^(1/3) · (ln ln N)^(2/3)))
I have absolutely no idea how to interpret this.
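
Still, one can get a rough feel for it by dropping the o(1) term and evaluating the formula for a 2048-bit number. A small Python sketch (this gives an order-of-magnitude impression only, not a real operation count):

```python
import math

bits = 2048
ln_n = bits * math.log(2)        # natural log of a 2048-bit modulus
exponent = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
print(f"roughly {math.exp(exponent):.1e} elementary operations")
# prints something on the order of 10^35
```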

Anyway, even with these fairly optimized algorithms, factorization remains a cumbersome job. This feature is of great importance for the security of our encrypted data.

We are reassured by the security experts about the encryption technology:
With existing computing technology, one estimate holds it would take 300 trillion years to “brute force” an RSA 2048-bit key.

However, the security experts continue with:
A perfect quantum computer could do this in 10 seconds.
And that makes a massive difference: a quantum computer with 4099 perfectly stable qubits could break RSA-2048 encryption in 10 seconds (instead of 300 trillion years – wow).

We may conclude from this that factorization doesn't appear to belong in class P; there is nothing "quick" about a timescale of 300 trillion years for a 2048-bit number.

Forget about peeking into the quantum computer's method and learning its algorithm. The shady world of quantum particles and waves is not accessible: the fragile qubits collapse the very moment they are observed, giving nothing away.

It is also clear that factorization belongs in class NP, since its result can easily be verified in polynomial time: simply multiply the factors back together.

And then to think that there are much tougher (more computationally intensive) problems than factoring, e.g. the traveling salesman problem and most graph-covering problems.

I asked ChatGPT how factorization scales, and this was its answer:
The best-known classical algorithms for this task, such as the General Number Field Sieve, have a runtime that is sub-exponential but still super-polynomial. This means that they are faster than brute-force methods that take exponential time, but still slower than polynomial-time methods.

There still is a group of mathematicians who believe that P = NP. They think that when a result can be verified quickly, there always exists an equally quick way to find it; we just don't know it yet. And as I tried to explain: once a solution for one NP-complete problem has been found, all NP-complete problems are tackled. That's why this group struggles on to find the holy grail.

The group of non-believers, the ones who say P ≠ NP, is much bigger.
One of their arguments: a solution for at least one of those problems would have been found long ago, since the greatest minds have tried. But in vain.

And another (most remarkable): this isn't how it works in real life. There isn't one divine solution that covers them all; we have to struggle for each one of them.

 

Eef
2023-11


Notes:

Time complexity once more

The main difference between a polynomial and an exponential function is that with a polynomial, the (input) variable is raised to a fixed power. With an exponential function, the variable itself is the exponent.

Polynomial vs exponential?

an³ + bn² + cn + d          —> polynomial

aⁿ          —> exponential, where n is the size of the input to the algorithm.

A polynomial-based solution is much preferable to an exponential one. Example:

an⁵ with n = 100 and a = 3 —> 3 × 10¹⁰ = 30.000.000.000 steps, but 5¹⁰⁰ ≈ 7,9 × 10⁶⁹ steps
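
These step counts are easy to verify with a few lines of Python:

```python
a, n = 3, 100
print(f"{a * n**5:,}")     # 30,000,000,000 polynomial steps
print(f"{5**n:.1e}")       # ~7.9e+69 exponential steps
```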

Eef
2023-11


Author’s note

Entirely in the style of The Hitchhiker's Guide, I am adding a fourth part to my trilogy of math articles as published in Paul's blog.
The objective is like that of the previous three: trying to explain complex stuff in simpler terms than can be found on the internet.
This time it is about a somewhat foggy subject that is far removed from everyday reality but is occasionally mentioned in some films: P vs NP, a major unsolved problem in computer science.
I gave it a try, and I hope that I can shed some light on the matter with this presentation.

Eef
2023-11


Editorial comment:

Not being much of a math whizz myself, I had to look up a number of concepts mentioned above. I found the following sites to be helpful:

As well as these YouTube videos:

Paul
2023-11

Click here for all math essays on this site