Mathematical issues

Some remarkable mathematical issues

Although the Dutch newspapers de Volkskrant and NRC can be blamed for being too politically biased (towards D66) to be as independent as journals ought to be, it must be said that both have excellent science sections.
On an irregular basis, even topics of a mathematical nature are published there.

Of course, for a layman’s audience mathematics lacks what other science disciplines have: the ability to be easily visualized.
So the topics in these sections were about trivial stuff like the latest calculated decimals of π, large regions on the number line without any prime number (so-called prime deserts), a newly found Mersenne prime, etc.
But that hardly scratches the surface.

So let me try to bring up some interesting mathematical issues, some of them proven, some not. Not as an in-depth lecture, but as readable text meant to stimulate curiosity.

  • Fermat’s Last Theorem

Probably the most famous example among the hypotheses, conjectures, theorems and proofs is Fermat’s Last Theorem.
Many of us are familiar with the Pythagorean theorem a² + b² = c².
When a, b and c are positive integers, we call them Pythagorean triples.
Infinity is always lurking around in math, and there is an infinite number of these triples, among them the most well-known: {3, 4, 5}.
Other examples are: {36, 77, 85} and {120, 209, 241}.
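For concreteness, a small Python sketch (mine, not from any of the books discussed here) can hunt for these triples by brute force and confirm the examples above:

```python
from math import isqrt

def pythagorean_triples(limit):
    """All triples (a, b, c) with a < b < c <= limit and a^2 + b^2 = c^2."""
    triples = []
    for a in range(1, limit + 1):
        for b in range(a + 1, limit + 1):
            c2 = a * a + b * b
            c = isqrt(c2)          # exact integer square root, no float error
            if c <= limit and c * c == c2:
                triples.append((a, b, c))
    return triples

found = pythagorean_triples(250)
print((3, 4, 5) in found, (36, 77, 85) in found, (120, 209, 241) in found)
# prints: True True True
```

A brute-force search like this of course only illustrates the infinity of triples; it cannot prove it.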

Fermat’s equation is written as xⁿ + yⁿ = zⁿ and the theorem says:
there are no positive integer solutions for n > 2.
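The same kind of brute-force sketch shows the contrast with the Pythagorean case: for exponents above 2 such a search (here over a small, arbitrary range) comes back empty-handed every time. This is an illustration, of course, not a proof:

```python
def fermat_counterexamples(n, limit):
    """Search for x^n + y^n = z^n with 1 <= x <= y and z up to `limit`."""
    nth_powers = {k ** n: k for k in range(1, limit + 1)}  # z^n -> z lookup
    hits = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            z = nth_powers.get(x ** n + y ** n)
            if z is not None:
                hits.append((x, y, z))
    return hits

for n in (3, 4, 5):
    print(n, fermat_counterexamples(n, 100))   # always an empty list
```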

Of course there was a zealous hunt for such integers long before Fermat was born.
But he was the first who claimed, in a note scribbled in the margin of his copy of Diophantus’ Arithmetica around 1637, to have proven that these numbers don’t exist.

Unfortunately his proof was never found, and for more than 300 years gifted mathematicians, professional and amateur alike, struggled to deliver one. All failed.
Until Andrew Wiles managed to solve the puzzle in 1994, 357 years later.

He could succeed because of his extraordinary perseverance, but also because of a paradigm shift that had taken place.

In the times of Euler and Gauss it was still possible to comprehend all branches of mathematics, at least for brains of their immensity.
This is no longer true.

Mathematical disciplines can exist in rather isolated fashion, with barely any interfaces to other branches.
Such was the case with elliptic curves and modular forms, branches that coexisted smoothly without any awareness of one another.
But then Taniyama and Shimura entered the scene and delivered their Taniyama-Shimura conjecture.

Simon Singh says in his introduction of the T-S conjecture:
“Modular forms stand very much on their own within mathematics. They would seem completely unrelated to Wiles’s subject of study in Cambridge: elliptic equations… Modular forms and elliptic equations live in completely different regions of the mathematical cosmos.”

Taniyama and Shimura stated in 1955 that every elliptic curve is tied to a modular form.
In essence, the two seem to be appearances of the same underlying structure.
Proving this conjecture was quite a different story: it seemed way out of grasp of the mathematics of the day, not to be expected within a century.

And then Gerhard Frey came along.
In 1985 he gave a lecture about elliptic curves which, as his audience of course knew, have the form: y² = x³ + ax + b.

Then he continued with something odd. He said: “Let’s assume that Fermat was wrong and that integer solutions do exist for his famous equation with exponent greater than 2. And because I don’t know what the values of these integers are, I call them A, B and C. Fermat’s equation then becomes Aᴺ + Bᴺ = Cᴺ, where A, B and C are integers and N > 2.”

In rearranged form he forced this into an elliptic curve (“by a deft series of complicated maneuvers”, as Simon Singh puts it). It became: y² = x³ + (Aᴺ − Bᴺ)x² − AᴺBᴺx, which is indeed of the elliptic-curve form, since Aᴺ, Bᴺ and Cᴺ are supposed to be integers satisfying Fermat’s equation.
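As a sanity check, a few lines of Python (with small, purely hypothetical values for A, B and N) confirm that the factored and rearranged forms of Frey’s curve agree:

```python
def factored(x, A, B, N):
    """Right-hand side of Frey's curve in factored form: x(x + A^N)(x - B^N)."""
    return x * (x + A ** N) * (x - B ** N)

def expanded(x, A, B, N):
    """The same polynomial expanded: x^3 + (A^N - B^N)x^2 - A^N*B^N*x."""
    return x ** 3 + (A ** N - B ** N) * x ** 2 - (A ** N) * (B ** N) * x

# Integer arithmetic is exact, so the two forms must agree everywhere.
for x in range(-5, 6):
    assert factored(x, 3, 2, 5) == expanded(x, 3, 2, 5)
print("factored and expanded forms agree for all sampled x")
```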

Frey then said that this curve was so weird that it could not possibly be a modular form. He proclaimed that a successful integer solution of Fermat’s equation with exponent greater than 2 would therefore prove the Taniyama-Shimura conjecture false.

Of course he awed his audience with this observation.
But then they were stunned some more, because Frey didn’t give a proof for his claim, and they all hurried home to start working on it, hoping to snatch away the praise before the others.
So did Ken Ribet, but the proof turned out much tougher than he initially thought, and it took him a year to deliver it.

In formal terms (Wikipedia):
“Ken Ribet soon afterwards proved enough of the conjecture to deduce that the Taniyama-Shimura conjecture implies Fermat’s Last Theorem.
This approach provided a framework for the subsequent successful attack on Fermat’s Last Theorem by Andrew Wiles in the 1990s.”

Wiles was an expert on elliptic curves and he saw his chance.
His technique consisted of ‘counting’ the elements in the set of elliptic curves and comparing this to the number of elements in the set of modular forms, thus establishing a one-to-one correspondence, very much the same method as explained earlier in my post ‘Cantor’s Continuum Hypothesis’.
This doesn’t sound too difficult, but applying this one-to-one correspondence turned out to be insanely complex. It took Wiles 7 years of hard work to make it happen.
And with this achievement Fermat’s Last Theorem was proven to be true.

It is obvious that Fermat’s claimed proof from 1637, if it existed at all, would be quite different from the sophisticated machinery that Wiles delivered. There is great doubt that Fermat had a simple proof, and this doubt is well founded: in later years Fermat published a proof for the single exponent n = 4. There would be no need for that if he already had a general proof.

Simon Singh wrote an excellent book about Andrew Wiles’s quest to solve the riddle: Fermat’s Last Theorem.

  • The Riemann Hypothesis

All non-trivial zeros of the zeta function have real part 1/2.

It is obvious that this formulation of the Riemann Hypothesis does not have the transparency of Fermat’s equation; it is nearly incomprehensible to the mathematically uneducated brain.

Riemann suggested his conjecture in a paper written in 1859 under the title “On the Number of Prime Numbers Less Than a Given Quantity”.

Oddly enough, the conjecture was not particularly important for his main topic, but he mentioned it just the same.
He even tried to prove the claim, as he wrote: “…but I have put aside the search for such a proof after a few fleeting vain attempts, because it is not necessary for the immediate objective of my investigation.”

Finally, Riemann delivered in his paper an exceptionally accurate formula with which the number of primes ‘less than a given quantity’ can be calculated.
The non-trivial zeros play an important role in this, but it is not required that they ‘have real part one half’.

Unfortunately his conjecture still hasn’t been proven, despite the tremendous amount of effort put in by many famous and less famous mathematicians. Because of all this work, the conjecture has been promoted to a hypothesis.

Meanwhile the Hypothesis – and also Riemann’s paper – proved extremely hard to grasp, and decades of publications by a few gifted mathematicians didn’t help, since these were understandable only to their peers.

Fortunately the mathematician John Derbyshire decided that Riemann’s work was far too valuable to be reserved for these lucky few, and he set out to explain the matter to a much greater audience. Or die trying.

Derbyshire’s step-by-step approach of serving the ingredients necessary to understand the complicated set-up of Riemann’s work is as good as it gets.
True, he does suggest that some knowledge of analysis, calculus, complex numbers and functions is desirable.
It takes him a whole book to deliver his explanations, but he keeps up the pace and is never boring; he really opens up a fascinating area of math that previously was inaccessible to us non-geniuses.

From this book I have plucked a few more or less exciting items that are worth knowing.

  • The zeta function

ζ(s) = 1 + 1/2ˢ + 1/3ˢ + 1/4ˢ + 1/5ˢ + 1/6ˢ + 1/7ˢ + 1/8ˢ + 1/9ˢ + 1/10ˢ + 1/11ˢ + …
The zeta function gives us the sum over all reciprocals of the natural numbers raised to a certain power. More elegantly written as:

ζ(s) = Σ 1/nˢ   (the sum taken over all natural numbers n ≥ 1)
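For concreteness, here is a small Python sketch (mine, not from the book) that sums the series directly. For arguments s > 1 the partial sums settle towards the true value, e.g. ζ(2) = π²/6:

```python
import math

def zeta_partial(s, terms=1_000_000):
    """Naive partial sum of the zeta series; only converges for s > 1."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

print(zeta_partial(2))      # creeps towards zeta(2) = pi^2/6
print(math.pi ** 2 / 6)
```

For s ≤ 1 this direct summation breaks down, which is exactly where the ‘domain stretching’ discussed below comes in.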

The zeros of a function are the numbers for which the function value is 0.
The example f(x) = 2x² − x − 10 gives us two zeros: 2.5 and −2.
Substitute x with one of these and the function hands over the value 0.

Does the zeta function have zeros? Of course it has, or else there wouldn’t be a Hypothesis. Specific values of the argument s cause zeta to deliver a zero.

The Riemann zeta function has the value zero for every negative even integer as argument.
ζ(s) has an infinity of those zeros, but the negative even integers are not very exciting arguments. They are, let’s say… trivial.
One might object: when ζ(s) is fed with a negative argument, the series immediately becomes divergent. So where then is the zero?
At that point Derbyshire leads us into the concept of domain stretching (known to mathematicians as analytic continuation). He shows that a function often cannot be fully represented by a series. There is much, much more to it.

So what about these ‘non-trivial’ zeros?
Well, the zeta function also has the value zero for certain arguments that are complex numbers: numbers with a real part and an imaginary part, e.g. 0.8475 + 67.365i.
Recall that those numbers are not represented on the one-dimensional number line but on the two-dimensional complex plane. The real part is measured along the x-axis and the imaginary part along the y-axis. These complex zeros are the non-trivial ones.

Riemann asserted that the non-trivial zeta zeros are of the form 1/2 + zi, where z is real. Not just some zeros, all of them! No exceptions!
Mind that the coefficient z needs to be real; if it were a complex number, the hypothesis would instantly be invalidated.

In fact he claimed that all these non-trivial zeros lie on one line in the complex plane. This line is perpendicular to the x-axis at position 1/2 and is now called the critical line.
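We can poke at this numerically. The sketch below evaluates ζ with a truncated Euler-Maclaurin formula – a standard numerical trick, not the method from Prime Obsession – which handles the ‘stretched’ domain and complex arguments alike; 14.134725 is the well-known height of the first non-trivial zero:

```python
def zeta(s, N=1000):
    """Riemann zeta via a truncated Euler-Maclaurin formula.
    Works for real or complex s (away from the pole at s = 1)."""
    total = sum(n ** -s for n in range(1, N + 1))
    total += N ** (1 - s) / (s - 1)     # estimate of the discarded tail
    total -= N ** -s / 2                # endpoint correction
    total += s * N ** (-s - 1) / 12     # first Bernoulli correction
    return total

print(zeta(2))                      # ~1.6449, i.e. pi^2/6
print(zeta(0.5))                    # ~-1.46: the 'stretched' domain at work
print(abs(zeta(0.5 + 14.134725j)))  # ~0: the first zero on the critical line
print(abs(zeta(0.5 + 10j)))         # clearly non-zero away from a zero
```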

  • The critical strip

There is a strip in the complex plane, running along the y-axis, consisting of all points with x position > 0 and < 1. This area is called the critical strip.
It has already been proven that all non-trivial zeros of zeta lie in the critical strip.

In the middle of this strip lies the critical line at x position 1/2.
And it has also been proven that an infinite number of non-trivial zeros lie on the critical line.
But doesn’t this mean that Riemann was right with his claim?
No, since apart from this infinite number on the line, there may be some zeros off it.
Or even an infinite number off it.

That may be confusing, but struggling with infinities always is.
However it turns out, all of them lie within the critical strip.
And the Riemann Hypothesis says that they all lie on the critical line, not one of them off that line.

  • Euler’s Product (The Golden Key)

In the literature it is always emphasized that the zeta function and the prime numbers are tightly connected. Derbyshire’s book about the Riemann Hypothesis even carries the title Prime Obsession.
So where are these primes?
It was Leonhard Euler who rewrote the zeta function in an unexpected way:

ζ(s) = Π 1/(1 − p⁻ˢ)   (the product taken over all prime numbers p)

In this presentation of ζ(s) we can see that the sum of terms over all natural numbers is equal to a product of terms over all prime numbers.
This is called the Golden Key, since this is the key Bernhard Riemann used to obtain the end result of his 1859 paper.
The conversion of zeta into an infinite product over the primes gave Riemann the tool he needed in his search for a function that delivers the exact number of primes ‘up to a given quantity’ (more on this later), meanwhile leaving us behind with the unproven Riemann Hypothesis as a by-product.
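We can watch Euler’s product work numerically. The sketch below (mine, not Derbyshire’s) truncates the product at the primes below 100,000 and compares the result with ζ(2) = π²/6:

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n in range(2, limit + 1) if is_prime[n]]

# Truncated Euler product for s = 2: it should creep towards
# zeta(2) = pi^2 / 6 as more primes are included.
euler_product = 1.0
for p in primes_up_to(100_000):
    euler_product *= 1.0 / (1.0 - p ** -2)

print(euler_product)       # close to 1.6449...
print(math.pi ** 2 / 6)
```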

Derbyshire gives an extensive elaboration on the proof of Euler’s product and describes how he searched long through many textbooks for a clear exposition. He finally arrived at Euler’s own proof, which turned out to be crystal-clear. The original can’t be beaten.

So now we have the zeta function expressed in terms of the prime numbers, but why the excitement over the non-trivial zeros? What do they have to do with all this?

  • Turning the Key

From this point on in Prime Obsession, the author builds quietly but steadily towards his most important revelation: turning the Golden Key. That is the great achievement of Bernhard Riemann’s paper; that’s what it is all about.

Riemann turned the key by inverting the zeta function in yet another way. Like this:

log ζ(s) / s = ∫ J(x)·x^(−s−1) dx   (the integral taken from 0 to ∞)

This is the calculus version of the Golden Key.

The J function under the integral is a step function over the prime numbers (and, by smaller steps, over their powers). The function runs horizontally along the x-axis until it reaches a prime number. There it takes a vertical step up and continues its horizontal tour towards the next prime. And on and on.

I must be honest: on my first read I missed this extremely important episode, probably still stunned by the insanely difficult manipulations that had just been applied to zeta.

After finishing the book I had an exchange of letters with John Derbyshire, because I had some additional questions and a few comments.
Here’s a short extract from it:



So the question remains: what did we really learn about the distribution of primes, besides that there is a tight connection between the primes and the non-trivial zeros of the zeta function?


That’s what we learned!  It is of great theoretical importance:  the biggest, strongest “bridge” from the math of numbers, counting, and arithmetic to the math of functions, limits, and calculus.

John Derbyshire

And then I saw it: the zeta function was drawn by Riemann into the realm of analysis.

Riemann’s bridge connected two previously independent areas of math, like Andrew Wiles later did with elliptic curves and modular forms. But the regions of number theory and analysis are much, much bigger. And Bernhard Riemann showed how the tools of these regions can be exchanged between them.

  • The Prime Number Theorem

Up to this revelation, Derbyshire did not present the non-trivial zeros as part of Riemann’s progress. So where are they and what role do they play? We’ll come to that.

Earlier I promised to say more about the computation of the number of primes ‘up to a given quantity’.
It was Carl Friedrich Gauss who suggested an approximation of this number of primes ‘up to a given quantity’ as:

π(N) ~ N / log(N)

The symbol π here is not used as the well-known number 3.1415926… but is overloaded: it now stands for the prime counting function. As an example, π(100) = 25 and π(1000) = 168.
Of course π(N) is just a notation. There is no formula behind it that gives us the exact number of primes up to N. They have to be counted, each one of them.
Furthermore, Derbyshire always uses the log function with base e. He does not care for the notation ln, so log(x) stands for logₑ(x); the twiddle sign ~ means ‘approximately equal to’.
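The counting really is just counting, as a simple sieve in Python shows; the sketch below reproduces π(100) = 25 and π(1000) = 168:

```python
def prime_pi(n):
    """pi(n): count the primes <= n by sieving and counting them one by one."""
    if n < 2:
        return 0
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(is_prime)

print(prime_pi(100))    # 25
print(prime_pi(1000))   # 168
```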

This extremely important approximation is called the Prime Number Theorem, the PNT for short.
The relative difference between the PNT estimate and π(N) dwindles away as N floats towards infinity. The theorem was finally proved in 1896.
Shortly after, an even better approximation was found in the log integral of N, shortened to Li(N).

  • The log integral

The old PNT still holds, because it has been proved that its relative difference with the real number of primes ‘up to a given quantity’ goes to zero at infinity.

But the log integral does a better job: its relative difference with π(‘a given quantity’) outranks the N/log(N) approach by far.
And of course its relative difference with the real number of primes also goes to zero at infinity.

A consequence of the (old) PNT is: the probability that N is prime is ~ 1/log N.
Starting from the function 1/log(t), calculating the area under this function gives us an approximation of the number of primes up to a given value.
That is done by integrating 1/log(t).

I’ll summarize: integration is the addition of infinitely many tiny portions, hence we slice the area into strips and add them up like:

Li(x) = ∫₀ˣ dt / log(t)

Here dt stands for the infinitesimal slices that are added up from 0 to x.
Unfortunately there is no simpler mathematical expression for this integral, so for ease of reading it is given the alias Li(x).
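A minimal numerical sketch shows how good Li really is. It applies Simpson’s rule after the substitution u = log t (which tames the integrand); the offset li(2) ≈ 1.04516378 is a known constant, and π(10⁶) = 78,498 is the known prime count:

```python
import math

def li(x, steps=200_000):
    """Li(x) ~= li(2) + integral from 2 to x of dt/log(t).
    Substituting u = log t turns the integrand into e^u / u,
    which composite Simpson's rule handles very accurately."""
    a, b = math.log(2.0), math.log(float(x))
    h = (b - a) / steps
    g = lambda u: math.exp(u) / u
    total = g(a) + g(b)
    total += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, steps // 2 + 1))
    total += 2 * sum(g(a + 2 * k * h) for k in range(1, steps // 2))
    return 1.04516378 + total * h / 3   # known constant li(2), an assumption I supply

x = 10 ** 6
print(round(li(x)))             # ~78628, versus pi(10^6) = 78498
print(round(x / math.log(x)))   # ~72382: the cruder N/log(N) estimate
```

Li misses by roughly 130 here, while N/log(N) misses by about 6,000: the log integral wins by a wide margin.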

I urge the reader to go to the site of WolframAlpha, where one finds a supercomputer waiting for assignments.
Try this: Li(100,000,000) − (number of primes < 100,000,000).

The answer is 754.37544803146756907360936898156559977812534574689857793623137915.
Rounded, it is 754.

The formidable machine calculates the relative error as:
(Li(100,000,000) − (number of primes < 100,000,000)) / (number of primes < 100,000,000) × 100 = 0.01309…%.
How’s that for an approximation?

From Derbyshire’s book I copied a table that shows the differences between the real value of π(N) and the approximations of both PNTs. Be aware that the English notation uses the comma as group separator and a point for the decimals, just the other way around from the Dutch method.

It can clearly be seen that the log integral Li(N) is the better approximation, but still there is a difference with the real number of primes given by π(N). This difference is called the error term, and it was Riemann’s achievement in his 1859 paper that he succeeded in eliminating it.

There is more to Li(x).

It appeared as if the Li(x) calculation always comes out slightly above the real value of π(x). But Littlewood proved in 1914 that Li(x) − π(x) changes from positive to negative and back infinitely many times.
His student Samuel Skewes proved that the first change must come before 10↑10↑10↑34, under the assumption that the Riemann Hypothesis is true.
This number has approximately 10↑(ten billion trillion trillion) digits. That’s not the number itself, that is the number of digits of the number.

  • Computing the error term

At the end of the second-to-last chapter of his book, Derbyshire pauses briefly to make an important summary.

  • π(x) can be expressed in terms of J(x), the primes step function.
  • By inversion, J(x) can be expressed in terms of the zeta function.
  • Therefore, π(x) can be expressed in terms of the zeta function, like this:

Expression 1.

In the last chapter he then repeats:
  • The prime counting function π can be written in terms of a step function, J.
  • The function J can be written in terms of Riemann’s zeta function ζ.


This means that the properties of the prime function π are in some way coded into the properties of ζ, among whose most intimate properties are the non-trivial zeros. We now have a first glimpse of the relation between π(x) and the non-trivial zeros of the zeta function.
Mind that we’ve swapped from N (counting) to x (measuring), because we are now in the realm of analysis, which works with real numbers.

And without hesitation he performs a last inversion. Expression 1 becomes:

J(x) = Li(x) − Σ Li(x^ρ) − log 2 + ∫ dt / (t(t² − 1) log t)   (the integral taken from x to ∞)

Expression 2.

Amazing of course, but what do we have here?
J(x) is, as we know now, tightly connected to the prime counter π(x).
And Li(x) is just the PNT. The better one.

The 2nd, 3rd and 4th terms are Riemann’s long-awaited corrections for the error term.
Of these we may for now forget the 3rd and 4th term, since they contribute only in the slightest way to the correction.

So it is all about the 2nd term. This one:

Σ Li(x^ρ)

This term causes the difference between the real number of primes below a given quantity, π(x), and its approximation Li(x), to vanish.

The ρ isn’t the italic letter p; it is the Greek letter rho, which stands for root.
The term is a summation of the log integral of ‘a given quantity’ raised to all values of ρ.
And these possible values of ρ are the non-trivial zeros of the zeta function. The whole infinity of them. How on earth did they appear in this term?

  • The non-trivial zeros

John Derbyshire describes this only in broad outline. It comes down to yet another inversion of the J(x) function, and he says: “The actual process of inversion is rather long and complex (in both senses of that word!), and most of the steps involve math beyond the level I am presenting here.”

I will be even briefer: the inversion comes down to expressing the zeta function in terms of its zeros. After all, zeta behaves in this respect like a polynomial, and polynomials can be expressed in terms of their zeros. “The trivial zeros conveniently vanished during the transformation,” and that leaves us with an infinity of terms involving the non-trivial zeros.

I asked Derbyshire how on earth Riemann could have managed such a summation without the help of a large array of supercomputers, and he answered: “…mathematicians have shortcuts for these things”, and he would say no more about it.

At the end of this chapter Derbyshire calculates the number of primes up to 1,000,000, of which we of course know the right value in advance: 78,498.

Eventually Riemann’s function delivered the result: 78,498. Impressive!
Though still a cumbersome computation for big numbers, it is much, much better than counting the primes one by one.

Mathematicians are curious people.
When they see a row of numbers like the non-trivial zeros, they immediately start to look for a pattern in the sequence.
The boldest statement about this is the Hilbert-Polya conjecture, which says:
The non-trivial zeros of the Riemann zeta function correspond to the eigenvalues of some Hermitian matrix.
What these guys had in mind was not a straightforward N-by-N matrix but a special one: the Gaussian random Hermitian matrix. In these matrices the elements are randomly chosen in a very peculiar way. All elements are complex numbers, and every element is mirrored across the lead diagonal (each element is the complex conjugate of its mirror image).

Derbyshire spends a considerable amount of time explaining what a Hermitian matrix is and what its important properties are: eigenvalues, trace, lead diagonal and its characteristic polynomial.
The most interesting and unexpected twist is his revelation that the eigenvalues are not quite random, as one might expect. They are repellently random, that is, within their randomly assigned space they keep as much distance from their neighbors as possible.
The keywords here are pair correlation and form factor. More on these concepts later.

From here the author reveals the line of thought of Hilbert and Polya. Their firm belief was based on the idea that the critical strip can be represented by ‘some Hermitian matrix’ of infinite size.

I can hardly clarify in a few words what is described in Prime Obsession in a whole dedicated chapter, but here goes.

Hilbert and Polya saw a great resemblance between the eigenvalues of the aforementioned Hermitian matrix and the non-trivial zeros of the Riemann zeta function.
They reasoned: the eigenvalues of a matrix that is filled with complex numbers are all, unexpectedly, real. And for the non-trivial zeros on the critical line (of the form ½ + zi), all z’s are, unexpectedly, real.

In other words, they observed a great similarity between a Hermitian matrix and the critical line: the most intimate properties of such a matrix – its eigenvalues – correspond to the most intimate properties of the Riemann zeta function – the non-trivial zeros on the critical line. So it is not far-fetched to conclude that those non-trivial zeros are the eigenvalues of ‘some’ Hermitian matrix.
I have to explain why the z coefficient of a non-trivial zero must be a real number and can’t be a complex number, something Derbyshire left out of his story.
Let’s assume that z is complex, say z = a + ¼i.
The non-trivial zero can now be written as ½ + (a + ¼i)i, which is ½ + ai + ¼i², and since i² = −1 the zero becomes ½ − ¼ + ai = ¼ + ai, which lies off the critical line and thus invalidates the RH.
So when the RH is true, all z’s are real.

But the story doesn’t end here.

In the early seventies, the young number theorist Hugh Montgomery had been working on ‘an investigation of the spacing between non-trivial zeros of the zeta function.’

Purely by coincidence he visited the Institute for Advanced Study in Princeton in the spring of 1972, where he met Freeman Dyson, who asked what he was working on these days. Montgomery showed him his results and Dyson got very excited: “That’s the form factor for the pair correlation of eigenvalues of random Hermitian matrices,” he said.

These matrices had been used by Dyson to describe the distribution of energy levels of subatomic particles. Freeman Dyson was a world-famous physicist (he died in 2020), but fortunately he also had a degree in math, with number theory as his favorite subject. Most physicists would not have understood what Montgomery showed him.

The final conclusion here: there is a link between the behavior of subatomic particles and our prime numbers.


  • The Riemann Hypothesis

Remember that the definition of the RH is:

All non-trivial zeros of the zeta function have real part 1/2.

In Prime Obsession, however, most of the focus lies on Riemann’s 1859 paper and its end result: Riemann’s correction of the PNT error term.

Does this make the RH inferior? No, absolutely not. But for Riemann’s great achievement – tying the prime numbers to the non-trivial zeros of the zeta function by inventing new tools for the job – it isn’t necessary for those non-trivial zeros to have their real part on the critical line. As long as they are in the critical strip his function works, and it has been proven that they are.

Nowadays a lot of math proofs begin with the phrase ‘assuming that the RH is true…’.
And should the RH turn out to be false, so will all these later proofs. Seems a bit like a house of cards. A great number of mathematicians think that the RH is true.
But then again, an equally great number of experts think it is false.

We will have to wait until some genius one day comes up with a proof.
Of it being true or false.

Exciting times!

Some bonus issues.

  • Bertrand’s postulate

In 1845 Joseph Bertrand stated that between each integer n > 1 and its double a prime number can always be found.
Chebyshev proved this in 1852.
Paul Erdös proved it again, but much more elegantly, in 1932, and a small rhyme was born:

Chebyshev said it, and I say it again,
there is always a prime number, between n and 2n.
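The postulate is easy to check by machine for small n; a brute-force Python sketch (my own illustration, not a proof):

```python
def has_prime_between(n):
    """Is there a prime p with n < p < 2n?  (Bertrand's postulate.)"""
    def is_prime(m):
        if m < 2:
            return False
        for d in range(2, int(m ** 0.5) + 1):
            if m % d == 0:
                return False
        return True
    return any(is_prime(p) for p in range(n + 1, 2 * n))

print(all(has_prime_between(n) for n in range(2, 5000)))   # True
```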

  • Goldbach conjecture

This conjecture says that every even integer ≥ 4 is the sum of two primes. That is the modern version.

Christian Goldbach originally stated around 1742 (in correspondence with Leonhard Euler): “Every positive even integer can be written as the sum of two primes.” The statement was later adjusted somewhat, since Goldbach was still in the habit of counting 1 as a prime number.
To this day the conjecture has not been proven, despite enormous effort. Research has confirmed it to be valid for values up to 4 × 10¹⁸.
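That kind of research boils down to searching for a prime pair for each even number, which a few lines of Python can sketch for small values:

```python
def goldbach_pair(n):
    """Return a pair of primes summing to even n >= 4, or None if none exists."""
    def is_prime(m):
        if m < 2:
            return False
        for d in range(2, int(m ** 0.5) + 1):
            if m % d == 0:
                return False
        return True
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(goldbach_pair(28))   # (5, 23)
print(all(goldbach_pair(n) is not None for n in range(4, 10_000, 2)))   # True
```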

  • Collatz conjecture (3n + 1)

In 1937 a man named Lothar Collatz got the curious idea to create a sequence of integers by applying a certain protocol to each previous number, thus obtaining the next term of the sequence.
This is the protocol: when the previous number is even, the next one in the sequence is the previous term divided by 2. If the previous number is odd, the next one is the previous term times 3 plus 1.
He found that after a certain number of iterations, the sequence reaches 1.

His conjecture is that no matter what number the sequence starts with, the procedure will eventually bring it to 1.
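The protocol fits in a few lines of Python, and playing with it shows its erratic character: the modest start value 27 famously needs 111 steps to come home.

```python
def collatz_steps(n):
    """Number of protocol applications needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))    # 8:  6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(collatz_steps(27))   # 111
```

Note that the while loop quietly assumes the conjecture: if some start value never reached 1, the function would simply run forever.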

Collatz’s little game hasn’t been proved; what’s more, a famous mathematician recently stated that the Collatz conjecture “is an extraordinarily difficult problem, completely out of reach of present day mathematics.”


January 2023


Sources consulted:

  • Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, by John Derbyshire. ISBN 0-309-08549-7
  • Fermat’s Last Theorem, by Simon Singh. ISBN 1-85702-837-6
  • Wikipedia