Introduction
In 1644, the Italian mathematician Pietro Mengoli posed a question that would haunt the greatest minds in Europe for nearly a century: what is the exact value of the sum $\sum_{n=1}^{\infty} \frac{1}{n^2}$? The series clearly converges — each term shrinks rapidly — but to what? Mathematicians could compute partial sums to arbitrary precision, yet the exact closed form eluded everyone. The Bernoulli family, among the most prolific mathematical dynasties in history, tried and failed. Jakob Bernoulli himself publicly admitted defeat, writing that he would be grateful to anyone who could find the answer.
Ninety-one years later, in 1735, a 28-year-old Leonhard Euler announced the solution. The answer was not a ratio of integers, not an algebraic number, but something far stranger: $\frac{\pi^2}{6}$. The sum of reciprocal squares of the natural numbers — a purely arithmetic object built from integers alone — equals a constant defined by the geometry of the circle divided by six. This result shocked the mathematical world. It was the first deep, unexpected connection between discrete arithmetic and continuous geometry, and it launched an entire branch of mathematics that would ultimately lead to the Riemann zeta function and some of the deepest unsolved problems in number theory.

The Problem
The Basel Problem asks for the exact value of the infinite series:

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$
Before we can talk about the exact value, we need to know this series converges. One way to see this is by comparison with a telescoping series. Notice that for $n \ge 2$:

$$\frac{1}{n^2} < \frac{1}{n(n-1)} = \frac{1}{n-1} - \frac{1}{n}$$
So the partial sum $S_N = \sum_{n=1}^{N} \frac{1}{n^2}$ satisfies:

$$S_N < 1 + \sum_{n=2}^{N}\left(\frac{1}{n-1} - \frac{1}{n}\right) = 2 - \frac{1}{N} < 2$$
The partial sums are increasing and bounded above by 2, so the series converges. But to what? The first several partial sums tell a suggestive story: $S_{10} \approx 1.5498$, $S_{100} \approx 1.6350$, $S_{1000} \approx 1.6439$, creeping toward a limit near $1.6449$.
That limit, $1.644934\ldots$, is suspiciously close to $\frac{\pi^2}{6} \approx 1.644934$. But knowing the decimal expansion is not the same as proving the identity. Mathematicians in the early 18th century could compute these partial sums to many decimal places. They could even conjecture the answer. What they could not do was prove it. The gap between numerical evidence and rigorous proof is where the real mathematics lives.
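The telescoping bound is easy to sanity-check numerically. Here is a minimal sketch (the helper name `partial_sum` is mine, not from any library):

```python
import math

# Verify the telescoping bound: S_N <= 2 - 1/N < 2 for every N.
# The bound comes from 1/n^2 < 1/(n-1) - 1/n for n >= 2.
def partial_sum(N):
    """Partial sum of 1/n^2 for n = 1, ..., N."""
    return sum(1.0 / n**2 for n in range(1, N + 1))

for N in [2, 10, 100, 10_000]:
    s = partial_sum(N)
    bound = 2.0 - 1.0 / N
    print(f"N={N:>6}: S_N = {s:.6f} <= {bound:.6f}  ({s <= bound})")
```

Every partial sum sits safely below its bound, which is exactly what the comparison argument guarantees.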
Failed Attempts
The Basel Problem resisted solution for 91 years, and it is worth understanding why. This was not a problem that lacked attention. Some of the finest mathematical minds of the 17th and early 18th centuries worked on it.
Pietro Mengoli posed the problem in 1644 but could not solve it himself. Jakob Bernoulli, who made foundational contributions to probability theory and calculus, attempted the problem repeatedly and published his frustration. He proved that the sum was bounded between $1$ and $2$, but tightening these bounds proved extraordinarily difficult. His brother Johann Bernoulli, equally brilliant and famously competitive, also failed.
The difficulty was not computational but conceptual. The tools available — geometric series, comparison tests, telescoping sums — could establish convergence and approximate the value, but no known technique could produce an exact closed form. The problem required a fundamentally new idea: treating the sine function as an infinite polynomial and factoring it by its roots. This was the insight that only Euler had the audacity to try.
The Basel Problem is named after Basel, Switzerland, the hometown of the Bernoulli family and also of Euler. It is fitting that the problem that defeated the Bernoullis was solved by their most famous student — Euler studied under Johann Bernoulli in Basel before moving to St. Petersburg.
Euler's Brilliant Insight — The sin(x)/x Product
Euler's proof begins with one of the most important functions in mathematics: $\sin(x)$. The Taylor series expansion of the sine function around zero is:

$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
If we divide both sides by $x$, we obtain the function $\frac{\sin x}{x}$, sometimes called the sinc function:

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots$$
Now comes the crucial observation. The function $\frac{\sin x}{x}$ has a very specific set of zeros. Since $\sin(x) = 0$ when $x = k\pi$ for any nonzero integer $k$, and dividing by $x$ removes the root at $x = 0$, the function has zeros precisely at:

$$x = \pm\pi,\ \pm 2\pi,\ \pm 3\pi,\ \ldots$$
Euler's key insight was an analogy with polynomials. If a polynomial $p(x)$ of degree $n$ has roots $r_1, r_2, \ldots, r_n$ and satisfies $p(0) = 1$, then it factors as:

$$p(x) = \left(1 - \frac{x}{r_1}\right)\left(1 - \frac{x}{r_2}\right)\cdots\left(1 - \frac{x}{r_n}\right)$$
Euler boldly extended this to the transcendental function $\frac{\sin x}{x}$, which is not a polynomial but an “infinite polynomial” with infinitely many roots. Since the roots come in pairs $\pm k\pi$, pairing them gives factors of the form $\left(1 - \frac{x}{k\pi}\right)\left(1 + \frac{x}{k\pi}\right) = 1 - \frac{x^2}{k^2\pi^2}$. Therefore:

$$\frac{\sin x}{x} = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$
In compact notation:

$$\frac{\sin x}{x} = \prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right)$$
This is the infinite product representation of $\frac{\sin x}{x}$. Euler used it before it was rigorously justified. The rigorous foundation came later through Weierstrass's factorization theorem, which confirms that entire functions can be expressed as products over their zeros (with appropriate convergence factors). Euler's mathematical intuition was running decades ahead of the available theory, and he was right.
Euler treated a transcendental function as if it were a polynomial with infinitely many roots. This was not rigorous by modern standards, and contemporaries questioned it. But the answer it produced was correct, and the approach opened doors that led to some of the deepest results in complex analysis.
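Although Euler could not justify the product rigorously, we can at least check it numerically: truncating the product after $K$ factors should approach $\sin(x)/x$ as $K$ grows. A quick sketch, evaluated at an arbitrary test point of my choosing ($x = 1.3$):

```python
import math

# Compare sin(x)/x with a truncated Euler product over the first K factors:
#   prod_{k=1..K} (1 - x^2 / (k^2 * pi^2))
def euler_product(x, K):
    prod = 1.0
    for k in range(1, K + 1):
        prod *= 1.0 - x**2 / (k**2 * math.pi**2)
    return prod

x = 1.3  # arbitrary test point, not a multiple of pi
target = math.sin(x) / x
for K in [1, 10, 100, 10_000]:
    print(f"K={K:>6}: product = {euler_product(x, K):.10f}, sin(x)/x = {target:.10f}")
```

The truncated product converges to $\sin(x)/x$ rather slowly (the omitted factors each differ from 1 by roughly $x^2/(k^2\pi^2)$), but the agreement improves steadily with $K$, just as the factorization predicts.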
Matching Coefficients
We now have two representations of the same function $\frac{\sin x}{x}$. Euler's strategy is to expand both and compare the coefficients of $x^2$.
The Taylor Series Form
From the Taylor series, we already have:

$$\frac{\sin x}{x} = 1 - \frac{x^2}{6} + \frac{x^4}{120} - \cdots$$

The coefficient of $x^2$ in this expansion is $-\frac{1}{6}$.
The Infinite Product Form
Now consider the product expansion:

$$\prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right) = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$

When we multiply this out, the constant term is clearly $1$ (the product of all the 1s). The $x^2$ coefficient comes from choosing the $-\frac{x^2}{k^2\pi^2}$ term from exactly one factor and the $1$ from every other factor. Summing over all such choices:

$$\text{coefficient of } x^2 = -\sum_{k=1}^{\infty} \frac{1}{k^2\pi^2}$$

Factoring out $-\frac{1}{\pi^2}$:

$$\text{coefficient of } x^2 = -\frac{1}{\pi^2}\sum_{k=1}^{\infty} \frac{1}{k^2}$$
Setting Them Equal
Since both expressions represent the same function, their coefficients of $x^2$ must be equal:

$$-\frac{1}{6} = -\frac{1}{\pi^2}\sum_{k=1}^{\infty} \frac{1}{k^2}$$

Multiplying both sides by $-\pi^2$:

$$\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$$
And there it is. Ninety-one years of effort, resolved in a single stroke of algebraic comparison. The sum of the reciprocal squares of all positive integers is exactly $\frac{\pi^2}{6}$.
The entire proof reduces to one idea: express the same function two different ways, then compare coefficients. The Taylor series gives you the coefficient from calculus. The infinite product gives you the coefficient from the zeros of sine. Setting them equal produces the identity. This technique of “matching coefficients” became one of Euler's most powerful tools.
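The coefficient comparison itself can be checked numerically: the $x^2$ coefficient of the product, $\sum_k \frac{1}{k^2\pi^2}$ (dropping the shared minus sign), should approach the Taylor coefficient $\frac{1}{6}$. A small sketch:

```python
import math

# The x^2 coefficient of the infinite product is -sum_k 1/(k^2 * pi^2);
# the Taylor series says the same coefficient is -1/6.
# Compare the two (without the shared minus sign).
K = 1_000_000
coeff = sum(1.0 / (k**2 * math.pi**2) for k in range(1, K + 1))
print(f"sum 1/(k^2 pi^2) over {K:,} terms = {coeff:.8f}")
print(f"1/6                               = {1 / 6:.8f}")
```

The partial sum agrees with $\frac{1}{6}$ to about six decimal places at a million terms, consistent with the $\sim 1/K$ truncation error of the series.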
Why This Was Revolutionary
It is hard to overstate the impact of Euler's result. Before 1735, mathematicians had no reason to expect that a sum built entirely from integers — $1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$ — would have anything to do with $\pi$, a constant defined by the ratio of a circle's circumference to its diameter. The Basel Problem revealed a hidden bridge between arithmetic and geometry that no one had suspected.

Euler did not stop at $\zeta(2) = \frac{\pi^2}{6}$. By matching higher-order coefficients in the same product-series comparison, he computed the exact values of $\zeta(2n) = \sum_{k=1}^{\infty} \frac{1}{k^{2n}}$ for all positive even integers:

$$\zeta(4) = \frac{\pi^4}{90}, \qquad \zeta(6) = \frac{\pi^6}{945}, \qquad \zeta(8) = \frac{\pi^8}{9450}, \qquad \ldots$$
The general pattern, later formalized using Bernoulli numbers $B_{2n}$, is:

$$\zeta(2n) = \frac{(-1)^{n+1} B_{2n} (2\pi)^{2n}}{2\,(2n)!}$$
Every even zeta value is a rational multiple of a power of . This is a deep structural fact about the integers and their relationship to the circle.
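The rational coefficients can be generated directly from the Bernoulli-number formula. A sketch using exact rational arithmetic (the helper `bernoulli` implements the standard recurrence $\sum_{k=0}^{m} \binom{m+1}{k} B_k = 0$ for $m \ge 1$):

```python
from fractions import Fraction
from math import comb, factorial

# Compute Bernoulli numbers B_0, ..., B_m via the standard recurrence,
# then read off the rational coefficient q_n in zeta(2n) = q_n * pi^(2n):
#   q_n = (-1)^(n+1) * B_{2n} * 2^(2n) / (2 * (2n)!)
def bernoulli(m_max):
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, m_max + 1):
        s = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

B = bernoulli(8)
for n in [1, 2, 3, 4]:
    q = Fraction((-1) ** (n + 1)) * B[2 * n] * 2 ** (2 * n) / (2 * factorial(2 * n))
    print(f"zeta({2 * n}) = {q} * pi^{2 * n}")
# zeta(2) = 1/6 * pi^2
# zeta(4) = 1/90 * pi^4
# zeta(6) = 1/945 * pi^6
# zeta(8) = 1/9450 * pi^8
```

Every coefficient comes out as an exact rational number, which is precisely the structural fact stated above.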
But here is the mystery that remains: the odd zeta values — $\zeta(3), \zeta(5), \zeta(7), \ldots$ — resist all similar characterization. In 1978, Roger Apéry stunned the mathematical world by proving that $\zeta(3)$ is irrational (it is now called Apéry's constant, approximately $1.2020569$). But we still do not know whether $\zeta(3)$ is related to $\pi^3$ in any natural way. We do not even know whether $\zeta(5)$ is irrational. The even-odd asymmetry in the zeta function is one of the deepest unsolved puzzles in number theory.
Euler's work on the Basel Problem also laid the foundation for the Riemann zeta function, which Bernhard Riemann extended to the complex plane in 1859. Riemann's insight was that the function $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$ could be analytically continued to all complex values of $s$ (except $s = 1$), and that the distribution of prime numbers is intimately connected to the zeros of this function. The Riemann Hypothesis — that all nontrivial zeros of $\zeta(s)$ lie on the line $\operatorname{Re}(s) = \frac{1}{2}$ — remains the most important unsolved problem in mathematics. It is, in a very real sense, a direct descendant of Euler's Basel Problem. I wrote more about this connection in my post on unsolved conjectures in mathematics.
The line from the Basel Problem to the Riemann Hypothesis is direct: Euler computed specific values of the zeta function. Riemann extended it to the complex plane and connected it to prime numbers. The question of where this function vanishes became the central problem of analytic number theory. It all started with the sum of reciprocal squares.
A Modern Proof Sketch — Parseval's Theorem
Euler's proof, while correct, relied on the unproven assumption that transcendental functions can be factored like polynomials. Over the following centuries, mathematicians found numerous independent proofs of the Basel identity. One of the most elegant uses Fourier analysis — a completely different branch of mathematics that did not exist in Euler's time.
Consider the function $f(x) = x$ on the interval $[-\pi, \pi]$. Its Fourier series expansion is:

$$x = 2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin(nx)$$
The Fourier coefficients are $a_n = 0$ (since $f$ is odd) and $b_n = \frac{2(-1)^{n+1}}{n}$. Now we invoke Parseval's theorem, which states that for a function $f$ with Fourier coefficients $a_n$ and $b_n$:

$$\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right)$$
Parseval's theorem is, at its core, a statement about energy conservation: the total energy of a signal equals the sum of energies of its frequency components. Let us compute each side for .
The Left Side

$$\frac{1}{\pi}\int_{-\pi}^{\pi} x^2\,dx = \frac{1}{\pi}\cdot\frac{2\pi^3}{3} = \frac{2\pi^2}{3}$$
The Right Side
Since $a_0 = 0$ and $a_n = 0$ for all $n$, and $b_n^2 = \frac{4}{n^2}$, we get:

$$\sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) = \sum_{n=1}^{\infty} \frac{4}{n^2} = 4\sum_{n=1}^{\infty} \frac{1}{n^2}$$
Equating Both Sides
Parseval's theorem gives us:

$$\frac{2\pi^2}{3} = 4\sum_{n=1}^{\infty} \frac{1}{n^2}$$
Dividing both sides by 4:

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$
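This identity, too, can be checked term by term: summing $b_n^2 = \frac{4}{n^2}$ should approach the left side $\frac{2\pi^2}{3}$. A small numerical sketch:

```python
import math

# Parseval for f(x) = x on [-pi, pi]:
#   left side:  (1/pi) * integral of x^2 over [-pi, pi] = 2*pi^2/3
#   right side: sum of b_n^2, with b_n = 2*(-1)^(n+1)/n
lhs = 2.0 * math.pi**2 / 3.0
for N in [10, 1_000, 100_000]:
    rhs = sum((2.0 * (-1) ** (n + 1) / n) ** 2 for n in range(1, N + 1))
    print(f"N={N:>7}: sum b_n^2 = {rhs:.8f}  (target {lhs:.8f})")
```

The sign $(-1)^{n+1}$ disappears under the square, so the right side is just $4\sum 1/n^2$, and it converges to $\frac{2\pi^2}{3}$ exactly as Parseval demands.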
The same answer, derived from a completely independent set of ideas. The fact that Euler's product-based argument and the Fourier-analytic approach converge on the same identity is a deep confirmation. The result is not an accident of one method. It is a structural truth about the relationship between integers and the circle that manifests through every lens we turn on it.
Euler's proof uses the zeros of the sine function. The Fourier proof uses the orthogonality of sines and cosines. Both arrive at the same identity. In mathematics, when two completely different approaches yield the same result, it is a strong signal that the result touches something fundamental.
Python Verification
One of the things I love about mathematics is that you can verify deep theoretical results with a few lines of code. Here is a Python script that computes the Basel sum numerically and compares it to $\frac{\pi^2}{6}$:
```python
import math

# Compute partial sums of the Basel series: sum of 1/n^2
def basel_partial_sum(N):
    """Compute the partial sum of 1/n^2 for n = 1, 2, ..., N."""
    return sum(1.0 / n**2 for n in range(1, N + 1))

# The exact answer
exact = math.pi**2 / 6

# Show convergence at different partial sums
print("Basel Problem: sum of 1/n^2 = pi^2/6")
print(f"{'N':>12} {'Partial Sum':>18} {'pi^2/6':>18} {'Error':>14}")
print("-" * 68)
for N in [1, 5, 10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    s = basel_partial_sum(N)
    print(f"{N:>12,} {s:>18.12f} {exact:>18.12f} {abs(s - exact):>14.2e}")

# Output:
#            N        Partial Sum             pi^2/6          Error
# --------------------------------------------------------------------
#            1     1.000000000000     1.644934066848       6.45e-01
#            5     1.463611111111     1.644934066848       1.81e-01
#           10     1.549767731166     1.644934066848       9.52e-02
#          100     1.634983900185     1.644934066848       9.95e-03
#        1,000     1.643934566682     1.644934066848       1.00e-03
#       10,000     1.644834071848     1.644934066848       1.00e-04
#      100,000     1.644924066898     1.644934066848       1.00e-05
#    1,000,000     1.644933066849     1.644934066848       1.00e-06
# The error decreases as ~1/N, confirming convergence

# For an exact verification, we can use the mpmath library:
try:
    from mpmath import mp, pi, nsum, inf

    mp.dps = 50  # 50 decimal places
    result = nsum(lambda n: 1 / n**2, [1, inf])
    target = pi**2 / 6
    print("\nHigh-precision verification (50 digits):")
    print(f"  sum 1/n^2 = {result}")
    print(f"  pi^2/6    = {target}")
    print(f"  Match: {str(result)[:50] == str(target)[:50]}")
except ImportError:
    print("\n(Install mpmath for arbitrary-precision verification)")
```

The convergence rate of the partial sums is $O(1/N)$, which is relatively slow — you need a million terms to get six decimal places. This is because the error in truncating the series at $N$ is approximately $\frac{1}{N}$. But the mathematical identity is exact. The partial sums converge to $\frac{\pi^2}{6}$ not approximately but precisely, to every decimal place, forever.
Conclusion
I keep coming back to the Basel Problem because it captures something essential about mathematics: the most profound truths are often hidden in the simplest questions. The sum $\sum_{n=1}^{\infty} \frac{1}{n^2}$ looks like it should have a boring answer. It is just adding up fractions with square denominators. There is no circle in sight, no geometry, no trigonometry. And yet the answer is $\frac{\pi^2}{6}$, a quantity that encodes the circumference-to-diameter ratio of every circle that has ever existed or ever will.
This is what drew me to mathematics in the first place. As someone who spends most of my time building AI systems and studying for medical school, I sometimes get asked why I care about 18th-century number theory. The answer is that results like the Basel Problem train a specific kind of thinking: the conviction that hidden structure exists and can be found. When I am debugging a machine learning pipeline or studying a biochemical pathway, I carry the same instinct — that beneath the surface complexity, there is an elegant organizing principle waiting to be uncovered.
Euler was 28 when he proved the Basel identity. He had decades of extraordinary work ahead of him. But this was the result that announced him to the world, that showed a young mathematician could see connections invisible to his predecessors. The sum of reciprocal squares is not just a formula. It is a proof that arithmetic and geometry are two languages describing the same underlying reality, and that the deepest mathematical surprises come not from exotic constructions but from asking the simplest questions and refusing to stop until you find the answer.
A sum built from integers. An answer built from the circle. The Basel Problem is a permanent reminder that mathematics is not a collection of separate subjects — it is one vast, interconnected structure, and the most beautiful results are the bridges between its parts.