
What is a number?

What does it mean to know a number? This question might seem trivial—after all, we know that 2 + 2 = 4. But consider π: we celebrate "Pi Day" with 3.14159, yet no one has ever written down the complete value of π. We can approximate it to trillions of digits, but the true number remains forever beyond our grasp. So in what sense do we "know" π?

This isn't a philosophical puzzle. The question cuts to the heart of mathematics itself: What is a number? And more importantly, what does it mean to know the value of a number? As we'll discover, there are multiple answers to both questions, each revealing different aspects of mathematical truth.

From ancient civilizations measuring fields with π ≈ 3.16 to modern algorithms that embrace controlled approximation, mathematics has always balanced the pursuit of exact truth with the practical reality of inexact representation. This tension has driven some of our greatest discoveries—and continues to shape how we understand the very nature of mathematical knowledge.

Multiple Ways of Knowing

We "know" π not through its decimal expansion (which is impossible to complete) but through its definition: the ratio of a circle's circumference to its diameter. Mathematical knowledge often comes through relationships and definitions rather than explicit representation.

Ancient Wisdom: Knowing Through Approximation

The tension between exact knowledge and practical approximation is as old as mathematics itself. Ancient civilizations developed sophisticated ways of "knowing" numbers that couldn't be expressed exactly, creating a tradition that continues today.

The Ancient Egyptians were pragmatists who understood numbers through their utility. The scribe Ahmes, writing in the Rhind Papyrus (c. 1650 BC), recorded a rule for finding the area of a circle: take the diameter, remove one-ninth, and square the result. This implies π ≈ 256/81 ≈ 3.16—impressively close to the true value and good enough for measuring circular fields and granaries.

But the first person to truly grasp the nature of mathematical inexactness was Archimedes of Syracuse (287–212 BC). Using polygons with 96 sides to sandwich a circle's perimeter, he proved that 3 10/71 < π < 3 1/7, that is, 3.1408 < π < 3.1429. Crucially, Archimedes presented these as rigorous bounds, not the exact value: π could be pinned down with arbitrary precision even though no exact finite expression for it was at hand. (That π is in fact irrational would not be proved until Johann Lambert in the eighteenth century.)

The ratio of the circumference to the diameter lies between these bounds, but can never be expressed exactly as a ratio of whole numbers.
Archimedes
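
Archimedes' side-doubling scheme can be sketched numerically. Starting from regular hexagons, each doubling step combines the current circumscribed and inscribed half-perimeters; the recurrence below is the standard Borchardt–Archimedes form, run here in floating point (Archimedes himself worked with carefully chosen rational bounds on the square roots):

```python
import math

# a = half-perimeter of the circumscribed n-gon (upper bound on pi)
# b = half-perimeter of the inscribed n-gon   (lower bound on pi)
a, b, n = 2 * math.sqrt(3), 3.0, 6            # regular hexagons around/in a unit circle
while n < 96:
    a = 2 * a * b / (a + b)                   # harmonic mean -> new upper bound
    b = math.sqrt(a * b)                      # geometric mean -> new lower bound
    n *= 2
print(f"{n}-gons: {b:.4f} < pi < {a:.4f}")    # 96-gons: 3.1410 < pi < 3.1427
```

Doubling the number of sides four times takes the hexagon bounds (3 < π < 3.4641) down to Archimedes' 96-gon squeeze; each doubling roughly quadruples the precision.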

This ancient insight reveals a profound truth: mathematical knowledge isn't always about explicit representation. Sometimes it's about understanding the nature of a quantity and our relationship to it. The Pythagoreans learned this lesson painfully when they discovered that √2 couldn't be expressed as any ratio of integers—their "dangerous secret" that the cosmos of number had fundamental gaps.

The Hierarchy of Unknowability: From Irrational to Uncomputable

As mathematics matured, we discovered that unknowability comes in degrees. Not all "unknowable" numbers are created equal, and understanding these distinctions reveals different ways that mathematical knowledge can be structured.

Types of Numbers

Rational: Can be expressed as p/q where p, q are integers
Irrational: Cannot be expressed as a ratio, but may satisfy polynomial equations
Transcendental: Cannot be the root of any polynomial with integer coefficients
Uncomputable: Cannot be calculated by any algorithm, even in principle

Consider the progression: We know √2 ≈ 1.41421356... is irrational, but we can compute it to any desired precision and it satisfies the simple equation x² = 2. The number π is transcendental—proven by Ferdinand von Lindemann in 1882 to be unreachable by any finite sequence of algebraic operations.
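
The claim that √2 can be computed to any desired precision can be made concrete with Newton's method applied to x² − 2 = 0, a standard technique (not specific to this essay). Note that every iterate is an exact rational number: we approach √2 without ever holding it.

```python
from fractions import Fraction

# Newton's method for x^2 = 2: x_{n+1} = (x_n + 2/x_n) / 2.
# Each step roughly doubles the number of correct digits, and every
# iterate is an exact rational approximation to the irrational sqrt(2).
x = Fraction(3, 2)            # initial guess
for _ in range(5):
    x = (x + 2 / x) / 2
print(float(x))               # 1.4142135623730951 (correct to double precision)
```

Five iterations already exceed double precision; the exact rational `x` carries dozens of correct digits, and further iterations push the error down quadratically.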

Lindemann's Theorem and Galois Theory

π is transcendental over the rational numbers, meaning it cannot be the root of any polynomial with rational coefficients.

Consequence: Combined with Galois theory's characterization of constructible numbers, this proves the impossibility of squaring the circle. Constructible numbers form a very specific algebraic structure, and transcendental numbers lie forever outside this structure.

But there's an even deeper level of unknowability: uncomputable numbers. These numbers are well-defined mathematically but cannot be calculated by any algorithm, no matter how much time or computational power we have.

Chaitin's Constant: The Ultimate Unknown

Perhaps the most profound example of mathematical unknowability is Chaitin's constant Ω (omega), introduced by Gregory Chaitin in the mid-1970s. This number represents the probability that a randomly constructed computer program will halt—a seemingly simple question with profound implications.

Chaitin's Constant

$$\Omega = \sum_{p \text{ halts}} 2^{-|p|}$$

where the sum ranges over all programs p (for a fixed prefix-free universal machine) that halt, and |p| is the length of program p in bits.

Ω is perfectly well-defined: it's a specific real number between 0 and 1. Yet it's fundamentally uncomputable. Knowing the first n bits of Ω would let you decide the halting problem for every program of at most n bits, and with enough bits settle famous open questions such as Goldbach's conjecture, since a counterexample search can be phrased as a program that halts exactly when the conjecture fails. As Chaitin put it, Ω contains "the maximum amount of mathematical truth that can be compressed into n bits of information."
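
Ω itself is uncomputable, but its defining sum can be illustrated with a toy machine whose halting set we simply stipulate. The set below is hypothetical, chosen only to show the 2^(−|p|) weighting and the role of prefix-freeness; for the real Ω, no such finite list can exist.

```python
# Toy illustration of Omega's defining sum. We *stipulate* a prefix-free
# set of halting programs (bitstrings) -- for the real Omega no such
# list can be produced, which is exactly why Omega is uncomputable.
halting_programs = {"00", "010", "011", "10"}    # hypothetical machine

# Prefix-freeness: no halting program is a prefix of another, so the
# weights 2^(-|p|) sum to at most 1 and Omega is a genuine probability.
assert not any(p != q and q.startswith(p)
               for p in halting_programs for q in halting_programs)

omega = sum(2.0 ** -len(p) for p in halting_programs)
print(omega)    # 0.25 + 0.125 + 0.125 + 0.25 = 0.75
```

For the real Ω, each newly discovered halting program raises a lower bound on this sum, but no algorithm can ever tell you how close the bound is to the true value.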

Chaitin's constant represents the deepest form of mathematical unknowability: we can define it precisely, prove it exists, and even calculate its first few digits, but the bulk of its information is forever beyond reach.

This reveals something profound about mathematical knowledge: definability and computability are different things. We can know what a number is without being able to know what it equals.

Modern Algorithms: Embracing Useful Inexactness

In our digital age, we've learned to harness inexactness as a feature, not a bug. Modern algorithms often deliberately sacrifice perfect accuracy for practical gains—a sophisticated evolution of the ancient Egyptian approach to π.

HyperLogLog: Trading Precision for Scale

Perhaps no modern algorithm better embodies the philosophy of useful approximation than HyperLogLog. Developed by Philippe Flajolet, Éric Fusy, Olivier Gandouet, and Frédéric Meunier in 2007, building on the earlier Flajolet–Martin and LogLog sketches, it solves a deceptively simple problem: counting distinct elements in massive datasets.

Exact counting requires storing every unique item—potentially terabytes of memory. HyperLogLog estimates cardinality with remarkable accuracy using just kilobytes of space, achieving roughly 2% standard error when counting billions of unique items.

HyperLogLog Insight

If the hashed values you've seen include one with k leading zero bits, you've probably seen about 2^k distinct elements.

Major tech companies have embraced this controlled imprecision. Google Analytics uses HyperLogLog to estimate unique visitors, Redis implements it for cardinality estimation, and Apache Spark employs it for big data processing. The algorithm represents a conscious choice: accept small errors to achieve massive scalability gains.
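
The core of the algorithm fits in a few lines of Python. This is a minimal raw estimator, a sketch rather than the full published algorithm (which adds small- and large-range corrections); the register count and choice of SHA-1 as the hash are illustrative.

```python
import hashlib


class HyperLogLog:
    """Minimal HyperLogLog sketch: raw estimator only."""

    def __init__(self, p: int = 12):
        self.p = p                        # 2^p registers
        self.m = 1 << p
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)   # bias constant for m >= 128

    def add(self, item: str) -> None:
        # 64-bit hash: first p bits choose a register, the rest are examined
        # for leading zeros (the "2^k distinct elements" insight).
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)
        rest = h & ((1 << (64 - self.p)) - 1)
        rank = (64 - self.p) - rest.bit_length() + 1  # leading zeros + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self) -> float:
        # Harmonic mean of 2^register across all registers, bias-corrected.
        harmonic = sum(2.0 ** -r for r in self.registers)
        return self.alpha * self.m * self.m / harmonic


hll = HyperLogLog()
for i in range(100_000):
    hll.add(f"user-{i}")
print(f"estimate for 100,000 distinct items: {hll.estimate():,.0f}")
```

With p = 12 (4,096 one-byte registers, a few kilobytes of state), the theoretical standard error 1.04/√m is about 1.6%; production implementations such as Redis's use 16,384 registers for roughly 0.8%.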

The Nature of Mathematical Knowledge

So what does it mean to "know" a number? Our journey from ancient approximations to modern algorithms reveals that mathematical knowledge comes in many forms:

Ways of Knowing Numbers

Through Definition: π is the ratio of circumference to diameter
Through Relationships: e is the unique number where d/dx(e^x) = e^x
Through Equations or Procedures: √2 is computed via x² = 2 using iterative methods
Through Properties: Ω is the halting probability with maximum algorithmic information
Through Approximation: π ≈ 3.14159... to any desired precision

The profound insight is that definitions and relationships are often more fundamental than explicit representations. We know π not by its decimal expansion (which is impossible to complete) but by what it means. We know √2 through computational procedures that converge to its value. We know Ω through its fundamental properties as the measure of algorithmic randomness, even though we can never calculate it.

Remarkably, uncomputable numbers form by far the largest class. While there are only countably many computable numbers (since there are only countably many possible algorithms), there are uncountably many real numbers. This means that almost all numbers cannot even be specified by any procedure; they exist in a realm beyond human description or computation. This view aligns with mathematical Platonism: the idea that numbers "exist" in an abstract realm of perfect forms. The definition is the reality; any finite representation is merely a shadow on the cave wall.

All exact science is dominated by the idea of approximation.
Bertrand Russell

But regardless of one's philosophical stance, the practical reality is clear: we can operate with perfect mathematical concepts while accepting the limitations of finite representation. This isn't a flaw in mathematics—it's a feature that allows us to work with infinite truths using finite minds.

The Future of Mathematical Knowledge

As we push the boundaries of computation and mathematical understanding, the question of what it means to "know" a number becomes ever more relevant. We're discovering new types of mathematical objects, like Chaitin's constant, that challenge our intuitions about knowledge and computability.

Meanwhile, practical algorithms increasingly embrace approximation as a design principle. From machine learning models that work with probabilistic knowledge to quantum computers that operate with inherent uncertainty, we're learning to extract useful information from inexact processes.

Perhaps the deepest lesson is that mathematical knowledge is not about having infinite decimal places, but about having the right conceptual tools to think clearly about infinity itself. Whether we're ancient Egyptians measuring grain stores with π ≈ 3.16 or modern engineers launching spacecraft with 15-digit precision, we're all participants in the same grand endeavor: finite beings grasping infinite truths, one insight at a time. And perhaps most fundamentally, Kurt Gödel showed us that any consistent formal system rich enough to express arithmetic will contain true statements it can never prove. Some truths will remain unknown not because of computational limits, but because of the very nature of logical systems themselves.

The Paradox Resolved

The "illusion of precision" isn't really an illusion—it's a recognition that mathematical truth operates on multiple levels. Perfect precision exists in the realm of definitions and relationships. Approximate precision serves us in the realm of computation and application. Both are valid forms of mathematical knowledge.

In the end, mathematics teaches us that knowing a number is less about writing it down completely and more about understanding its place in the vast web of mathematical relationships. The ancient Greeks worried that discovering irrational numbers meant the cosmos of number was broken. Instead, they had discovered that it was far richer and more beautiful than they had ever imagined.

Thank you to Nadir Hajouji and Josh Bauer for reviewing this essay.