\chapter{$\Omega$: with a bang: meta-mathematical closure}
% Originally,
% a letter to Josh Nichols-Barrer, 27 Jul 2006
% filed as
% misc/cur/1153988646.22095_3.beatrice:2,S
% ...we'd met up at Grace's birthday party
Here I describe a possible elegant end to core mathematics:
the development (and solution) of a meta-mathematically closed theory.
This kind of perspective dates back to Hilbert's meta-mathematics;
mine is distinct in that I'm not talking about consistency or foundations
-- rather, about a more categorical (less logical) form of meta-mathematics.
This arose from thinking ``what's the real point of self-reference?''
(beyond the foundational question of internal proofs of consistency),
particularly, what's the point of representable functors
(notably introducing stacks and spectra)
and what are ($n$-)category theorists up to?
(Partly the answer is simply
``closure under certain operations means good (algebraic) structure'',
but I want a deeper meaning.)
\section{An analogy}
I imagine a formal analogy between:
\begin{itemize}
\item mathematical theories :: to solve math problems
\item algebraic extensions :: to solve polynomials
\end{itemize}
\dots and wonder about the possibilities of ``meta-mathematical closure''.
That is, you start life with rational numbers.
Then you can't solve $x^2 = 2$, so you throw in $\sqrt{2}$.
Then you can't solve $x^2 = \sqrt{2}$, so you throw in more.
Eventually you realize that it's better to just take the
algebraic closure, and then all your polynomials can be solved.
So if you just want to solve every polynomial, you can go to
$\bar \bQ$ (or if you're a real completist (no pun intended),
$\bC$ and $\bC_p$), and you're done.
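Schematically (using the blackboard-letter macros from above), the tower and the closure that short-circuits it:
\[
\bQ \;\subset\; \bQ(\sqrt{2}) \;\subset\; \bQ\bigl(\sqrt{2},\sqrt[4]{2}\bigr)
\;\subset\; \cdots \;\subset\; \bar\bQ
= \bigl\{\alpha \in \bC : f(\alpha) = 0
\text{ for some nonzero } f \in \bQ[x]\bigr\}.
\]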
Similarly, you start life with, say, Euclidean geometry and
basic number theory. Then you develop Galois theory to
understand why you're having trouble trisecting angles and
solving quintics. Then you develop group theory to
understand what's going on in Galois theory (and Lie theory
to understand why some PDEs are easier to solve than others;
remember that you developed calculus to compute areas of circles
and parabolae).
Thus theory spawns theory, ever raising new questions.
Sound familiar?
I would like to take the ``theoretical closure'',
and obtain a theory that answers all questions raised by itself.
Recall the digraph of math, with an edge from theory to spawned theory.
I feel that there is a core of tightly connected ``central'' theories,
and that one can imagine this core having a `closed' theory.
\section{Meta-mathematical closure}
One could try to get a ``complete'' (that is, ``theoretically closed'') theory
by repeated extension,
adding theories only when necessary, but this may never end:
following the algebraic closure analogy, the algebraic closure
is an \emph{infinite} extension.
Conveniently, there is an elegant characterization of the
algebraic closure: it's just solutions of polynomials over
the original field -- you don't need to do an infinite iterative
process of ``extend, extend the extension, etc.'',
and similarly, one might hope for an elegant characterization of such a
``grand unified theory of (part of) math''.
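The reason a single pass suffices is the transitivity of algebraicity:
if $\alpha$ satisfies $x^n + a_{n-1}x^{n-1} + \dots + a_0$ with each $a_i$
algebraic over $F$, then
\[
[F(\alpha):F] \;\le\; [F(a_0,\dots,a_{n-1},\alpha):F]
= [F(a_0,\dots,a_{n-1},\alpha):F(a_0,\dots,a_{n-1})]\,
[F(a_0,\dots,a_{n-1}):F] \;<\; \infty,
\]
so adjoining roots of polynomials over the one-step extension yields
nothing beyond the one-step extension itself.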
Even better, you know about the complex numbers and the fundamental theorem
of algebra, so you have a concrete algebraically closed field;
similarly, we might hope for a concrete construction of such a closed theory.
\section{Self-reference}
The key trick to such a complete theory is self-reference:
a theory that can study its own theory.
The hope early last century was that logic would be such a
meta-mathematical theory: by formalizing proofs, you could
``meta-solve'' math.
Unfortunately, it turns out that ``formal proofs'' are really formalizing computation.
You get a beautiful and important theory,
but not one that answers itself or the math you'd hoped.
As this example shows, not all self-reference yields structured theories!
As G\"odel's incompleteness theorem (and relatedly, Tarski's indefinability theorem
and the undecidability of the halting problem) demonstrates, certain self-references
yield incomplete theories, and self-referential computer programs are very difficult
to analyze. Thus self-reference is necessary but not sufficient for a closed theory.
More hopeful is the example of representable functors,
especially in the guise of moduli spaces and generalized
(co)homology theories, and spectra and further developments.
That is, if the solution to a question is a functor, which
is represented, you can study the representing object
(and thus the solution)
\emph{in the same theory that you started with}.
So for instance, the point of introducing stacks is that many basic
questions have as their answer a moduli space, which in general needs
to be a stack -- but then you can study the moduli space as
itself an algebro-geometric object!
So the deeper point of categorical closure (making functors represented)
is (from this POV) to produce a complete theory.
One sees this also in algebraic topology and formal algebra:
at first you develop algebraic invariants of topological
spaces, but conversely formal algebra (higher categories) is
essentially topological, and so one hopes that by having
a suitable theory/category, the objects and their
theory/invariants can be in the same context.
You can think of this as Hegelian: topological spaces are the thesis,
algebraic invariants are the antithesis, and they come together in
a unified theory, which is the synthesis.
More generally, mutual reference (potentially) leads to self-reference
via such a Hegelian process.
More flippantly, one might remark on this closure coming
from a sort of Galois connection between ``theories'' and
``problems'' (a theory maps to the problems it raises; a
problem maps to the theories that solve it; I say
``sort of Galois'' because you need to iterate to get closure;
one might call this a Galois pre-connection). This sort of statement
is not that speculative: I hear that Martin Hyland says that
``syntax and semantics'' are adjoint.
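Concretely (a sketch, taking ``theories'' and ``problems'' as hypothetical
posets $A$ and $B$): an antitone Galois connection is a pair of
order-reversing maps $F\colon A \to B$ and $G\colon B \to A$ with
\[
b \le F(a) \iff a \le G(b),
\qquad\text{whence}\qquad
F \circ G \circ F = F, \quad G \circ F \circ G = G,
\]
so $G \circ F$ and $F \circ G$ are closure operators, and iteration
stabilizes after one round trip; presumably the ``pre-'' above marks
exactly the failure of this one-step stabilization for theories and problems.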
\section{A ``grand unified theory of mathematics''?}
By the above ``grand unified theory of mathematics'',
I don't mean a single theory that encompasses all of math.
I suspect that math \emph{on the whole} is too sprawling to be
well-encompassed by a single theory, but I think that the
\emph{core} of math \emph{is} unified.
The hope is that this core of math would be like particle physics,
and that rather than continuing to find deeper and subtler theories ad infinitum,
we will arrive at some ``elementary particles and forces''
(in this case, elementary concepts and theorems)
from which all else (within this theory) follows.
This would be a very \emph{satisfying} end, with a bow on top.
This is more visible (and possible?) in very categorical fields
(ones that nonetheless have a basis in ``general interest'' math!
i.e., are not completely dry abstractions),
like algebraic geometry and algebraic topology.
Note that this requires restricting not just the objects of study,
but the questions you ask: for instance there are combinatorial questions
provoked by basic group theory (say, of the symmetric group)
that might not fit into The Big Theory.
If pressed to name such a hypothetical theory, I'd quip
``non-commutative infinite dimensional algebraic geometry''
(should I add ``$p$-adic'' or ``adelic'' and ``$n$-categorical''?),
but this is clearly in jest -- I have no idea.