\chapter{Types of proofs, types of definitions}
A proof shows that a statement is true (that it follows from
the assumptions by valid rules of inference).
The most common statements and proofs in math are:
\begin{description}
\item[implications] $A \implies B$; ``If $A$, then $B$''
\item[quantifiers] Statements involving $\forall, \exists$
\end{description}
Implication is the most basic:
the logical steps of any proof are themselves a meta-level version of implication.
\section{Direct proofs}
Direct proofs are the cleanest;
to prove $A \implies B$,
you prove $A \implies C_1 \implies \dotsb \implies C_n \implies B$.
Be careful of a distinction:
by the above, I mean that each step follows \emph{from all previous steps}.
Clearest is if the proof is step-by-step:
each step follows from \emph{the immediately preceding one},
without needing to remember all the prior ones.
Think of the topology of this graph (of dependencies);
if it's complicated, try to break out separate lemmata!
(so that each step relies on the previous and a lemma or two)
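As a toy illustration, here is a direct chain in Lean 4 (a sketch with hypothetical names, not from the text): an even number has an even square, and each step uses only the immediately preceding fact.

```lean
-- Direct proof: n even ⟹ n = 2k ⟹ n·n = 2·(k·(2k)) ⟹ n·n even.
example (n : Nat) : (∃ k, n = 2 * k) → ∃ m, n * n = 2 * m :=
  fun ⟨k, hk⟩ => ⟨k * (2 * k), by rw [hk, Nat.mul_assoc]⟩
```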
\subsection{Iff: equivalence}
There are two natural ways to prove an equivalence, $A \iff B$:
\begin{enumerate}
\item Prove equivalence at each step:
$$A \iff C_1 \iff \dotsb \iff C_n \iff B$$
\item Prove one way, $A \implies D_1 \implies \dotsb \implies D_n \implies B$,
then the other, $B \implies E_1 \implies \dotsb \implies E_n \implies A$
(better: write the return chain with left arrows, so it too reads from $A$ to $B$).
\end{enumerate}
A chain of equivalences is more satisfying, but not always practical.
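Both styles can be sketched in Lean 4 (hypothetical propositions; a sketch, not from the text):

```lean
-- Style 1: a chain of equivalences, composed step by step.
example {A C₁ C₂ B : Prop} (h₁ : A ↔ C₁) (h₂ : C₁ ↔ C₂) (h₃ : C₂ ↔ B) :
    A ↔ B := (h₁.trans h₂).trans h₃

-- Style 2: prove the two implications separately, then combine.
example {A B : Prop} (forward : A → B) (backward : B → A) : A ↔ B :=
  ⟨forward, backward⟩
```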
\section{Indirect proofs: Contraposition and Contradiction}
Indirect proofs are any proofs that start with ``assume $\lnot B$''.
The main distinction one makes is between contraposition and contradiction.
\subsection{Contraposition}
Instead of proving $A \implies B$, prove $\lnot B \implies \lnot A$:
you just re-write the statement in an equivalent form and prove that.
This is completely legitimate:
you prove $\lnot B \implies C_1 \implies \dotsb \implies C_n \implies \lnot A$.
Some statements are easier to state and use one way,
but easier to prove the other;
e.g., Fermat's theorem on stationary points
(a local extremum of a differentiable function is a stationary point).
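The passage from the contrapositive back to the original statement can be sketched in Lean 4 (hypothetical propositions; the step back uses classical logic):

```lean
-- Contraposition: to prove A → B, it suffices to prove ¬B → ¬A.
example {A B : Prop} (h : ¬B → ¬A) : A → B :=
  fun ha => Classical.byContradiction fun hnb => h hnb ha
```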
%%%%%%%%%%%%%%
\subsection{Contradiction}
Contradictions are more controversial:
some people love 'em (like Hardy), especially for their drama and cleverness,
while others reject them.
A proof by contradiction is when you say
``Suppose $A$ and $\lnot B$. Then x, y, z, contradiction!''
This is a bit more fraught: it uses
$$\lnot(A \wedge \lnot B) \iff (A \implies B),$$
which is rejected in constructive logic.
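The asymmetry can be made precise in Lean 4 (a sketch with hypothetical propositions): one direction of the equivalence needs classical logic, the other is constructive.

```lean
-- Classical direction behind proof by contradiction:
-- from ¬(A ∧ ¬B), conclude A → B.
example {A B : Prop} (h : ¬(A ∧ ¬B)) : A → B :=
  fun ha => Classical.byContradiction fun hnb => h ⟨ha, hnb⟩

-- The converse is constructive:
example {A B : Prop} (h : A → B) : ¬(A ∧ ¬B) :=
  fun hab => hab.2 (h hab.1)
```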
A real contradiction uses \emph{both} $A$ and $\lnot B$ at some point.
Beware faux contradictions:
``assume $A$ and $\lnot B$; then
$\lnot B \implies C_1 \implies \dotsb \implies C_n \implies \lnot A$,
but that's a contradiction.''
\emph{You never used the assumption $A$!}
This is actually a contraposition, and is clearer written that way.
The problem with contradiction is that you're carrying falsehoods around:
you're not just working with true statements.
In \emph{reductio ad absurdum}, some distinguish the case where
$A$ and $\lnot B$ yield \emph{any} contradiction
($A \wedge \lnot B \implies \bot$, thus $\lnot(A \wedge \lnot B)$)
from the case where they yield $\lnot A$ specifically.
The key point of contradiction is that you're carrying around an incoherent object
(e.g., in Fermat's last theorem, a putative solution that in the end cannot exist).
\subsection{Direction of argument}
It's easiest if the argument always goes in the same direction;
it's confusing if you prove
$A \implies C_1 \implies C_2 \implies \dotsb \implies C_n$
and $\lnot B \implies D_1 \implies D_2 \implies \dotsb \implies \lnot C_n$,
or switch back and forth.
\section{Exhaustion / case by case analysis}
This is kind of boring:
it's preferable to give uniform answers.
[Sometimes it's necessary, if your answer space does have distinct cases,
e.g., classifying finite groups, say of a given order.]
An amusing example:
\begin{itemize}
\item assume the Riemann hypothesis, and prove the result;
\item assume the Riemann hypothesis is false, and also prove the result!
\end{itemize}
Another example:
a statement may be true for finite fields and for infinite fields
(or in positive characteristic and characteristic zero),
but require different proofs! (as far as we know)
\section{Quantifiers}
$\forall x\, A(x)$, or $\exists x\, A(x)$,
or $\lnot\forall x\, A(x)$, or $\lnot\exists x\, A(x)$.
The last two can be re-written in terms of the first two
($\lnot\forall = \exists\lnot$: the witness is called a counter-example;
there is no usual name for $\lnot\exists = \forall\lnot$).
Universal statements can be re-written as implications
(``if $x \in X$, then $A(x)$'', where $X$ is our universe),
and existential ones are proved by producing an example
(``take $x = x_0$; then x, y, z, $A(x_0)$!'').
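The two quantifier negations behave differently, which a Lean 4 sketch makes visible (hypothetical predicate $A$; not from the text): pushing $\lnot$ through $\exists$ is constructive, while extracting a counter-example from $\lnot\forall$ needs classical logic.

```lean
-- ¬∃ = ∀¬ is constructive:
example {α : Type} {A : α → Prop} (h : ¬ ∃ x, A x) : ∀ x, ¬ A x :=
  fun x hA => h ⟨x, hA⟩

-- ¬∀ = ∃¬ (producing a counter-example) needs classical logic:
example {α : Type} {A : α → Prop} (h : ¬ ∀ x, A x) : ∃ x, ¬ A x :=
  Classical.byContradiction fun hne =>
    h fun x => Classical.byContradiction fun hnA => hne ⟨x, hnA⟩
```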
\section{Definitions and Characterizations}
Recall the duality between theories and objects:
theories describe (possible) objects, while objects are examples of theories.
FIXME: matroids are a good example of a concept
that doesn't have a \emph{standard} definition.
\subsection{Axiom systems and theories}
Formal math starts with a system of axioms, and deduces from there.
From a more abstract point of view (like universal algebra),
this is too concrete: it's picking a generating set for a theory.
That is, the \emph{theory} is more basic than the \emph{axioms}:
a theory may admit different axiomatizations.
For example, the conventional definition of a group is with
a nullary, a unary, and a binary operation (unit, inverse, and multiplication),
but you can also define it with just a binary operation of multiplication
(and axiomatize unit and inverse existentially: $\exists e \in G$ s.t.\ \dots),
or with a binary operation of division ($g/h$) and different axioms.
This is not necessarily particularly useful,
but it emphasizes that the theory, the object of study, is more fundamental
than any particular axiom system.
Of course, to deduce, you use particular axioms
(just as to compute in linear algebra, you pick a basis,
otherwise you can't get a handle on your objects).
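The two axiomatizations of groups mentioned above can be sketched in Lean 4 (a hypothetical, minimal sketch; names like `Group₁` are illustrative, not a standard library API):

```lean
-- Conventional signature: multiplication, unit, inverse as operations.
structure Group₁ (G : Type) where
  mul : G → G → G
  one : G
  inv : G → G
  assoc   : ∀ a b c, mul (mul a b) c = mul a (mul b c)
  one_mul : ∀ a, mul one a = a
  inv_mul : ∀ a, mul (inv a) a = one

-- Multiplication only; unit and inverses are asserted, not named.
structure Group₂ (G : Type) where
  mul : G → G → G
  assoc : ∀ a b c, mul (mul a b) c = mul a (mul b c)
  exists_unit : ∃ e, (∀ a, mul e a = a) ∧ ∀ a, ∃ b, mul b a = e
```

The two structures present the same theory; which signature is convenient depends on what you want to compute or deduce.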
\subsection{Axioms as implicit descriptions}
Axioms are an \emph{implicit} description of a theory:
they give a list of properties that objects satisfy
(just as a system of equations gives a list of equations
that a solution satisfies).
Contrast this with explicit descriptions of a theory:
``Here is the object (or objects) that I am studying.''
You can give such a description in some cases
(when you have a classification theorem),
like vector spaces, surfaces, or simple Lie algebras,
and you can often work from the description,
rather than from axioms.
A very frequent technique is to have some concrete objects,
and then abstract them by finding axioms that they satisfy,
and then study everything that satisfies those axioms
\dots and see if the axioms characterize the object
(like defining equations), or what new kind of theory you get.
Similarly, you can take axioms and weaken them and see what
new theory you get, and so on: it's a back and forth.
\subsection{``The following are equivalent\dots''}
A genre of theorem is the Definition/Theorem\footnote{Some use portmanteau words
like ``Defineorem''.}: ``The following (conditions) are equivalent,
and we call something that satisfies them an XXX''
(examples abound in Atiyah--Macdonald,
or consider for instance a formally real field).
Formalists might scoff at this:
\emph{one} characterization must be the definition,
and the others must be theorems.
There is something to this (what is your bottom-line definition?),
but it is too harsh: ``TFAE'' \emph{is} a legitimate theorem,
and you're then free to name the described theory: this again emphasizes
the independence of a theory from any \emph{particular} axiomatization.
Sometimes the proofs work by showing the equivalence of the characterizations in turn.
Most commonly, they are arranged in order such that 1 obviously implies 2,
2 obviously implies 3, etc., and then you only need to prove that $n$ implies 1.
In some cases the implications are more complicated,
and it really pays to have a digraph showing the implications,
though I've never seen this done (the reader implicitly draws it themselves).
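The cyclic arrangement can be sketched in Lean 4 (hypothetical propositions standing in for the characterizations): from a cycle of implications, every pairwise equivalence follows by composing around the cycle.

```lean
-- TFAE via a cycle: P → Q → R → P gives all the equivalences.
example {P Q R : Prop} (h₁₂ : P → Q) (h₂₃ : Q → R) (h₃₁ : R → P) :
    (P ↔ Q) ∧ (Q ↔ R) ∧ (R ↔ P) :=
  ⟨⟨h₁₂, fun hq => h₃₁ (h₂₃ hq)⟩,
   ⟨h₂₃, fun hr => h₁₂ (h₃₁ hr)⟩,
   ⟨h₃₁, fun hp => h₂₃ (h₁₂ hp)⟩⟩
```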
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[constructive and existential]
Constructivism is not a \emph{better} or \emph{more correct} math;
it's just a different paradigm, yielding different theories.
You can do constructive analysis, etc.,
and it can yield some insights.