The limits of formal reasoning
==============================
Allin [1383 of 8/3] writes, perhaps in a similar vein to Gil,
perhaps not:
"we can be sure that we have taken everything relevant
properly into account only if we can produce a general
mathematical treatment."
Would it were so.
In fact there *is* no way to take into account 'everything'
relevant, and modern logic has proved it.
From the simultaneous assumption, it can be proved that under
common circumstances, infinitely large amounts of value can
appear out of nowhere. But this cannot be deduced from within
the simultaneist system. It can only be discovered by confronting
this system with data that satisfy reality but violate the model's
assumptions.
That is, formulating the model actually excludes the very data
that show the model doesn't work. It does not take into
account everything relevant because, when it was created, it
excluded whatever it took to be irrelevant.
The simplest way both to discover its contradictions and to
prove them is to produce a set of numbers that shows them. I've
done this; I'm still waiting for an answer.
The fact that simultaneism cannot produce an answer to its
own contradictions even when confronted with them does not
exactly raise high hopes that it will discover these
contradictions aided only by its models.
What you really do when you create a model is to test
the internal consistency of your own thinking. When Allin
says 'everything relevant' a slip of the keyboard is involved;
what he actually means is 'everything relevant to *me*'.
What a model lets you do is externalise your own thought,
and this is not a non-useful thing to do. But it is not
the same thing as analysing reality. That is my point; once
you cross the narrow bridge from your equations to the
real world and come back convinced that your equations
*are* the real world, you are lost in a virtual reality
from which you never return.
The incompleteness of second-order predicate calculus
=====================================================
Modern model theory has proved, unfortunately, that we
*cannot* be sure we have taken everything into account even
if we have produced a general mathematical treatment.
Above a certain minimum degree of complexity, formal models
are 'incomplete': it is impossible to enumerate exhaustively
either their tautologies or their contradictions. Most reasonable
models contain contradictions that will *never* come to light
unless we happen to stumble across them. Therefore, yes, we *can*
miss something relevant *even though* we have produced a model.
This is part of the reason why, for example, it is very
hard to guarantee that a computer programme will work.
So it is a very practical and commonplace issue.
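A minimal illustration of the point, in Python (my own construction, nothing to do with the economics at issue): a rule that sounds right and survives casual deduction, whose error comes to light only when confronted with data.

```python
# Two candidate implementations of the Gregorian leap-year rule.
# The first encodes a rule that 'sounds right'; only data exposes it.

def is_leap_naive(year: int) -> bool:
    """'Every fourth year is a leap year' -- plausible, but incomplete."""
    return year % 4 == 0

def is_leap(year: int) -> bool:
    """The full rule: century years are exceptions, and every
    fourth century is an exception to the exception."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A handful of data points suffices where pure deduction stalled:
for year in (1996, 2000, 1900):
    print(year, is_leap_naive(year), is_leap(year))
# 1900 is where the two rules part company: it is divisible by 4,
# yet it was not a leap year.
```

No amount of re-reading the naive rule reveals the flaw; one awkward data point does.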
Like Allin, I don't feel really satisfied until I have
produced a model. But I think this is a dangerous feeling;
like fire, a good servant but a bad master. In my view
a model 'works' when it establishes a general result;
this saves the possibly infinite work of enumerating all
the special cases. But if you go beyond this to argue
that the model is valid beyond the specific question
you asked it when you set it up, then you will fall
into error. Models, like computers, answer the question
that is put to them, no more and no less.
This is why, in my view, the old tried-and-tested scientific
procedure of checking models against data, whether real
data or thought-experiment data designed to test a specific
scenario, likely or merely possible, is still the best
method for finding contradictions.
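A thought-experiment of this kind can be tiny. The toy below is my own construction (not the simultaneist numbers I referred to earlier): a 'model' whose equations say a total is conserved, and a few data points showing that it is not.

```python
# A toy 'model' (my own construction): each account sends a tenth
# of its balance to its neighbour.  On paper the total is conserved;
# in fact the sender debits a rounded-down tenth while the receiver
# credits a rounded-up tenth -- a slip invisible in the equations,
# visible at once in the numbers.

def redistribute(accounts):
    out = list(accounts)
    n = len(accounts)
    for i, bal in enumerate(accounts):
        out[i] -= bal // 10                # debit: tenth rounded down
        out[(i + 1) % n] += -(-bal // 10)  # credit: tenth rounded up
    return out

start = [105, 99, 250]
after = redistribute(start)
print(sum(start), sum(after))  # 454 versus 456: value from nowhere
```

No deduction from the conservation 'law' itself could expose the leak; a single set of numbers does.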
Faced with a set of numbers that expose a contradiction, a
logician *must* ask where the contradiction comes from. If
the logician relied on deductive logic alone, s/he might
never come across the contradiction, and would go on believing
in a disprovable theory.
Though a disconcertingly large number of people seem willing to
rely on disprovable theories even when the contradictions in
them *have* been exposed.
With numbers.
Alan