Re: [OPE] The micro dimension of the Financial Crisis

From: Paul Cockshott <>
Date: Fri Nov 14 2008 - 05:52:38 EST

At the probabilistic political economy conference in July, Emanuel
Farjoun gave a talk in which he made the very
interesting observation that of the infinity of probability
distributions that are possible in principle, only
a very small number ever appear in reality: the exponential family
(normal, binomial, Gamma, etc.), the power-law
distributions, and a few others.
The reason it is important to decide whether the modelling of
financial risk uses exponential distributions
or power-law distributions is that they give quite different predictions
of the likelihood of rare events.
In a negative exponential distribution the likelihood of extreme events
falls off very steeply, so that
when you integrate over them they become insignificant. With a power law,
on the other hand, the
likelihood of extreme events is much higher. Thus if you model your
value-at-risk calculations
using a negative exponential rather than a power-law distribution, you
will seriously underrate
the likelihood of extreme events occurring.

This is true whether one is designing structures for the North Sea oil
industry or dealing
with the probability of withdrawals from a bank.
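The point about the two tails can be illustrated with a short sketch (not from the original post): it compares P(X > x) for an exponential distribution and a Pareto (power-law) distribution calibrated to the same mean of 1, using assumed parameters (rate 1 for the exponential; shape alpha = 3, scale xm = 2/3 for the Pareto, since alpha*xm/(alpha-1) = 1).

```python
import math

def exp_tail(x, rate):
    """P(X > x) for an exponential distribution with the given rate."""
    return math.exp(-rate * x)

def pareto_tail(x, alpha, xm):
    """P(X > x) for a Pareto distribution with shape alpha and scale xm."""
    return (xm / x) ** alpha if x >= xm else 1.0

# Both distributions have mean 1, yet their tails diverge sharply:
# the exponential tail decays like e^(-x), the Pareto tail like x^(-3).
for x in [2, 5, 10, 20]:
    e = exp_tail(x, 1.0)
    p = pareto_tail(x, 3.0, 2.0 / 3.0)
    print(f"x={x:2d}  exponential tail={e:.2e}  power-law tail={p:.2e}  ratio={p / e:.1e}")
```

Even with these illustrative parameters, at x = 20 the power-law tail probability is several orders of magnitude larger than the exponential one, which is the gap a value-at-risk model built on the wrong family would miss.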

Jurriaan Bendien wrote:
> Of course you can model human behaviour mathematically, to the extent
> that you can assign a measurable quantity to an aspect of behaviour,
> and assess which variables have the most weight in the given
> situation. There is a pre-mathematical problem there of categorisation
> of the units and variables which you want to count and measure, but
> you can devise workable schemata which capture most salient cases,
> through a taxonomy of the subject matter as a whole, aided by
> probabilistic and exploratory reasoning about the likely weighting of
> different sorts of things.
> But the real point is that most times, mathematical modelling is
> combined with non-mathematical inferences, assumptions and judgements
> about human behaviour. To create a model, you need to import
> assumptions into the model, and select variables deemed important -
> you have to assume something, that's the thing. The crucial question
> is then, where do we get them from? They could be derived from
> background theory about causal processes, from ideological values, or
> from experience. A "sophisticated empiricism" indeed has a sort of
> feedback loop whereby assumptions (including categorical distinctions)
> are adjusted according to new data that become available, including
> results of the model's application (though some things must stay
> constant).
> The main "fetters" on mathematical forecasting are not techniques, but
> (1) private property and competition, which block the formation of
> predictive ability and cooperation in important respects, (2) the
> effect of the forecast on the subject matter it seeks to predict, (3)
> the very reason why you do the forecasting in the first place.
> If rich people pay big money to predict the future value of their
> capital, the forecaster is likely to ask certain questions but ignore
> others. He is not going to pursue those questions because they fall
> outside the scope of inquiry, or outside the frame of reference that
> informs the very purpose why he is doing it. The young Marx already
> realised this, and literally wrote that it was the framing of the
> questions themselves that was the chief difficulty, and therefore
> that we ought to analyze the questions thoroughly - and in this
> respect the professional mathematician is ipso facto no better
> positioned than a Dutch lout dabbling in social science or a physicist
> or a counsellor.
> So I feel the question is not at all "whether mathematics has all the
> answers, or superior answers", but (1) what kinds of problems people
> try to solve and why, (2) how you can usefully combine mathematical
> reasoning with non-mathematical insights to provide superior
> forecasting, (3) whether you can actually publish what you find.
> Alfred Korzybski pointed out that one of the biggest problems we have,
> is when we start to confuse the concepts and language we use to depict
> a reality, with the reality itself. One of his phrases was: "the map
> is not the territory" (he did not mean a terrible Tory; he meant that
> you have to be able to tell the difference between the representation
> of reality, and reality itself. The map is only an aid to "reading"
> reality, an orientation device).
> Since, among other things, we make part of our own reality as well as
> having part of our reality made for us, and since this dialectic is
> mediated by cognition and representation, there is an irreducible
> margin of error involved which we can only reduce with the aid of
> creativity and inventiveness.
> An intellectual who is engrossed in his intellectual pursuits,
> assuming and devising distinctions and testing them out, is
> particularly susceptible to Korzybski's problem, simply because his
> modus operandi is likely to be one-sided - to work out his concepts,
> he has to ignore a whole bunch of things, and to understand what's out
> there, his concepts influence what and how he will look at it. It
> leads to subject-object confusions of all kinds. At the most extreme
> level, Hegel proposes an objective idealism according to which the
> empiria are only the reflections or expressions of eternal categories,
> and Plato proposes a theory of the "forms of knowledge" which (he
> says) we "recollect".
> But this does not really solve Immanuel Kant's problem of chasm
> between the noumena and phenomena, except for the insight that the two
> are parts of a unity, a whole. A better (non-mystical) solution is to
> acknowledge that the problem is real, and then look for practical ways
> to overcome it to the extent that you can. This is not primarily a
> problem of epistemology or ontology, but of learning efficiency with
> respect to a goal or purpose, i.e. what is really concluded from
> success and error. For the slow learner, the chasm between noumena and
> phenomena is wide; for the fast learner, it shrinks. In this respect,
> pragmatism is an exceedingly simple theory: you simply look at what
> has practically worked well or failed badly so far, and you
> pursue what had the most success. But for a true forecaster this
> approach is not sufficient, because he wants to know exactly why it
> worked or didn't work, in order to form an understanding that will
> stand the test of time, so you get fewer unexpected outcomes and better
> orientation out of it.
> Mr Greenspan could always say, I suppose, well my theory worked for
> two decades and that's not bad going. He has a point, in the long run
> we're all dead, you give it your best meantime. But the true
> forecaster seeks a frame of reference which can explain both why the
> theory seemed to work for two decades, and why it failed thereafter
> rather spectacularly. Kuhn talked about paradigms, Lakatos talked
> about scientific dogma, Laudan talked about the social environment of
> science, but this is just to say there also seem to be exogenous
> limits to learning efficiency. But anyhow, one thing you can be sure
> of is that if people need to do what they can to survive, they'll
> learn "something" very, very fast.
> There are more chess manoeuvres possible than there are stars in the
> sky - though it may be difficult to beat Deep Blue as an individual,
> you can get pretty close.
> Jurriaan
> Would you believe in a love at first sight?
> Yes I'm certain that it happens all the time.
> What do you see when you turn out the light?
> I can't tell you, but I know it's mine.
> Oh I get by with a little help from my friends,
> Mmm I get high with a little help from my friends,
> Oh I'm gonna try with a little help from my friends

ope mailing list
Received on Fri Nov 14 05:54:43 2008

This archive was generated by hypermail 2.1.8 : Wed Dec 03 2008 - 15:07:39 EST