Feb 17, 2012

Models are dangerous!

We need to be aware of what we are doing when assigning probabilities - most people tend to forget that:
  • they are mistaking the model for reality, and, even worse,
  • they are skipping the reality check of the model at suitable intervals.
And a bad model may be worse than no model at all.
Of course we can assign probabilities to our guesses about future developments and use scenarios and decision trees. However, that assumes we know the future states of the world in sufficient detail to make relative or absolute judgement calls about their likelihood.
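To see what that assumption amounts to, here is a minimal sketch in Python (the scenarios, probabilities, and payoffs are all hypothetical, invented for illustration): an expected-value calculation over three assumed states of the world. Every number in it is an input we claim to know in advance.

    # Minimal sketch: expected value over an assumed, fixed set of future
    # states of the world. All scenarios, probabilities, and payoffs are
    # hypothetical inputs - the calculation is only as good as that list.
    scenarios = {
        "boom":       {"prob": 0.3, "payoff": 120.0},
        "stagnation": {"prob": 0.5, "payoff": 10.0},
        "crisis":     {"prob": 0.2, "payoff": -80.0},
    }

    # Sanity check: the assigned probabilities must exhaust the state space.
    assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

    expected_payoff = sum(s["prob"] * s["payoff"] for s in scenarios.values())
    print(expected_payoff)  # 25.0 - meaningful only if the scenario list is complete

The assert line is the quiet giant here: it encodes the claim that these three scenarios are all there is.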


Now, as Knight, Popper (and, building on him, Soros), and Taleb argue, the future is open and we cannot know it 'completely' enough to rely on our models.


Walking along the Danube we (may) feel confirmed in our (locally correct) belief in white swans by every observation of another white swan around the next bend - and we turn this into a belief that only white swans exist - until we meet a black swan from Australia. The epistemic problem is that we cannot extrapolate from a limited observation / induction base to a 'totality' of states of the world.
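One classical way to formalize this (a sketch only - the argument above does not depend on it) is Laplace's rule of succession: after n observed white swans and no black ones, the estimated probability that the next swan is white is (n + 1) / (n + 2). It creeps towards 1 but never reaches it, and it says nothing at all about the universal hypothesis 'all swans are white':

    # Sketch: Laplace's rule of succession after n white swans and 0 black ones.
    # P(next swan is white) = (n + 1) / (n + 2) - never 1, however large n gets.
    for n in (10, 100, 10_000):
        print(n, (n + 1) / (n + 2))
    # 10 -> 0.9167, 100 -> 0.9902, 10000 -> 0.9999
    # A single black swan shatters the universal claim, while no finite run
    # of white swans could ever have established it.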


We often cannot know the future 'completely', or sufficiently well to make the assumptions we are psychologically and institutionally 'required' to make - required because people (especially in large and political organizations without vision and/or strong leaders) cannot remain confident once they have to admit to (brutal epistemic) uncertainty.


Of course we can adapt our (inferred) states of the world and probabilities as we go along and learn, but that requires changing the model on the fly - however, what is our justification for the model if we have to update it frequently (in the large, political, visionless organizations above)?
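Updating probabilities within a model is the cheap part - here is a minimal Bayesian sketch (states and likelihoods are hypothetical). What it also shows is the limit: updating only redistributes weight among the states the model already contains; a state that was never listed keeps probability zero forever, which is why changing the model itself is a different and harder move than updating it.

    # Minimal Bayesian updating sketch (hypothetical states and likelihoods).
    prior = {"growth": 0.6, "recession": 0.4}        # assumed state space
    likelihood = {"growth": 0.2, "recession": 0.7}   # P(observed bad quarter | state)

    def update(prior, likelihood):
        # Bayes' rule: posterior proportional to prior * likelihood, renormalized.
        posterior = {s: p * likelihood[s] for s, p in prior.items()}
        total = sum(posterior.values())
        return {s: p / total for s, p in posterior.items()}

    print(update(prior, likelihood))
    # -> growth ≈ 0.3, recession ≈ 0.7 - but a state that never made it into
    # the prior (say, 'currency collapse') stays invisible whatever we observe.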


If we are dealing with 'closed', simple systems that follow (or can be approximated by) linear models, a probability approach is fine: we can assign probabilities and measure risks. If things get more complex and complicated, we run into the epistemic limits described above: we face uncertainty and can no longer assign probabilities - as Knight argued.


Here is a link to Knight's book: Risk, Uncertainty and Profit
