Feb 17, 2012

Models are dangerous!

We need to be aware of what we are doing when assigning probabilities - most people seem to forget that:
  • they are mistaking the model for reality, and even worse,
  • they are forgetting to reality-check the model at suitable intervals,
  • and a bad model may be worse than no model at all.
Of course you can assign probabilities to your guesses about future developments and use scenarios and decision trees. However, that assumes we know the future states of the world in sufficient detail to make relative or absolute judgement calls about their likelihood.

Now, as Knight, Popper (and based on him Soros), and Taleb argue, the future is open and we cannot know it 'completely' enough to rely on our models.

Walking along the river Danube we (may) feel confirmed in our (locally correct) belief in white swans by every observation of another white swan around the next bend - which we turn into a belief in the existence of only white swans - until we meet a black swan from Australia. Herein lies an epistemic problem: we cannot extrapolate from a limited observation / induction base to a 'totality' of states of the world.

We often cannot know the future 'completely' or sufficiently to make the assumptions we are psychologically and institutionally 'required' to make, because people (esp. in large and political organizations, without vision and / or strong leaders) cannot remain confident in the face of having to admit to (brutal epistemic) uncertainty.

Of course we can adapt our (inferred) states of the world and probabilities as we go along and learn, but that requires changing the model on the fly - and what justifies the model if you have to update it frequently (in the large, political, visionless organizations above)?

If we are dealing with 'closed', simple systems that follow (or can be approximated by) linear models a probability approach is fine, we can assign probabilities and measure risks. If things get more complex and complicated we run into the epistemic limits described above and we face uncertainty and cannot assign probabilities anymore - as Knight argued.

Here is a link to Knight's book: Risk, Uncertainty and Profit

Strategy and Finance on Moving Landscapes

When implementing a new strategy, which time horizon do you set and how much negative cash flow are you prepared to accept until the realization of the outcome? 

That is: how patient are 'investors', how deep are their pockets, and how good is management at selling its story?

In terms of time horizons and deep pockets I like the concept of fitness landscapes (which has probably not been interpreted 'right' in its original biological context): organizations and strategies have to move through hills and valleys of relative fitness (another concept with some issues), or rather have to move on a landscape that is itself moving.

Nevertheless, as a metaphor with a considerable abstract and mathematical apparatus attached, it seems useful. For instance, one could argue that non-monetary factors (that is, factors hard to value financially or not associated with immediate financial returns) can be brought into such a conceptualization as impacting the 'fitness' of an organization or strategy (composed of financial and non-financial factors).

Growing niches for products / services, which can build on their accumulated economies of scale and scope, fit nicely into this conceptualization as well.
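As a toy illustration of moving on a moving landscape (everything here - the bit-string encoding of a 'strategy', the random fitness values, the size of the perturbation - is my own assumption, not part of any canonical model), one can sketch greedy hill climbing on a random fitness landscape that is then perturbed:

```python
import random

random.seed(3)

N = 8  # number of binary strategy 'traits' -- an assumption for illustration

def make_landscape():
    """Assign a random fitness value to every trait configuration."""
    return {c: random.random() for c in range(2 ** N)}

def neighbors(config):
    """Configurations differing in exactly one trait (one bit flipped)."""
    return [config ^ (1 << i) for i in range(N)]

def hill_climb(landscape, config, steps=100):
    """Greedy local search: move to the best neighbor while fitness improves."""
    for _ in range(steps):
        best = max(neighbors(config), key=landscape.get)
        if landscape[best] <= landscape[config]:
            break  # local peak reached
        config = best
    return config

landscape = make_landscape()
start = random.randrange(2 ** N)
peak = hill_climb(landscape, start)

# A 'moving' landscape: perturb all fitness values; the old peak need not remain one,
# so the organization has to climb again from where it stands.
landscape = {c: f + random.uniform(-0.5, 0.5) for c, f in landscape.items()}
new_peak = hill_climb(landscape, peak)
```

The point of the sketch is the last two lines: once the landscape shifts, the previously reached peak is just a starting point again, which is where time horizons and deep pockets come in.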

Feb 14, 2012

Complexity, Evolution and Near decomposability

I have pointed out the relevance of (nearly) decomposable systems, modularity and hierarchy for dealing with complex systems before. Some of you had asked for a summary at some point.

I would like to know what you think of the concept and where you deem it useful for our discussions. 

  • It is Simon's account of how variety generation can produce structured (organized) complex results relatively quickly by employing hierarchical relationships in evolution.
  • It is potentially interesting wrt how the costs of interaction between interfaced systems composed of (sub-)systems can be explained and measured.
  • It provides a potential basis for the measurement of "complexity" and risk based on the interaction strengths between systems.

Here is a summary of one of Simon's core articles on the evolution of complex systems based on hierarchy and nearly decomposable systems.
It summarizes his thinking on this subject, inspired by natural systems and applied to social systems.

His view of the evolution of complex structures is based on the following assumption:

If the existence of a particular complex form increases the probability of the creation of another form just like it, the equilibrium between complexes and components could be greatly altered in favor of the former.
This process (obviously) makes evolution of complex structures from building blocks in hierarchic structures much faster than the independent evolution of complex structures. The result of the process of evolution “employing” such an approach is near-decomposability (again – as it is based on simpler elements that are interacting more strongly internally than externally).

Simon defines near decomposability as follows:

(a) in a nearly decomposable system, the short-run behavior of each of the component subsystems is approximately independent of the short-run behavior of the other components; 
(b) in the long run, the behavior of any one of the components depends in only an aggregate way on the behavior of the other components.

He defines hierarchy as follows:

By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem. 

And he qualifies the arbitrariness of the measurement scale of a system wrt the measurement / classification of its basic elements (themselves defined as systems) by saying:

In most systems in nature, it is somewhat arbitrary as to where we leave off the partitioning, and what subsystems we take as elementary.

He further qualifies it by comparison with social organizations (which he described before):

Etymologically, the word "hierarchy" has had a narrower meaning than I am giving it here. The term has generally been used to refer to a complex system in which each of the subsystems is subordinated by an authority relation to the system it belongs to. More exactly, in a hierarchic formal organization, each system consists of a "boss" and a set of subordinate subsystems. Each of the subsystems has a "boss" who is the immediate subordinate of the boss of the system. We shall want to consider systems in which the relations among subsystems are more complex than in the formal organizational hierarchy just described. We shall want to include systems in which there is no relation of subordination among subsystems.

which relates to 

NEARLY DECOMPOSABLE SYSTEMS. We can distinguish between the interactions among subsystems, on the one hand, and the interactions within subsystems. ... As a second approximation, we may move to a theory of nearly decomposable systems, in which the interactions among the subsystems are weak, but not negligible.

THE DESCRIPTION OF COMPLEXITY. The fact, then, that many complex systems have a nearly decomposable, hierarchic structure is a major facilitating factor enabling us to understand, to describe, and even to "see" such systems and their parts.

The definition of near decomposability can be made more formal (with the system ordered such that strongly interacting elements are placed near each other):

This article treats of systems that are nearly decomposable--systems with matrices whose elements, except within certain submatrices along the main diagonal, approach zero in the limit. Such a system can be represented as a superposition of (1) a set of independent subsystems ... and (2) an aggregate system having one variable for each subsystem. ...

From the abstract of Simon and Ando's 1961 Aggregation of Variables in Dynamic Systems
which then allows us to deal with the analysis and measurement of complex systems:

There is redundancy in complexity which takes a number of forms: Hierarchic systems are usually composed of only a few different kinds of subsystems, in various combinations and arrangements. Hierarchic systems are, as we have seen, often nearly decomposable. 

Hence only aggregative properties of their parts enter into the description of the interactions of those parts. By appropriate "recoding," the redundancy that is present but unobvious in the structure of a complex system can often be made patent. 

This allows us to reduce "the complexity" of the measurement of complex systems by compartmentalizing interactions and effects into areas or "boxes".
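The matrix picture from the Simon and Ando abstract can be sketched numerically. In this toy example (block sizes, coupling strengths and the epsilon are invented for illustration) an interaction matrix has strong entries within diagonal blocks and weak entries between them, and a single aggregate ratio captures how close to decomposable it is:

```python
import random

random.seed(0)

# Two subsystems of three elements each; interaction strengths inside a block are
# strong, those between blocks are weak (epsilon) -- all numbers are invented.
n, block, eps = 6, 3, 0.01

def coupling(i, j):
    """Strong coupling inside a block, near-zero coupling between blocks."""
    same_block = (i // block) == (j // block)
    return random.uniform(0.5, 1.0) if same_block else random.uniform(0.0, eps)

A = [[1.0 if i == j else coupling(i, j) for j in range(n)] for i in range(n)]

# Aggregate decomposability measure: mean between-block vs mean within-block strength.
within = [A[i][j] for i in range(n) for j in range(n)
          if i != j and i // block == j // block]
between = [A[i][j] for i in range(n) for j in range(n)
           if i // block != j // block]
ratio = (sum(between) / len(between)) / (sum(within) / len(within))
# A ratio close to 0 means the off-diagonal submatrices 'approach zero' as in the quote.
```

In Simon and Ando's terms, a small ratio is what licenses treating each block as an independent subsystem in the short run and representing it by a single aggregate variable in the long run.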

N.B.: If one reads several of Simon's papers on causality and measurement, a number of Machian perspectives and topics appear; however, Mach's evolutionary / developmental / genetic perspective gets crushed in a systemic classification of the (Simonian) world, where the classification system seems to introduce a static nature. (Please correct me if you think I am wrong!)

I would greatly appreciate your feedback and thank you for your attention!

Feb 12, 2012

Common evolutionary processes in science and arts

I think there is an interesting topic re arts, engineering and scientific methods - bodies of knowledge and methods of gaining knowledge - both of which can be seen as extensions of natural evolution into the cultural domain (in line with Ernst Mach, who extended Darwin's principles of evolution to how knowledge is gained very soon after Darwin's publication).

Thinking of writing (crafting) a poem or a novel, it does not seem too far from crafting a technical application or a scientific hypothesis, respectively its proof either through experiment or "logical" derivation. All of these processes build on the accumulation of test cases, sketches, drafts, observations, and trial and error wrt putting the dots together, building connections between "input" and "output".

So, I suspect, principles of tinkering, bricolage and learning by doing, resp. by trial and error to derive some integrative "picture" should be similar in engineering, science and in arts. In all areas these are based on intuition, though I further suspect the rationalization of how insights are gained is different: science and engineering emphasize logical rationalizations, while arts emphasize intuitive, creative rationalizations of how results have been achieved ... and tell their stories accordingly.
A post of mine in a LI discussion in the Scientist Artist Collaboration Group

Strategy, Intuition and Analysis

I believe that the strategy toolbox offers a useful set of tools to order and frame thinking about strategic issues. Still, the tools are limiting if one or a few are taken as "the" approach, especially if used without understanding the tacit knowledge element they can convey.

One of these elements should be to think differently, outside the usual box - to develop risk scenarios and to envisage new approaches to, or domains of, business. Otherwise, if every strategist thinks BCG and five forces, most companies will end up doing roughly the same in better or worse ways.

Thus, what matters is the teachable but difficult-to-formalize way of thinking of a strategist, which means that economics, politics, history, philosophy, psychology and sociology, as well as math, statistics, OR, and natural science (physics, chemistry, biology), can offer relevant models and mindsets to engage in strategy. For instance, I liked history a lot and believe I learned quite something about thinking in developments and scenarios, as much as I learned about macro systems and variation from evolutionary theory.

Knowledge about pertinent business domains is obviously useful - however, only up to the point where it does not blind the strategist - but how do you discover the cut-off point?! Let's assume it is the mid to late 70s: Is the young guy in front of you, talking about ... say, computers on every desk and how to place them there, a wacky lunatic or a visionary businessman - and if he is one of the two for you ... how big is the divide between the two?

For me this points to the importance of combining creativity and intuition with formalized research and analysis; the specific tools are secondary, but what they can create in a 'prepared mind' matters.

The sum is (not) greater than its parts

An explanatory / historical side-note on "the sum is greater than its parts" - concept:
Its origin seemingly goes back to concepts used by George Henry Lewes and J.S. Mill, where a differentiation is made between the simple additive combination of forces (described by vectors) and the qualitatively different properties due to the combination of chemical elements, leading to 'emergent' properties of H2O relative to its constituents H and O:

Lewes says: “The emergent is unlike its components in so far as these are incommensurable, and it cannot be reduced either to their sum or their difference”  (Lewes 1875, 413)

(N.B.: consider "or their difference" - it seems half of the story is forgotten, and 'not reducible to the sum' has been turned into 'greater than the sum', which adds some factor in my view.)

Mill says: “The chemical combination of two substances produces [...] a third substance with properties different from those of either of the two substances separately, or of both of them taken together . […] There, most of the uniformities, to which the causes conformed when separate, cease altogether when they are conjoined; and we are not, at least in the present state of our knowledge, able to foresee what result will follow from any new combination, until we have tried the specific experiment.” (Mill 1843, 371, bk3, ch6, §1)

The qualifier "present state of knowledge" is important here. Today we can (better) explain why the combination of two gases forms water. This indicates that we like to take as emergent what we cannot (yet) explain.

Thus emergence is the surprising occurrence of a phenomenon we cannot currently explain - but may be able to explain in the future. The modern 'sum is greater than its parts' is obviously a qualitative interpretation: water is definitely interesting in its properties, but why should its properties be "more" than those of H and O?

What are epistemological issues wrt the measurement of organizational complexity? Can one measure complexity or not - based on your definition of complexity?

We had some discussions on epistemological questions regarding the scientific process and measurement in general in the context of the measurement of the complexity of organisations (structures, processes). 

Differences in the definition of complexity and worldview regarding what constitutes science resp. how to do science in different disciplines obviously affect the answer - what is your take on this?

There will be value-driven, socialization-based, work field / discipline related differences in worldview - so please consider this in the tone of the discussion and be open and friendly to these interpretations of the world.
A discussion I have started in the Quantitative Complexity Group on LinkedIn

Is complexity measurement of organizations possible and feasible?

Organizations can be seen as hierarchical systems with business line / unit and departmental 'modules' that allow execution of specific functions through specific capabilities concentrated in particular areas. This confers economies through the separation of work but also leads to interpretation and filtering problems in non-standard or changing situations - interpretative blindness and inertia are fostered in organizations. Therefore, organizations need to be heterarchical, particularly under the conditions of increased complexity and speed of change we face today.

Heterarchical structures allow faster and broader interpretation of information, but also demand higher interpretative capabilities by management. Given traditional and often still normal ‘linearly’ organized procedures and structures, these interpretative capabilities determine to a large extent the success of an organization – as Edith Penrose already highlighted.

On the other hand, functional and departmental decomposability, i.e. separable modularity of an organization correlates with flexibility, adaptability and ease of change of an organization.

Does measurement of decomposability (e.g. based on Simon’s near decomposability) allow for a measurement and thus management of organizational complexity? What do you think?
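As a sketch of what such a measurement could look like (the interaction counts, the six roles and the partition into two departments are entirely hypothetical, chosen only to make the idea concrete), one can compute the share of interaction weight that stays within departments:

```python
# Hypothetical interaction counts (say, messages per week) between six roles; the
# numbers and the partition into two departments are invented for illustration.
interactions = [
    [0, 9, 8, 1, 0, 1],
    [9, 0, 7, 0, 1, 0],
    [8, 7, 0, 1, 0, 0],
    [1, 0, 1, 0, 6, 9],
    [0, 1, 0, 6, 0, 8],
    [1, 0, 0, 9, 8, 0],
]
departments = [0, 0, 0, 1, 1, 1]  # role -> department assignment

def decomposability_index(M, part):
    """Share of total interaction weight occurring within departments.
    Values near 1.0 suggest the partition is nearly decomposable."""
    total = intra = 0
    for i in range(len(M)):
        for j in range(len(M)):
            total += M[i][j]
            if part[i] == part[j]:
                intra += M[i][j]
    return intra / total

index = decomposability_index(interactions, departments)
```

An index near 1 would indicate, in Simon's sense, that the departmental decomposition captures most interaction and the residual cross-department coupling can be treated in aggregate; a low index would flag a partition (or an organization) that is not in fact modular.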

Feb 11, 2012

How can strategy models be developed further on the academic level?

As argued in a discussion on the limitations of strategy models, I would seek to extend strategy concepts, models, or frameworks in two areas to cope with the shortcomings of strategic decision making: 

a) by researching and formulating the psychology of strategy to deal with framing issues, filtering and selection of information that is used in deriving and 'updating', resp. challenging, strategy conceptions in firms.


b) it still remains an issue why relevant information is not 'used' by an organization, even if individuals take account of it.

Thus, the psychological perspective needs to be complemented by an organizational perspective on how the different views of actors are aggregated into strategic decisions and actions on the relevant organizational levels: How do organizations weigh the different pieces of information and 'compute' the strategy / action result(s)?

What is your opinion on this approach?
A discussion I have started in the Strategy Professionals Group on LinkedIn:

Towards improved strategy concepts

Where and how do we need to go beyond existing models of strategy?

What are the issues that need to be tackled by 'improved' strategy models? 

How should strategy models address these issues?

There seems to be a certain need to go beyond existing strategy concepts, given an increasingly complex and changing competitive landscape. This requirement does not result only from a thirst for novelty and marketing, but rather from an inability of existing approaches to deal with today's situation. However, what these models should look like, what they should address in terms of structural features, and at which level of granularity they should be constructed remains unclear.

As argued in the discussion on existing strategy models, I would look into issues such as:
a) Why are certain players (not) seen as potential competitors or entrants? 

b) Why are developments not perceived? 

c) How are current and potential markets 'defined'?

d) How are alternatives perceived, framed and 'sold'?

e) What is filtered out and discounted in strategic decisions, and why?

How can the above view and questions be operationalized from your point of view? Or: why might they be wrong?