Jun 19, 2012

Development "vs." sales dilemma in lean startup methodology


The lean startup approach forces adherents to focus on both development and sales in new product / company development, which creates a chicken-and-egg issue: you need to do both, development and sales. The problem with developing the 'perfect' product first is that nobody (or too few, given the price and development cost) will want to buy it - hence the requirement to go out and find customers (and learn what they have to say, how they want the product changed, and so on).
However, if you go out early with a crappy product / design, it may hurt your reputation, especially if you get wide exposure and are 'burnt'. So you have to be careful to reach mainly those who are interested, without losing too many of those who will not like what you created.
It's an iterative process in which you follow the sales and the tech development route at the same time, which is likely to drain your attention and resources. Basically you have to focus on two things at once - impossible if you do not have the people to do the tasks, and the processes to interface them properly.

What is organization design? - Nicolay Worren

Blog post by Nicolay Worren on What is organization design?


“Organization design” involves the creation of roles, processes and structures to ensure that the organization’s goals can be realized.
Some people associate organization design with the mechanical arrangement of positions and reporting lines on the organization chart. It is certainly true that organizational designers also need to define the vertical structure, including reporting lines. However, organization design is much more than “boxology”. Organization design problems are often some of the hardest problems that leaders face. The decisions they make with regard to formal structure, roles and processes directly impact the jobs and careers of employees – and the ability of the firm to realize its strategic objectives. ...

Discussion on complexity measurement in LinkedIn Orga Design Group


I posted on organizational complexity reduction in what might become an interesting discussion in LinkedIn's Organizational Design Community:
"If you meant me wrt defining complexity: The use is made from a 'pragmatic usage perspective' focussed on the need to measure a proxy and certainly somewhat in flux. It may be that complication is the better term as hinted at above, but I need to know more about the understanding of complication here.
So, I would measure it ('complication-complexity') as the frequency and strength of interactions among members of an organization - currently.
I would include in the definition of the concept additionally rules and constraints that enable or hinder organizational interactions, processes and structural relationships between organizational 'members' or 'subsystems'."
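A possible formalization of that proxy - purely my own sketch, with made-up toy data rather than anything from the discussion - is to summarize the frequency and strength of pairwise interactions among members as a weighted network density:

    import numpy as np

    # Sketch of the 'complication-complexity' proxy suggested above
    # (hypothetical formalization with toy data): frequency x strength
    # of pairwise interactions among n organization members,
    # summarized as a weighted interaction density.
    rng = np.random.default_rng(0)
    n = 12
    freq = rng.poisson(3, (n, n))         # interactions per week (toy data)
    strength = rng.uniform(0, 1, (n, n))  # effort / intensity per interaction
    W = np.triu(freq * strength, k=1)     # undirected: count each pair once

    pairs = n * (n - 1) / 2
    print("complexity proxy:", round(W.sum() / pairs, 2))

Rules and constraints could then enter as multipliers on particular links, raising or damping the measured interaction strength.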
Here is the link to the discussion - be aware that you need to join LinkedIn and the group:

Behavioral reasons for organizational complexity?

On Quora I have asked a question about the causes of organizational complexity:

"Do you think the evolutionary basis of human behavior can be invoked to explain the 'complexification' of organizational structures and processes?"
Here is a short intro to the answer by Mario Alemi:
"Yes and no. Yes it has an evolutionary basis, no is not mainly about our behavior. It's that an organization will evolve naturally towards more complex states.
First, we have to think of "evolutionary basis" not as a doctrine, but as an obvious statement –if one organization fits better to the environment, it will survive over a less fit one. "
Particularly interesting is the following afterthought on evolutionary behavior by Mario:
"As a day-after-thought... the idea that "no is not mainly about our behavior" is maybe too strong. You can actually see our behavior as the best to create efficient organizations. From an evolutionary psychology POV we evolved so that we tend to create subnetworks in an organization. In an organization, there is probably a struggle between the common good (divide the macro-tasks in sub-tasks and have people do what it is needed to be done) and people, who feel they should do what they think can do best. But that's a HR problem..."
Here is a link to the Quora post - you need to be signed up. If you need a Quora invite, you can contact me!

Apr 17, 2012

Management information systems and organizational worldviews

Management information systems reflect and control an organization's outlook on its 'world'. They control and steer information flows, incentives, and the interpretation and filtering of information, strongly affecting the organization's reaction to environmental and internal developments.

They are causally related to the success or failure of innovative approaches and products, because the filters used in determining 'key performance indicators' influence the perception and interpretation of new markets and revenues.

New markets and revenues are usually handicapped in such comparisons relative to established markets and products. Nevertheless, again and again established players are superseded by new entrants, because incumbents discount the information and opportunities that new players use to build franchises on their 'attacker's advantage'.

Modularity, networks and growth of complexity


Herbert Simon and Albert Ando (Simon and Ando 1961) developed the concept of near decomposability, which is based on the idea that systems of interacting elements can be separated into groups (modules) according to the strength of their interactions. If interactions within a group of elements are much stronger than the group's interactions with other groups, it is assumed that these weaker intergroup interactions can be neglected.

The obvious danger lies in the assumption that interactions between groups are negligible. This may be correct in the short run or under normal conditions, but wrong over longer time horizons and under more unusual conditions, where positive feedback can lead to the crossing of thresholds and to phase transitions - observed as increased stress, risk and 'catastrophes'.

While the concept is usually applied to the analysis of systems, it may, under certain conditions, also be turned around to explain emergence and system change.

Simon-Ando decomposability implies that microstates may be aggregated into (different) macro-states, i.e. macro-state variables that describe aggregate system behaviour. This is relevant for the analysis of differing views, explanations and approaches to an issue, as they occur in science in general and in the social sciences in particular.

Decomposability, i.e. the separable modularity of a system, correlates with the flexibility, adaptability and ease of change of a system such as an organization. Thus decomposability and innovation correlate with the inverse of risk. Decomposability is related inversely to risk because non-decomposable systems are characterized by systemic interlinkages that are more difficult to account for and manage. Near decomposability (Simon) involves the assumption that interlinkages among a system's (possibly developing) modules can be neglected for analytical and extrapolation purposes. If non-decomposable systems are taken to be decomposable, or are decomposed according to ill-fitting schemes, risk increases relative to a better match between the partial model and its extrapolation. It is a viable assumption that interpretation schemes associated with ill-fitting problem decompositions, based on erroneous models, are at the root of business failures threatening individual organizations, as well as of the economic crisis currently affecting the world economy.
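To make the point concrete, here is a minimal numeric sketch (my own illustration, not from Simon and Ando): a linear system with two weakly coupled modules, compared against a fully decomposed model that sets the intergroup interactions to zero. The decomposition is a good approximation in the short run and degrades over longer horizons - the 'risk' described above.

    import numpy as np

    # Toy system x(t+1) = A x(t): two 2x2 modules, weak coupling eps.
    eps = 0.05
    A = np.array([
        [0.6, 0.3, eps, 0.0],
        [0.3, 0.6, 0.0, eps],
        [eps, 0.0, 0.7, 0.2],
        [0.0, eps, 0.2, 0.7],
    ])
    A_dec = A.copy()
    A_dec[:2, 2:] = 0.0   # the decomposed model neglects
    A_dec[2:, :2] = 0.0   # the intergroup interactions

    x_full = x_dec = np.ones(4)
    for t in range(1, 21):
        x_full = A @ x_full
        x_dec = A_dec @ x_dec
        if t in (1, 5, 20):
            err = np.abs(x_full - x_dec).max() / np.abs(x_full).max()
            print(f"t={t:2d}  relative error of decomposed model: {err:.3f}")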

Apr 16, 2012

How to deal with business complexity?


Business and state organizations today face a multitude of historically grown complex structures on which they depend for the execution of their processes. Examples range from technical communication infrastructure to the regulatory rules operating in bureaucracies. At the same time, technological and associated social changes demand increasing speed, flexibility and adaptability of organisational structures and processes. How to cope with this situation, which puts increasing stress on the economic, social and political members of advanced and less advanced societies?

In the past, numerous re-engineering projects were executed to cut down the jungle of organisational processes - with limited success: they improved short-term figures, often created images of short-term or even fake successes (e.g. through rules put into accounting systems), and increased stress and susceptibility to failures. What if the jungle is a complex eco-system of organisational processes whose relations have been insufficiently understood and messed with?

The 'obvious' solution is to rationalize historically grown large scale organizational systems in meaningful ways by streamlining their structures based on an organic understanding of the important interlinkages among their 'modules' and their historic development. This view subsumes and at the same time limits a rational, analytic view of the world (as will become clear in the section on the decomposability of systems).

However, what that obvious solution entails is less clear. The above description involves a number of value judgements, whose resolution can only be based on very general principles if it is to be widely acceptable. One of these values should be the long-term survival of the enterprise. Depending on the regulatory environment, that entails a certain level of risk acceptable for an organization, which in turn is related to specific degrees and processes of change.


The answer to the issue depends on our model of the world and its development. Fusing economic and evolutionary views, economic survival - and even more so economic success - is about the realization of new chances and the optimization of existing structures, i.e. two forms of adaptation. Competitive advantage, as an ex-post measure of 'realised' economic fitness, is a relative measure of how well an ensemble of organizational modules (e.g. departments, business units, legal entities) with certain characteristics works together compared with other such ensembles.


The development of complex systems such as organizations is a path-dependent build-up of complex forms of organization glued together by information flows. The organizational build-up takes place through learning about what does not work in markets, captured in hierarchical structures and controlled by the regulation of information flows in modular components. It rests on an evolutionary process of learning about what survives in 'the environment', reflected in organizational characteristics, followed by a subsequent optimization process that leads to a more or less heterarchical entanglement of structures and processes.


Organizations can thus formally be seen as hierarchical systems with departmental modules that allow the execution of functions through specific capabilities concentrated in particular departments. This confers economies through the separation of work, but also leads to interpretation and filtering problems in non-standard or changing situations, as interpretative blindness and inertia are fostered. Therefore, (informal) organization structures need to be heterarchical, cutting across departmental and disciplinary boundaries, to deal successfully with the increased complexity and speed of change. This requirement is reflected in informal structures, which change the seemingly simple, modular hierarchy into a heterarchical form interlinked across hierarchical levels. Heterarchical structures allow faster and broader interpretation of information (see the sketch below), but also demand higher interpretative capabilities of management. Given process- and product-related capabilities, these interpretative capabilities ('dynamic capabilities') determine to a large extent the success of an organization.
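As a toy illustration of the information-flow point (my own sketch, not a claim about any particular organization): the average shortest path between members drops once a few informal cross-links are added to a formal reporting tree.

    from collections import deque

    def avg_path_length(nodes, edges):
        # mean shortest-path length over all node pairs, via BFS
        adj = {n: set() for n in nodes}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        total = pairs = 0
        for s in nodes:
            dist = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            total += sum(dist.values())
            pairs += len(dist) - 1
        return total / pairs

    # formal hierarchy: 1 boss, 3 department heads, 9 workers
    nodes = list(range(13))
    tree = [(0, m) for m in (1, 2, 3)]
    tree += [(m, w) for m in (1, 2, 3)
             for w in range(1 + 3 * m, 4 + 3 * m)]
    # heterarchy: the same tree plus a few informal cross-links
    cross = tree + [(4, 7), (5, 10), (8, 11)]

    print("hierarchy  :", round(avg_path_length(nodes, tree), 2))
    print("heterarchy :", round(avg_path_length(nodes, cross), 2))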



Apr 9, 2012

Ernst Mach workshop on scientific philosophy

Academy of Sciences of the Czech Republic

Ernst Mach Workshop / Werkstatt Ernst Mach

The first Ernst Mach Workshop, to be held on June 25-26, 2012, will host

Prof. Barry Loewer (Rutgers University)

Beginning in Spring 2012, the Department of Analytic Philosophy of the Institute of Philosophy in Prague will host annual workshops on the current work in analytic philosophy - philosophy of science, mind and language, metaphysics as well as value theory. Each workshop will center on the work of a distinguished keynote speaker who will give a lecture and respond to presentations by a limited number of workshop participants. On occasion, the workshop may have the format of a symposium on a recently published book by the keynote speaker.

Department for Analytic Philosophy
Institute of Philosophy
Jilská 1, 110 00 Prague 1
emw(at)flu.cas.cz

Prague Organizing Committee:
Vladimir Havlik, Tomas Hribek, Juraj Hvorecky, Ladislav Kvasz, Zuzana Parusnikova, Jaroslav Peregrin


International Advisory Board:
Tim Crane (Cambridge), Konrad Talmont-Kaminsky (Lublin), Friedrich Stadler (Vienna), Marián Zouhar (Bratislava), Zsofia Zvolenszky (Budapest), Hong Yu Wong (Tübingen)

Paper by Simon Robinson on Complexity, innovation and history of science

This paper by +Simon Robinson is really interesting, as it points to the connections between the history / philosophy of science, information ecology and evolution, a complexity perspective, organisational worldviews, and the need for complexity reduction in business.

Personally, I take a Machian / Jamesian monist, pragmatist resp. radical empiricist position, which stresses the partial, evolutionary character of knowledge: knowledge grows over time through observation and the inclusion of facts into thought systems, which in turn influence what is seen and collected as further observations.

This also matters for organizations in general, and businesses in particular, which are built around values and cultures that define customer needs (problems), solutions (products), and the approaches used to identify and validate these through the lenses of market research, accounting / management information systems, and softer foresight tools.

Institutional and organisational cultural systems are hard to change because the underlying worldviews and the information filters in internal and external data collection and analysis systems are strongly intertwined and reinforce each other, creating a shared illusion which may or may not be in alignment with market / stakeholder 'demand'.

g+ keyword links: 
#complexity #philosophy of science #information ecology #innovation #strategy #foresight #organisational culture #worldviews


Enabling associative connections on the 'net

Re a g+ post by Tim O'Reilly:

As far as I have understood fluidinfo, it should not only allow complex searches but also mimic associative connections between information sources on the web - or other information storage / computing systems - helping "to free data from the silos of tree-like information storage", as explained by +Ralf Westphal in a post on another system: http://weblogs.asp.net/ralfw/archive/2006/06/20/Freeing-Data-From-the-Silos---A-Relationistic-Approach-to-Information-Processing.aspx


Westphal there describes Pile, a recursive information storage / database system conceptualized by Erez Elul and funded / marketed by Peter Krieg (http://en.wikipedia.org/wiki/Peter_Krieg). It stores information as connections between elements, which are themselves composed of connections of elements, and so on.

Krieg was a documentary film maker and, as far as I know, influenced by Pile when writing about the computer as the 'paranoid machine'.



So take this as an example of an associative post linking several concepts in a stream of thought.


N.B.: I happened to come across fluidinfo when I checked out +Terry Jones, a programmer working on Echo - an evolutionary, ecological complex adaptive system simulation (see e.g. here: http://www.mitpressjournals.org/doi/abs/10.1162/artl.1997.3.3.165).

Evolutionary simulation in computer systems hits upon the constraint of pre-defined system states, while real evolutionary processes are open in that they can build on any changes and re-combinations of the information codified in 'the genes'.
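The constraint is easy to see in even the simplest evolutionary simulation (a minimal sketch of my own, not Echo itself): the genome alphabet and length fix the state space in advance, so no mutation or recombination can ever leave it.

    import random

    # Minimal genetic algorithm over fixed-length bit strings.
    # The state space {0,1}^L is pre-defined: mutation and
    # recombination shuffle within it but can never extend it.
    random.seed(42)
    L, POP, GENS = 20, 30, 40

    def fitness(g):          # toy objective: number of 1-bits
        return sum(g)

    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = []
        for _ in range(POP - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, L)
            child = a[:cut] + b[cut:]   # recombination
            i = random.randrange(L)
            child[i] ^= 1               # point mutation
            children.append(child)
        pop = parents + children

    print("best fitness:", fitness(max(pop, key=fitness)), "of", L)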


Mar 30, 2012

Scientific method: fundamental truth vs. fluid knowledge


In the discussion of Planck and Mach, it should be considered that Mach was an empiricist who had partly trained himself autodidactically before his formal education, and who was furthermore trained in a handicraft as a woodworker, while Planck was of a theoretical bent, declining for himself the 'need' to do empirical research. This seems important, as Mach brings the experience of trial and error, tinkering or 'bricolage' to his theoretical and metaphysical views of (the development of) science.

Planck disagrees with Mach's historico-critical view of science at the beginning of his (Planck's) "Survey of Physics", because Mach tries to build his concept of (physical) science on the fluidity of human knowledge and the limits of man-made models, which Mach sees as currently 'valid', thought-economical conceptualizations of the known facts (and supporting assumptions resp. theories). Planck understands these as 'more or less arbitrary' constructions (which, I think, they are not, as they are historically developed and take account of the currently known facts, 'arranged' under the needs of the specific world-views of scientists).



Furthermore, Planck disagrees with Mach's grounding of science in sense-perceptions (which Planck nevertheless deems a useful starting point and a correction to former exaggerations based on physical research results), and favors instead a view based on the 'constancy' of the properties of reality - a constancy which persists through all individual and historical interpretative variation. If I am not mistaken (please correct me if I am wrong), Planck favors a statistical approach to (re-)searching these constant properties of reality, e.g. endorsing Boltzmann's thermodynamics in this context.



(N.B.: To me it seems Planck reduces Mach's notion of perception complexes, which link properties of reality and their representation in the mind through mutually dependent functional complexes. Even if we use 'modern', extended sense-organs such as microscopes and Large Hadron Colliders, these measurement instruments can, just like our senses, only react to and register what they are 'designed' and constrained to capture - which leads us to issues such as the particle-wave dualistic appearance of entities.)



In contrast, Mach uses the 'historico-critical method' developed by religious scholars at the University of Tübingen, who put statements from the Bible into their historical context; see e.g. his "Science of Mechanics: A Critical and Historical Account of its Development".

As indicated above, Planck sees the world differently: the standard of scientific research should be to strive for 'a fixed world-picture independent of the variation of time and people'. He argues that (physical) theory can be built on more 'fundamental' and unchanging concepts, such as his Planck constant. In his view, science should (can?) derive fundamental, stable statements or truths.



I do see how Planck's approach works for abstract constructs as theoretical *pictures of the world* - one can build axiomatic, theoretical systems that capture (or, on a more critical reading, seem to coincide with) observed facts. However, I do not see how this works on the larger scale of several generations of practical researchers and theorists building their theoretical systems on what they deem fundamental facts - which have usually tended to change over the history of science (which leaves us with the question: are Planck's constants and constancies subject to change?).



Mar 28, 2012

Conference "An Ecology of Ideas" by American Society for Cybernetics and Bateson Idea Group


From the website:

The American Society for Cybernetics (ASC) and Bateson Idea Group (BIG) come together to hold a conference on the relations among ideas as seen from multiple perspectives. We come from many disciplines but have common roots including cybernetics, circularity, reflexivity, language, culture and systems. For many of us these roots are enmeshed with biology, information, pattern, design, art, aesthetics, ethics and more.

In a world rife with factionalism and disenchantment, we will engage in conversations to integrate disciplines of knowing while taking into account our histories and considering our futures. We will regard both the parts and the whole that arises from the relations between the parts — and thus becomes the context for all the parts. We are concerned with the world that arises from how we live our ideas.
ASC and BIG have common interests in dynamic systems of thought, wisdom and learning. We accept that there are many views and value exploring the relationships between them, rather than in insisting that any view is “right”.

Here is the link to the website and the Call for papers


Mar 23, 2012

Conference: The Law & Economics of Organization: New Challenges and Directions


The Walter A. Haas School of Business, with support from the Alfred P. Sloan Foundation, is issuing a call for original research papers to be presented at the Conference on The Law & Economics of Organization: New Challenges and Directions. 

The purpose of the conference is to take stock of recent advances in the analysis of economic organization and institutions inspired by the work of 2009 Nobel Laureate Oliver Williamson and to examine its implications for contemporary problems of organization and regulation. Empirical research and research informed by detailed industry and institutional knowledge is especially welcome. 

Paper proposals or, if available, completed papers should be submitted on line at http://www.bus.umich.edu/Conferences/Haas-Sloan-LEO-Conference by March 31, 2012. The deadline for completed papers is November 1, 2012. Selections will be made by the conference organizers, Professors Pablo Spiller (Berkeley), Scott Masten (Michigan), and Alan Schwartz (Yale). Conference papers will be published in a special issue of the Journal of Law, Economics, & Organization.

International Summit and Conference on Enterprises *as* Systems

For the past 6 years the International Conference on Enterprises as Systems: Theory and Theory in Action has been concerned with the treatment of (networked) enterprises *as* systems in constantly changing social, economic, legal and technical environments. It has been held (with varying degrees of success) with the intent to create an environment for the collaborative exchange of knowledge among and between the Systemics Community, the Systems Architecture and Engineering Community (including Enterprise Architecture), and communities that are concerned with any aspect/part or whole of (inter-, intra-) enterprise systems and enterprises *as* systems.

This year, to further the collaboration effort, a (Networked) Enterprises *as* Systems Summit will be held in conjunction with the conference. Common threads for both the summit and the conference include:

(1) (networked) enterprises *as* systems in the (general) systems-theoretic sense (systemics and the systems family of disciplines)
(2) identification and characterization of some of the most complex problems facing (networked) enterprises, and potential solutions to which systemics may contribute
(3) the formal and/or empirical representation of such systems for description, explanation, simulation, prediction and operation (formal/empirical theory)
(4) the use/application of theory in analysis / design, architecture / engineering, strategy, tactics, and operation of (inter-, intra-) enterprise systems and enterprises *as* systems

In general, an enterprise may be considered a business, an educational organization, a standards body, a government organization, a federation, a group of enterprises bound by law in some fashion, or any group of cooperating / collaborating enterprises such as those in GRID systems and emergency management/response systems. In essence, an enterprise is a socio-technical system with dynamically varying systems characteristics, which depend on both its internal environment and its external social, legal, economic/financial, and technical environment.

Disciplines of Systemics include, but are not limited to: General Systems Theory, Complex Adaptive Systems, Cybernetics, System-of-Systems, Systems Dynamics, Systems Thinking, Systems Engineering, Systems Analysis, Autopoiesis, Organization Theory.

The distinction between the summit and conference is that a summit organization committee, with input from interested individuals/organizations, will determine the summit tracks, the summit schedule, duration, etc.
The conference (and workshop) will focus primarily on a special topic, and individuals will submit contributions for consideration. The conference is tentatively planned to kick off the summit in the first week of August. The summit will be virtual and will tentatively last until the third week of November.

Conference Dates: August 15-17, 2012
The Summit will run from August 20 to November 21.


Summit planning is in the early stages. If anyone has any comments/questions, or if any person or organization is interested in planning the *summit* please contact me.


The Science of Complexity: Understanding the Global Financial Crisis

A business-oriented symposium by an interesting combination of two 'complexity' institutions:

The time-honored formulas of mainstream economics no longer capture the complex dynamics of today's financial markets. This three-day symposium offers a view of the recent global financial crises from a new perspective—that of complexity science. Sponsored by two leading complexity research institutes, the symposium will feature several of the world's most prominent complex systems thinkers. These experts will offer insights from non-linear dynamics, social networks, systemic risk, experimental economics, computational social science, and other areas that are vital not only to understand the current crises but to develop policies that address the underlying causes.

The program is open to any interested participants, but is particularly designed for professionals in government, business, and the non-profit sectors. 

May 16-18, 2012, at the new Founders Hall facility at the GMU Arlington, VA campus

For more information see http://krasnow.gmu.edu/soc.

Mar 21, 2012

Scientific practice and scientific progress: Integration and testing of rival hypotheses


In school I learned that it is good argumentative practice (and a better strategy) to deal with potential counter-arguments and criticisms by taking them apart in the course of your own argument. Science is about deciding between rival hypotheses - interpretations of evidence based on a set of observations, tests and experiments (depending on the field you work in and what is feasible). Thus it is good scientific practice to compare rival hypotheses / theories (i.e. systems of interpretation of facts, data, or other evidence) by testing them against a set of data.

Rival hypotheses might be (and indeed often are) special cases of an underlying reality. There is some truth in all observations; some are better than others, and some are more suitable than others in a specific context. Think of the parts of an elephant examined by several blind men: each comes up with different observations and theories about "reality".

That is what we can often observe in scientific practice: ideologically 'blinded' representatives of schools of thought mindlessly hurling arguments about "reality" at each other - based on selective interpretations of data - without looking for an integrative theory.

Thus systems of scientific thought have the ability to press observations into Procrustean beds that seem to lead to different "proven" true interpretations of reality - which are thus artifacts of more or less subtle differences in scientific worldviews, i.e. perspectives on the underlying reality.

Integration of opposing, rival views can be achieved if scientists (just like "ordinary" people) are able to switch their perspective and manage to develop, dialectically, the synthesis from thesis and antithesis. A set of literary examples that nicely shows how the integrative method works is Arthur Conan Doyle's stories about Sherlock Holmes. Holmes generates a number of partial hypotheses based on the integration of the facts known so far - each proven 'wrong' by some new detail - until he stumbles across the truth by some coincidence.

(Arthur Conan Doyle was incidentally influenced by Charles S. Peirce's pragmatist philosophy, which stresses abduction (something akin to intuition) as a source of knowledge. I cannot claim to be an expert on Peirce, but my understanding of Peirce's abduction is that it is this process of generating new knowledge by integrating controversial elements into a larger picture.)

Resurrected and revised post from the Organization and Markets blog a few years ago.


Complexity Science and Social Science At the Interface to the Real World


Call for Papers and Conference Participation

Coping with the global-scale challenges of financial instability, food security, climate change, sustainability, demographic change and migration, pervasive web technology, transnational governance and security, among others, will involve dealing with large-scale complex systems made up of many parts interacting and adapting in sometimes subtle ways. People are critically important components of them all, which makes studying such systems a topic for social science as well as for natural science and engineering. However, the issues transcend disciplinary boundaries and making progress will require a significant interdisciplinary effort.
 
Much of the research that is required to address these issues is taking place at a new interface, where collaboration between economists, demographers, sociologists, etc., is supported and catalysed by tools and concepts from the physical sciences, mathematics, computer science and engineering. In the same way that research at the life and physical sciences interface has revolutionised biology and medicine since the turn of the century, research at the social sciences interface has the potential to transform our ability to answer questions about social, socio-economic, socio-ecological and socio-technological systems.
 
Contributions in the form of papers of 2500 – 8000 words reporting work that straddles the interface between complexity science and social science are invited.  The intention is that a collection of papers will be published after the conference as a special issue of a prestigious journal.  Papers describing applications are especially welcomed.  There will also be an opportunity to present posters.
 
Date: 24th and 25th September 2012
Venue: Chicheley Hall, Royal Society International Centre, Newport Pagnell, UK.
Link to venue: http://bit.ly/ieya3m

To Attend: Follow the link to a page with further information and to submit an abstract or expression of interest to attend through the online form. 


Deadline: 1st June 2012. Places will be confirmed by 1st August 2012.

Queries to Prof. Nigel Gilbert n.gilbert@surrey.ac.uk and Alison Cooper (network coordinator) Alison.Cooper@surrey.ac.uk



Feb 17, 2012

Models are dangerous!

We need to be aware of what we are doing when assigning probabilities - most people somehow seem to forget that:
  • they are mistaking the model for reality, and even worse,
  • they are forgetting to reality-check the model at suitable intervals,
  • and a bad model may be worse than no model at all.
Of course you can assign probabilities to your guesses about future developments and use scenarios and decision trees. However, that assumes we know the future states of the world in sufficient detail to make relative or absolute judgement calls about their likelihood.
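For illustration (toy numbers of my own, not a real case): a one-step decision tree only yields an expected value once you commit to an exhaustive list of future states and their probabilities - exactly the assumption in question.

    # Expected value of a decision under three assumed future states.
    # The numbers are hypothetical; the point is that the calculation
    # stands or falls with the assumption that these states (and their
    # probabilities) exhaustively describe the future.
    states = {            # scenario: (probability, payoff)
        "boom":     (0.3, 10.0),
        "baseline": (0.5,  2.0),
        "bust":     (0.2, -5.0),
    }
    assert abs(sum(p for p, _ in states.values()) - 1.0) < 1e-9
    ev = sum(p * payoff for p, payoff in states.values())
    print("expected value:", ev)   # 0.3*10 + 0.5*2 + 0.2*(-5) = 3.0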


Now, as Knight, Popper (and, following him, Soros), and Taleb argue, the future is open and we cannot know it 'completely' enough to rely on our models.


Walking along the river Danube we (may) feel confirmed in our (locally correct) belief in white swans by every observation of another white swan around the next bend - a belief we turn into one about the existence of only white swans - until we meet a black swan from Australia. Herein lies an epistemic problem: we cannot extrapolate from a limited observation / induction base to a 'totality' of states of the world.
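The asymmetry can be made explicit with a small calculation (my sketch; Laplace's rule of succession is one classical way to model the induction):

    # P(next swan is white) after n white swans and no black ones,
    # by Laplace's rule of succession: (n + 1) / (n + 2).
    for n in (10, 100, 10000):
        print(f"after {n:5d} white swans: "
              f"P(next white) = {(n + 1) / (n + 2):.5f}")

    # Confidence creeps towards 1, yet the universal claim 'all swans
    # are white' is refuted by a single black swan, whatever the count:
    # the induction base never covers the totality of states.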


We often cannot know the future 'completely' or sufficiently to make the assumptions we are psychologically and institutionally 'required' to make, because people (esp. in large and political organizations, without vision and / or strong leaders) cannot remain confident in the face of having to admit to (brutal epistemic) uncertainty.


Of course we can adapt our (inferred) states of the world and probabilities as we go along and learn, but that requires changing the model on the fly - however, what is your justification for the model if you have to update it frequently (in the large, political, visionless organizations above)?


If we are dealing with 'closed', simple systems that follow (or can be approximated by) linear models, a probability approach is fine: we can assign probabilities and measure risks. If things get more complex and complicated, we run into the epistemic limits described above: we face uncertainty and cannot assign probabilities any more - as Knight argued.


Here is a link to Knight's book: Risk, Uncertainty and Profit

Strategy and Finance on Moving Landscapes


When implementing a new strategy, which time horizon do you set and how much negative cash flow are you prepared to accept until the realization of the outcome? 

That is: how patient are 'investors', and how deep are their pockets (resp. how good is management at selling its story)?

In terms of time horizons and deep pockets, I like the concept of fitness landscapes (which has probably not been interpreted 'correctly' relative to its original biological context), where organizations and strategies have to move through hills and valleys of relative fitness (another concept with some issues) - on a landscape that is itself moving.

Nevertheless, as a metaphor with quite some abstract and mathematical apparatus attached, it seems useful. For instance, one could argue that non-monetary factors (those hard to value financially or not associated with immediate financial returns) can be brought into such a conceptualization as components of the 'fitness' of an organization or strategy (composed of financial and non-financial factors).

Growing niches for products / services, which can build on their accumulated economies of scale and scope, fit nicely into this conceptualization as well.
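A minimal sketch of the moving-landscape idea (my own toy model, not a calibrated one): a strategy that adapts by small local steps tracks a drifting fitness peak only with a lag, and loses fitness whenever the landscape moves faster than the adaptation.

    import math, random

    # One-dimensional fitness landscape whose peak drifts over time;
    # the 'strategy' adapts by small random local steps.
    random.seed(1)

    def fitness(x, t):
        peak = 2.0 + 0.05 * t      # the landscape itself moves
        return math.exp(-(x - peak) ** 2)

    x = 2.0                        # start on the (then-)optimal position
    for t in range(51):
        step = random.gauss(0, 0.1)
        if fitness(x + step, t) > fitness(x, t):
            x += step              # keep only locally improving moves
        if t % 10 == 0:
            print(f"t={t:2d}  position={x:.2f}  fitness={fitness(x, t):.3f}")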

Feb 14, 2012

Complexity, Evolution and Near decomposability

I have pointed out before the relevance of (nearly) decomposable systems, modularity and hierarchy for dealing with complex systems. Some of you had asked for a summary at some point.

I would like to know what you think of the concept and where you deem it useful for our discussions. 

• It is his story of how variety generation can produce structured (organized), complex results relatively quickly by employing hierarchical relationships in evolution.
• It is potentially interesting with regard to how the costs of interaction between interfaced systems composed of (sub-)systems can be explained and measured.
• It provides a potential basis for the measurement of "complexity" and risk based on the interaction strengths between systems (see the recoding sketch further below).

Here is a summary of one of Simon's core articles on the evolution of complex systems based on hierarchy and nearly decomposable systems.
   
It summarizes his thinking on this subject, inspired by natural systems, and its application to social systems.

His view of the evolution of complex structures is based on the following assumption:

If the existence of a particular complex form increases the probability of the creation of another form just like it, the equilibrium between complexes and components could be greatly altered in favor of the former.
This process (obviously) makes the evolution of complex structures from building blocks in hierarchic structures much faster than the independent evolution of complex structures. The result of evolution "employing" such an approach is near decomposability (again - as it is based on simpler elements that interact more strongly internally than externally).
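Simon's watchmaker parable of Hora and Tempus makes the speed-up quantitative; here is a minimal simulation of the argument (my own sketch, not code from the paper):

    import random

    # Tempus assembles 100 parts in one piece and loses all progress when
    # interrupted; Hora builds stable 10-part subassemblies and loses only
    # the current one. p is the interruption probability per step.
    random.seed(0)
    p, parts, module = 0.05, 100, 10

    def assemble(n):
        # steps needed to join n parts when an interruption
        # destroys the current unfinished assembly
        done = steps = 0
        while done < n:
            steps += 1
            done = 0 if random.random() < p else done + 1
        return steps

    def tempus():
        return assemble(parts)                    # one monolithic assembly

    def hora():
        s = sum(assemble(module) for _ in range(parts // module))
        return s + assemble(parts // module)      # then join the subassemblies

    N = 200
    print("Tempus avg steps:", round(sum(tempus() for _ in range(N)) / N))
    print("Hora   avg steps:", round(sum(hora() for _ in range(N)) / N))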

Simon defines near decomposability as follows:

(a) in a nearly decomposable system, the short-run behavior of each of the component subsystems is approximately independent of the short-run behavior of the other components; 
(b) in the long run, the behavior of any one of the components depends in only an aggregate way on the behavior of the other components.

He defines hierarchy as follows:

By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem. 

And he qualifies the arbitrariness of the scale at which a system is partitioned into basic elements (themselves defined as systems) by saying:

In most systems in nature, it is somewhat arbitrary as to where we leave off the partitioning, and what subsystems we take as elementary.

He further qualifies it by comparison with the social organizations he described before:

Etymologically, the word "hierarchy" has had a narrower meaning than I am giving it here. The term has generally been used to refer to a complex system in which each of the subsystems is subordinated by an authority relation to the system it belongs to. More exactly, in a hierarchic formal organization, each system consists of a "boss" and a set of subordinate subsystems. Each of the subsystems has a "boss" who is the immediate subordinate of the boss of the system. We shall want to consider systems in which the relations among subsystems are more complex than in the formal organizational hierarchy just described. We shall want to include systems in which there is no relation of subordination among subsystems.

which relates to 

NEARLY DECOMPOSABLE SYSTEMS. We can distinguish between the interactions among subsystems, on the one hand, and the interactions within subsystems. ... As a second approximation, we may move to a theory of nearly decomposable systems, in which the interactions among the subsystems are weak, but not negligible.

THE DESCRIPTION OF COMPLEXITY. The fact, then, that many complex systems have a nearly decomposable, hierarchic structure is a major facilitating factor enabling us to understand, to describe, and even to "see" such systems and their parts.

The definition of near decomposability can be made more formal (with the system's matrix ordered such that strongly interacting elements are placed near each other):

This article treats of systems that are nearly decomposable--systems with matrices whose elements, except within certain submatrices along the main diagonal, approach zero in the limit. Such a system can be represented as a superposition of (1) a set of independent subsystems ... and (2) an aggregate system having one variable for each subsystem. ...

(From the abstract of Simon and Ando's 1961 "Aggregation of Variables in Dynamic Systems".) This then allows us to deal with the analysis and measurement of complex systems:

There is redundancy in complexity which takes a number of forms: Hierarchic systems are usually composed of only a few different kinds of subsystems, in various combinations and arrangements. Hierarchic systems are, as we have seen, often nearly decomposable. 

Hence only aggregative properties of their parts enter into the description of the interactions of those parts. By appropriate "recoding," the redundancy that is present but unobvious in the structure of a complex system can often be made patent. 

This allows us to reduce "the complexity" of the measurement of complex systems by compartmentalizing interactions and effects into areas or "boxes".
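Here is what such a "recoding" might look like in miniature (my own sketch, with made-up interaction weights): an interaction matrix among six members, ordered so that strongly interacting pairs form blocks on the diagonal, is aggregated into one variable per block; the off-diagonal entries of the aggregate matrix then serve as a simple proxy for the residual coupling - and hence for the risk of treating the system as fully decomposed.

    import numpy as np

    # Interaction strengths among 6 members; strong pairs form
    # 2x2 blocks on the diagonal, weak entries couple the blocks.
    W = np.array([
        [0.0, 0.9, 0.1, 0.0, 0.0, 0.0],
        [0.9, 0.0, 0.0, 0.1, 0.0, 0.0],
        [0.1, 0.0, 0.0, 0.8, 0.0, 0.2],
        [0.0, 0.1, 0.8, 0.0, 0.1, 0.0],
        [0.0, 0.0, 0.0, 0.1, 0.0, 0.9],
        [0.0, 0.0, 0.2, 0.0, 0.9, 0.0],
    ])
    blocks = [(0, 2), (2, 4), (4, 6)]
    k = len(blocks)

    # 'Recode': one aggregate variable per block, mean coupling as entry.
    agg = np.zeros((k, k))
    for i, (a, b) in enumerate(blocks):
        for j, (c, d) in enumerate(blocks):
            agg[i, j] = W[a:b, c:d].mean()
    print(np.round(agg, 3))

    off = agg[~np.eye(k, dtype=bool)]
    print("mean inter-module coupling:", round(float(off.mean()), 3))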

N.B.: If one reads several of Simon's papers on causality and measurement, a number of Machian perspectives and topics appear; however, Mach's evolutionary / developmental / genetic perspective gets crushed in a systemic classification of the (Simonian) world, where the classification system seems to introduce a static character. (Please correct me if you think I am wrong!)

I would greatly appreciate your feedback and thank you for your attention!