This has been one of those catchphrases one sees a lot, particularly in the objectives of certain types of wargames. If you have not checked out the Alidade, Inc. email list, it has spurts of fascinating discussion; you can sign up here.
John Dickman recently posted the subject question, which led to an interesting discussion (available in the list’s archive if you join).
The discussion went down the rabbit hole of “how hard it was to get the trons representing the information to the right place at the right time,” but I thought that was really the “easy” part from the more general point of view of “knowing what right is” (in terms of content, person, and timeliness).
I chimed in with the following:
Back to the original question: “How easy (or hard) is it to get the ‘right’ information to the ‘right’ person at the ‘right’ time?” This problem decomposes into an easy case where you know what the right information is, who the right person is, and when the right time is. Those three conditions are met when the “world state” is well understood (the only way to know the right information), when a plan of action is well known (the only way to know the right decision maker or makers), and when there is a well-understood “theory of action” for the timeframe over which the “world state” changes (the right time).
In the case of mechanical (simple) systems, we know the world state well (initial conditions), we have a plan of action (apply force F at location X), and a theory of action (F = ma). Simple systems can get extremely complicated, producing ambiguity in world state, plan of action, and theory of action, but in principle they remain “understandable” with refinements in the degree of understanding of the whole system.
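As a toy illustration (my sketch, not part of the original post): in a simple system, known initial conditions plus a known theory of action make the outcome fully computable — the “right answer” exists.

```python
# A "simple" system in the sense above: the world state (position, velocity),
# the plan of action (apply a force F), and the theory of action (F = m*a)
# are all known, so the outcome is predictable in closed form.

def predict_position(x0, v0, force, mass, t):
    """Closed-form prediction: x(t) = x0 + v0*t + 0.5*(F/m)*t^2."""
    a = force / mass          # theory of action: F = m*a, so a = F/m
    return x0 + v0 * t + 0.5 * a * t * t

# With all three conditions met, the right answer is simply computed.
print(predict_position(x0=0.0, v0=2.0, force=10.0, mass=5.0, t=3.0))  # → 15.0
```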
For a host of reasons, most related to maintaining an illusion of control, we in the military (and also in politics and business) consider all problems to be essentially simple, just exceedingly complicated. That offers the hope of a “right answer” for the right person, with the right information at the right time. The timetable constructs of “industrial age” warfare gave a seductive allure to this formulation, even if the reality never quite managed to meet the promise of the theory.
The rub is that there is a different kind of problem out there, one that is not just complicated but truly “complex.” We all hear what that means, but don’t seem to want to really believe it. Complex problems defy formulation of an initial system state. Without that, there can be no understanding of the “dials and levers” we need to act on, and, worst of all, no theory of action that relates the settings of those dials and levers to changes in that poorly understood world state. In these situations, the right information is not knowable, nor is the right set of actors to take action, nor the timing of when to act.
It is not a case of “with more information, we can figure it out”; rather, we CAN’T know enough to guarantee a right answer. We get by in these situations much of the time because changes in the system state tend to occur slowly, so we can randomly move the dials and levers and divine patterns in the outcomes that we convince ourselves we had something to do with.
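The illusion-of-control mechanism can be sketched numerically (again my construction, not the author’s): a hidden world state that drifts slowly on its own, an operator pulling levers at random, and a measured relationship between lever and outcome that is essentially noise.

```python
# A "complex" system: the true state drifts independently and slowly; our
# lever moves are, in fact, ignored by the system. Because the state changes
# slowly, recent outcomes resemble each other, and it is easy to read our
# lever moves into the pattern.

import random

random.seed(42)  # fixed seed so the sketch is reproducible

state = 0.0
history = []
for step in range(1000):
    lever = random.uniform(-1, 1)    # our "action" -- has no effect at all
    state += random.gauss(0, 0.05)   # slow, independent drift of world state
    history.append((lever, state))

# Sample correlation between lever settings and outcomes: essentially noise.
n = len(history)
mean_l = sum(l for l, _ in history) / n
mean_s = sum(s for _, s in history) / n
cov = sum((l - mean_l) * (s - mean_s) for l, s in history) / n
var_l = sum((l - mean_l) ** 2 for l, _ in history) / n
var_s = sum((s - mean_s) ** 2 for _, s in history) / n
corr = cov / (var_l * var_s) ** 0.5
print(f"lever/outcome correlation: {corr:.3f}")  # near zero: the control is illusory
```

Any short winning streak in such a system is the turkey’s pre-Thanksgiving “good luck,” not evidence of a working theory of action.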
The more narrowly we constrain “the system” we are considering, and the more we isolate outside influences, the more we can turn problems into the first type. This is what traditional science is good at, but it has decomposed the world into so many systems that we get lost down the rabbit holes and lose the forest for the cellulose fibers. The bigger the system, and in particular the more people are part of it, the more it becomes a problem of the second type: one we can blunder along attempting to cope with, but never really understand, or affect except at the margins, barring a period of “good luck” as was brought up… or, as Taleb pointed out about turkeys, “They think they understand their world, right up until Thanksgiving”…
So to answer the question, we have a dichotomy of problems: on the one hand, those that are tractable but not particularly interesting or important; on the other, those that are intractable but critical to “solve” in our favor. Our hubris, leading to all manner of “bad outcomes,” has its roots not in the politics of hegemonic or uni-polar ambition, but in the ignorance that still feeds our desire to make complex problems simple, if only we could figure out the right information-based alchemy.
You can’t. And shame on us for perpetuating the myth.
To me, wargaming is one way of understanding these difficult aspects and educating decision-makers about their circumstances: what “would be nice in a perfect world” (read JV2010…) but is unrealistic, and what coping mechanisms you need because of that.
The question is “How?” What are the ways to bring this point home in wargames without making the player feel totally powerless? (OK, that may be the reality, but we don’t want him to slit his wrists after the game… :) When is it appropriate to ignore it for other objectives’ sake, and, as a designer, how do you “know what you don’t know” about the information landscape of the situation you are portraying?