Distributed Cognition in an Airline Cockpit takes an unusual approach to understanding the principles of cognition. Situating the research in the flight deck of a commercial jet (or at least a simulated one) may seem too esoteric a setting for many readers to grasp the intricacies of the interactions. However, because distributed cognition relies heavily on representations, and on how interactions manipulate, control, and process those representations, the cockpit is actually an ideal stage.
Hutchins and Klausen find that even though they extract only a very small portion of the interactions in the cockpit, they are able to derive a wealth of information about interaction and distributed cognition. They emphasize repeatedly that the cockpit is not a collection of separate entities; rather than a Captain, First Officer, and Second Officer sitting in a control room, it is a complete system. The crew and the cockpit together constitute a complex series of interactions within this system that functionally embody the ideas of distributed cognition.
Consider the fact that the crew must communicate to get their job done. This, according to Hutchins and Klausen, shows the transmission of representational media through space. The idea that when we talk to one another, especially within a system, we are verbally representing ideas is not new. The researchers refer to the locutionary aspects of speech (as well as its illocutionary force and perlocutionary effects) that carry us from one idea to another without explicitly stating or demonstrating our intentions. The verbal channel is nonetheless paramount to distributed cognition, because it is one of the chief means by which information moves around the system from one participant to another.
Interestingly, verbal communication is not the only way information moves within the system. Non-verbal cues and pauses in communication also suggest what the researchers call "expectations." Expectations occur when we anticipate information coming from some representational medium and act accordingly, allowing us to operate efficiently within the system. For example, when the pilots are responsible for changing the altitude alert setting after being cleared by the ground-based air traffic controller (ATC), they follow a set of expectations that draws their hands toward the alert mechanism even as they pose the question to the ATC.
More important, where expectations are concerned, is the fact that when we follow a set of expectations in a system, we are able to coordinate our abilities within that system. Once we are calibrated to a familiar environment (as the pilots calibrated their positions in the cockpit), we can almost immediately begin to accomplish our assigned tasks. Interestingly, the study notes many consequences of this coordination, but also recognizes that it is restricted to a limited domain, particularly one in which familiarity is key. Still, Hutchins and Klausen effectively demonstrate that the system itself follows many of the principles of interaction they set forth.
How does distributed cognition apply to HCI? How can we use distributed cognition to interact better with machines? The first thing that caught my attention with regard to this question is expectation. As the researchers demonstrate, cues from certain media allow us to form expectations about how we should perform. I consider this a critical element of HCI, especially of efficient HCI. If we can follow the machine in front of us through a set of expectations, we can use it not only effectively but efficiently. However, one thing we must always consider when designing an interface is how these expectations are formed. Are they derived from previous experience with different but similar machines? Do they emerge from trial and error with the machine in question? It is difficult to say which is the better starting point, but I would venture that a hybrid of the two would best attain the desired results. After all, the pilots in the essay had never worked together before, yet were able to sustain a system of interaction because of expectations already established across several representational media.
Moving information between a human and an interface also raises an interesting question, because there we generally lack verbal cues. In my experience, a general silence is observed when interacting with a computer; aside from a few expletives that may come forth when things aren't going according to plan, most of my interaction within the HCI medium is silent. I do wonder, however, whether this is the case for everyone. Do most people interact quietly with their computers, or do we create our own verbal cues and "redundancies" (back-ups or reiterations of the ideas we wish to describe)?
When we consider the system of HCI, we are not always looking at a one-to-one relationship between human and computer. In fact, with the advent of the internet, that one-to-one interaction has multiplied exponentially. We no longer communicate with the computer alone; we use the computer as an avenue to communicate with others around the world. This creates a vast network, an immense system of users trying to coordinate their ideas on an almost global scale. Interestingly, this does not always rely on co-temporality. Look at the popular website wikipedia.org. Using Wikipedia, one can modify information (and consequently expectations) with authority. Because edits do not all occur at the same time, the system does not need people working simultaneously to be successful. Yet the system does not stop there. Through a vigilant system of notification (reinforced by denied expectations when the information is wrong), the greater system of distributed cognition allows the information to generally maintain its integrity.