Here follows the most recent draft of my paper, which I recently presented at the higher seminar in theoretical philosophy at Gothenburg University's Department of Philosophy. It is also the basis for my upcoming talk at the Swedish Congress of Philosophy ("Filosofidagarna") in June. All feedback is most welcome!

 

Information Integration Theory (IIT) is a neuroscientific theory proposed by Giulio Tononi (2004, 2008) as a fundamental theory of consciousness. However, since the theory does not tell us anything about the representational features of the mind, it falls short of being a full-fledged theory of mind. My upcoming paper is an enquiry into these matters, aiming to reveal the conditions that a representational theory must satisfy in order to be compatible with IIT. To this end, theories of representation from both philosophy and cognitive science are analyzed with regard to their potential compatibility with IIT. The aim is not, however, to draw up a comprehensive list of all available theories of representation and their respective levels of compatibility with IIT, but to sketch a general outline of the prerequisites that IIT imposes on such representational theories. From this discussion, conclusions are drawn regarding what kind of representational theory is required for IIT to be both a necessary and a sufficient explanation of consciousness.

Information Integration Theory

Information

In order to explain how the brain generates consciousness, Tononi focuses his investigation on the characteristics of consciousness in terms of the concepts of information and integration. To this end, he makes use of a thought experiment: Imagine that you are seated in front of a screen that changes between light and dark, with instructions to report the state of the screen. In front of the screen is also a light-sensitive photodiode, connected to a circuit designed to beep when the diode detects light. For Tononi, this illustrates a core problem of consciousness: When you, as a human being, differentiate between a light and a dark screen, you have a conscious experience of the screen's status. The photodiode can also make this distinction, but hardly anyone would claim that the photodiode has any conscious experience. What, then, is the key difference between you and the photodiode? Tononi's answer is that the difference must be related to the amount of information generated in the process. This analysis is based on the classical view of information, which holds that more possible outcomes (states) yield more information: a flip of a coin corresponds to 1 bit of information (1 out of 2 alternatives), while a roll of a die generates roughly 2.59 bits of information (1 out of 6 alternatives). In this terminology, the photodiode's detection of light corresponds to 1 bit of information, while the human experience of light equates to an enormous amount of information. This is because of our ability (regardless of the experimental setup) to discriminate between a near-infinite number of different impressions. Thus, according to Tononi, the ability to handle a large amount of information must be a necessary component of a conscious system.
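The arithmetic behind these figures is standard Shannon information: a uniform choice among $n$ equally likely alternatives generates $\log_2 n$ bits.

```latex
I = \log_2 n \ \text{bits}
\qquad\Longrightarrow\qquad
I_{\text{coin}} = \log_2 2 = 1 \ \text{bit},
\quad
I_{\text{die}} = \log_2 6 \approx 2.585 \ \text{bits}
```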

Integration

However, this capacity for information handling can also be found elsewhere. Think, for instance, of a digital camera with one-megapixel capacity. It has a sensor that, in principle, corresponds to one million photodiodes. Such a device can therefore discriminate between a large number of different impressions, corresponding to one million bits of information. No one would claim that such a camera is conscious, however. What, then, is the key difference between a digital camera and human consciousness? Tononi's answer to this is integration. In a digital camera, each and every pixel is causally independent of all the other pixels. These pixels (photodiodes) do not interact in any way; each performs its own light detection regardless of the status of any other pixel. This is in contrast to the human brain: the repertoire of possible brain states cannot be subdivided into independent repertoires of its individual components. Thus, a conscious system must have a large repertoire of possible states (information), and these states must be unified: the system cannot be decomposed into a collection of causally independent subsystems (hence the necessity of integration).
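The camera/brain contrast can be made concrete with a toy measure. The sketch below uses multi-information (the summed entropies of the parts minus the joint entropy), which is zero exactly when the parts vary independently. This is only a simplified stand-in for Tononi's actual Φ measure, and all names in the code are mine.

```python
from collections import Counter
from math import log2

def entropy(states):
    """Shannon entropy (in bits) of an empirical distribution over states."""
    counts = Counter(states)
    total = len(states)
    return -sum(c / total * log2(c / total) for c in counts.values())

def integration(joint_states):
    """Toy multi-information: summed entropies of the parts minus the
    joint entropy. Zero exactly when the parts vary independently."""
    n_parts = len(joint_states[0])
    parts = sum(entropy([s[i] for s in joint_states]) for i in range(n_parts))
    return parts - entropy(joint_states)

# Camera-like system: two "pixels" that vary independently of each other
# (all four joint states occur, equally often).
camera = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Brain-like system: two units whose states are unified (they always agree),
# so the joint repertoire cannot be decomposed into independent parts.
coupled = [(0, 0), (1, 1)]

print(integration(camera))   # 0.0 bits: information, but no integration
print(integration(coupled))  # 1.0 bits: the states are genuinely unified
```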

Interactivism

Interactivism is a naturalistic theory of representation proposed by Mark H. Bickhard (1993, 2000). A foundational part of this theory is the conviction that traditional theories of representation are mistaken in explaining representation in terms of encoding. That is, they rely on elements that act as stand-ins for other elements (in the same way that the Morse code "…" is a stand-in for the letter "S"), hence borrowing their representational content from whatever they are standing in for. Bickhard's point is that using such a notion as the basis for an explanation of representation is doomed to fail, since any chain of stand-ins must be grounded at some lowest level. If an element at this lowest level is defined in terms of another element, it cannot by definition be at the lowest level, yet it must carry some kind of representational content. Put in other words: the system must already have the relevant representational content in order to have the encoding for that content. Hence, a definition of representation in terms of encodings entails that you must have a representation in order to get a representation, which of course is a vicious circularity. Bickhard's view is therefore that such explanations should be rejected.

Instead, we should opt for the more promising approach of interactivism, grounded in control theory, where an explanation of representation can be based on an analysis of a system's internal control structure in interaction with the environment. Such an interaction will generate some final state internal to the system, a state whose nature depends on the characteristics of the interaction (and hence on the properties of the environment in question). The idea is that this final state serves to implicitly categorize the class of environments that would yield that same final state if interacted with. Hence, the overall system with its possible final states functions as a differentiator of environments, with the possible final states implicitly defining differentiation categories. However, such differentiators cannot be said to carry representational content in themselves, and for a full theory of representation we must consider a differentiator in the context of a goal-directed organization. Such a goal can have associated interactive strategies within the system (strategies to reach a certain goal from the different final states produced by the differentiator). Thus, each final state indicates the potentiality of its associated interactive strategy. In other words, each environment is predicated to invoke a certain interactive strategy.

The upshot of Bickhard's theory is that these differentiated functional indications in a goal-directed system can be said to constitute representations. The main advantage of this perspective over other representational theories is that features of such representations can be detected within the system itself. For example, a case of misrepresentation entails that the predication is wrong and, if so (if the goal is not reached), the predication is falsified. This is something that is detectable and functionally available to the system itself, in contrast to traditional representational theories, where such cases must be settled by an external observer, a "fallacy of observer semantics" in Bickhard's words.
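As a minimal sketch of this architecture (all names, and the toy "pushable" environments, are my illustration, not Bickhard's own formalism): a differentiator settles into a final state, the state indicates a strategy predicated to reach the goal, and a failed strategy falsifies the predication from within the system.

```python
def differentiate(env):
    """Interaction settles the system into a final state. The state
    implicitly classifies the environment: all environments that would
    yield the same final state fall into the same category."""
    return "state_A" if env["looks_pushable"] else "state_B"

def push(env):
    """Strategy indicated by state_A: predicated to reach the goal."""
    return "goal" if env["actually_pushable"] else "failed"

def avoid(env):
    """Strategy indicated by state_B."""
    return "goal"

# Each final state indicates the potentiality of an interactive strategy.
strategies = {"state_A": push, "state_B": avoid}

def act(env):
    state = differentiate(env)
    outcome = strategies[state](env)
    # Error is detectable *within* the system: if the indicated strategy
    # fails to reach the goal, the predication is falsified internally,
    # with no external observer needed.
    misrepresentation = outcome != "goal"
    return state, outcome, misrepresentation

print(act({"looks_pushable": True, "actually_pushable": True}))
# ('state_A', 'goal', False)    -- the predication held
print(act({"looks_pushable": True, "actually_pushable": False}))
# ('state_A', 'failed', True)   -- misrepresentation, detected internally
```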

Also, this mechanism could have played an evolutionarily vital role in the development of organisms. If a system can internally indicate that some interaction X is possible in appropriate conditions, and that it can be anticipated to yield internal outcome Q if engaged in, this provides an excellent ground for action selection. This line of reasoning is developed further by Bickhard (2000):

An indication of the potentiality of an interaction, say X, is a predication that this is an X-type environment. Suppose the interaction is engaged in and the anticipated outcome is obtained. At this point, some further indications may hold, perhaps of Y. It may be that the indication system is organized such that all X-type environments are indicated to be (also) Y-type environments. […] This point also provides the clue to the representation of objects. Indicative relationships can iterate: encountering X environments can indicate Y environments, which might in turn indicate Z environments. And they can branch: encountering an X environment might indicate Q, R and S possibilities. With such iterating and branching as resources for constructing more complex representations, vast and complex webs of indications become possible. (p. 69)
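Such a web of indications can be pictured as a directed graph over indicated environment types. The node labels below follow the quoted passage; the encoding itself is merely my illustration.

```python
# Bickhard's iterating and branching indications, pictured as a directed graph.
indications = {
    "X": ["Y", "Q", "R", "S"],  # branching: X indicates several possibilities
    "Y": ["Z"],                 # iteration: X indicates Y, which indicates Z
    "Z": [],
    "Q": [], "R": [], "S": [],
}

def indicated(web, start):
    """Everything indicated, directly or via chains of indications,
    once the differentiator has settled on `start`."""
    seen, stack = set(), [start]
    while stack:
        for nxt in web[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(indicated(indications, "X"))  # {'Y', 'Z', 'Q', 'R', 'S'}
```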

All in all, this gives us a very thorough theory of representation that avoids the circularity of defining representations in terms of representations, since it does not presuppose what it sets out to explain. Nor does it require any conscious external observer to connect symbol states to features in the world, and it is therefore a good example of a naturalistic theory of representation.

Reasoning

The main claims of IIT are that consciousness is based on a system's ability to discriminate among a large repertoire of states (a high amount of information) and that these states must be unified (integration). Now, these are in fact the main characteristics of interactivism! Bickhard's differentiator mechanism discriminates between different environments, while the complex "web of indications" exemplifies just the kind of integration that is crucial for IIT. How, then, should these findings be analyzed? One way to comprehend it is:

- (IIT) Integrated information causes consciousness.
- (Interactivism) The representational mechanisms of the brain instantiate just such a system of integrated information.

__________________________________________________________

- There can be no such thing as unconscious representations.
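Schematically, the argument has the form of a simple syllogism (the predicate letters are my gloss, not notation from either theory):

```latex
\begin{align*}
&\forall x\,\bigl(\mathrm{IntInf}(x) \rightarrow \mathrm{Conscious}(x)\bigr)
  && \text{(IIT)}\\
&\forall x\,\bigl(\mathrm{Rep}(x) \rightarrow \mathrm{IntInf}(x)\bigr)
  && \text{(Interactivism)}\\
&\therefore\; \forall x\,\bigl(\mathrm{Rep}(x) \rightarrow \mathrm{Conscious}(x)\bigr)
  && \text{(no unconscious representations)}
\end{align*}
```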

Or, put another way: if we use a theory of representation which instantiates IIT and yet allows for the possibility of unconscious representation, this will contradict IIT (or at least tell us that IIT provides a necessary but not a sufficient criterion for consciousness to arise). We can summarize this line of reasoning as follows:

- The theory of representation instantiates a system of integrated information.
- The theory allows for unconscious representations.*
- Hence either the theory contradicts IIT, or integrated information is not sufficient for consciousness.

* This would presumably be the right spot for interactivism, since it relies heavily on the notion of low-level representations that are not in principle available to consciousness.

Now, is it plausible for a theory of representation to allow for unconscious representations? According to Searle's (1992) Connection Principle, for instance, all intentional states must have the potential to be conscious. In other words, this principle outlaws intentional states which are, in principle, unconscious all the time, while still allowing for the many beliefs (for instance) which are not conscious at the moment but can become conscious at any given time as they are recalled from memory.

Thus: a theory of representation which is to be combined with IIT must, if it instantiates IIT, outlaw all unconscious representations. This seems to be a much too rigorous condition, and it leads to the conclusion that a theory that wishes to be combined with IIT cannot in itself instantiate IIT (or, alternatively, must include mechanisms that exclude temporarily unconscious representations from this instantiation). Furthermore, if we are still convinced that such a theory of representation is correct, we will have no choice but to deem IIT insufficient as an explanation of consciousness. It may very well pose necessary conditions for a conscious system, but in this case something else is needed for a full-fledged explanation.

References

Bickhard, M. H. (1993). Representational content in humans and machines. Journal of Experimental & Theoretical Artificial Intelligence, 5(4), 285–333.

Bickhard, M. H. (2000). Information and representation in autonomous agents. Cognitive Systems Research, 1(2), 65–75.

Searle, J. R. (1992). The Rediscovery of the Mind. MIT Press.

Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42. doi:10.1186/1471-2202-5-42

Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. The Biological Bulletin, 215(3), 216–242.