Persons: Embodied Computational Systems

Persons as Computational Systems, Embodied, and as Possessing Consciousness


The reduction that needs to be made conceptually in order for models of cognitive abilities to apply to persons has not, at the present time, been made (or, if there is a clear picture, I have not yet come across it while studying cognitive science). Thus, in this paper I am going to do the conceptual work necessary so that such a picture can be painted (I’m cleaning brushes and mixing colors). My method will be aimed at answering this question: given two assumptions that seem correct about the majority of approaches used to frame research in cognitive science today, how do the broad approaches to, and theories of the concrete implementation of, cognition fare? The two assumptions I have chosen are: 1) it is possible to adequately establish an identity between computational processes and cognitive abilities & 2) personhood is applicable to any system with a specifiable set of cognitive abilities, and only to systems with those cognitive abilities. These assumptions seem to be common, neutral and useful for the kind of conceptual reduction one would want from a developed science of the mind.

With these assumptions in mind, the rest of my paper proceeds as follows: 1) I evaluate Marr’s tri-level theory of explanation; 2) explain three broad approaches to computationalism and three theories of the concrete implementation of cognition; 3) briefly present a problem for one of those theories of concrete implementation; 4) introduce a non-computational approach to cognition (embodied cognition) to supplement computationalism and explain how it presents a problem for the second plausible theory of concrete implementation; 5) introduce at least one part of personhood which will not be captured even by a hybrid computational/non-computational theory; and 6) suggest a methodology of discourse for assessing theories that attempt to incorporate those difficult elements of persons into a general account that reduces persons to constitutive cognitive abilities.

1)

The division of explanation that David Marr came up with is worth reiterating to get an idea of the methodology for studying cognitive abilities; I will present his division of explanation first, followed by a criticism of its applicability to connectionist networks. My purpose in presenting this argument is to show why the division of explanation that Marr envisioned ought to be discarded if models of cognitive abilities are to apply to persons.

Marr writes that three levels must each be understood in order to understand all there is to understand about how a machine carries out an information[1] processing task. (A toy illustration follows the list below.)

  1. First, there is the computational theory level. Answering Marr’s question “What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?” will, he argues, supply one part of the explanation of what a machine is doing when it carries out an information processing task.
  2. Second, there is the representation and algorithm level. Answering Marr’s questions “How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?” will, he argues, supply a second part of the explanation.
  3. Finally, there is the hardware implementation level. Answering Marr’s final question, “How can the representation and algorithm be realized physically?” will, he argues, supply the last piece of the explanation one would need in order to understand what a machine is doing when it carries out an information processing task[2].
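To fix intuitions, here is a toy illustration of the three levels using integer addition (my own example, not Marr's). The computational level fixes what is computed and why; the algorithmic level fixes a representation and a procedure; the implementation level is whatever physical substrate realizes the procedure; here, a Python interpreter on silicon, though neurons or pen and paper would do just as well.

```python
# Computational level: the task is f(x, y) = x + y (what is computed, and why).
# Algorithmic level: one choice of representation (little-endian digit
# lists, base 10) and one procedure (schoolbook column addition).
def column_add(xs, ys):
    out, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        d = (xs[i] if i < len(xs) else 0) + (ys[i] if i < len(ys) else 0) + carry
        out.append(d % 10)     # write the current digit
        carry = d // 10        # carry into the next column
    if carry:
        out.append(carry)
    return out

# Implementation level: the physical machine actually executing the algorithm.
assert column_add([7, 2], [5, 9]) == [2, 2, 1]   # 27 + 95 = 122
```

The same function could be computed by a different algorithm (say, repeated increment), and the same algorithm realized in a different medium, which is precisely why Marr took the levels to be analyzable independently.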

P.S. Churchland and T.J. Sejnowski, two connectionist theorists[3], write:

In Marr’s view, a higher level was independent of the levels below it, and hence computational problems could be analyzed independent of an understanding of the algorithm that executes the computation, and the algorithmic problem could be solved independent of an understanding of the physical implementation. Marr’s assessment of the relations between levels has been re-evaluated, and the dependence of higher levels on lower levels has come to be recognized.

Churchland and Sejnowski’s entire discussion is important, but the highlights from their paper “Neural Representation and Neural Computation” that pertain to this subject are two: 1) “Network models are not independent of either the computational level or the implementational level; they depend in important ways on constraints from all levels of analysis.” And 2) “…the notion that there are basically three levels of analysis also begins to look questionable. If we examine more closely how the three levels of analysis are meant to map on to the organization of the nervous system, the answer is far from straightforward.”[4]

Upon reading this one may think that there is nothing wrong with Marr’s divisions, but rather that there is something wrong with connectionist models[5]—that there is something about these models in particular that necessitates reducing these levels of explanation into one another. On this view, 1) there are no more abstract principles which motivate one to accept that connectionist models are the right kind of computational models of cognitive abilities, and 2) there are no independent reasons to accept the reduction of the levels of explanation.

As for the first point one might press against Churchland and Sejnowski: I am in no position to convince anyone that connectionist networks are the right kind of models for cognitive abilities, but I should note that it will not be possible to answer Marr’s third question (“How can the representation and algorithm be realized physically?”) without being constrained in some way by biological data (at least if the general goal is to identify persons with the cognitive abilities in question). Connectionist networks of the kind that Churchland and Sejnowski advocate aim to describe actual neural systems or some other biochemical level (reaching as far as “the whole central nervous system[6]”).

As for the second point (that the reduction of the levels of explanation is unmotivated), I would point out that the explanatory power of computationalism depends upon the identity holding between cognition and computation, and thus that the more levels of irreducible explanation there are on the computational side of this identity thesis, the less explanation is given by any particular level being fully explained. An additional burden for this identity thesis is that more conditions for counting as a cognitive process must be satisfied before a computation can be identified with a cognitive process. This trade-off may be desirable, but the claim that the levels of explanation are in some principled way fundamentally distinct and irreducible threatens the claim that once all levels of explanation are given, the entirety of the explanation is given. If the explanations are irreducible and the cognitive process is not identical with any one of them, why would it be identical with the explanations given in conjunction? These levels of explanation should have no additive properties if they are conceptually distinct or categorically different, and yet they must have those additive powers if the sum of the explanations is to be identical to the cognitive process. One can conclude from this that either a cognitive process is reducible in a way that does not make the explanations conceptually distinct (or categorically different), or computationalism is wrong. I have assumed without argument that computationalism is not wrong[7], and thus the levels are reducible in a nomologically or categorically neutral way.

So far I have presented a division of explanation that was quite important in motivating methods for researching, modeling and explaining the mind (Marr’s theory of explanation), given an argument against the adequacy of this division (via P.S. Churchland and T.J. Sejnowski), and given a conceptual argument for the reducibility of explanatory levels that does not depend on the connectionist approach being correct but rather on computationalism more generally being true. Next it will be necessary to present the methods for fleshing out computationalism, as well as the pros and cons of these approaches[8]. The three broad approaches I will be explaining are the classical approach, the connectionist approach and the approach from computational neuroscience. Following these will be three theories of what concrete computation is, and then problems for two of the broad approaches[9].

2)

  1. The classical approach “downplay[s] the relevance of neuroscience to the theory of cognition,” models cognitive capacities using computer programs, and is thus able to model “higher level cognitive capacities such as problem solving, language processing, and language-based inference.”
  2. The connectionist approach “downplays the analogy between cognitive systems and digital computers in favor of computational explanations of cognition that are “neurally inspired,” uses neural networks to model cognitive capacities, and is “constrained by psychological, as opposed to neurophysiological and neuroanatomical data.” These networks are thus able to “exhibit cognitive capacities such as perception, motor control, learning, and implicit memory.”
  3. Finally, the computational neuroscience approach “downplays the analogy between cognitive systems and digital computers even more than the connectionist tradition.” Its networks are aimed at describing “the actual neural systems such as (parts of) the hippocampus, cerebellum, or cortex, and are constrained by neurophysiological and neuroanatomical data in addition to psychological data.” These networks model how the real nervous system may exhibit “cognitive capacities, especially perception, motor control, learning, and implicit memory.”[10]

In addition to the conceptual approaches mentioned above, computationalists can be categorized according to which of three theories they endorse for specifying what concrete computation is. These are the causal, semantic and mechanistic theories of implementation[11].

  1. According to the causal theory, computational state transitions supervene on physical state changes (see the sketch after this list).
  2. According to the semantic theory, “computation is the processing of representations—or at least, the processing of appropriate representations in appropriate ways.[12]” In more detail:
    1. In addition to the causal account’s requirement that a computational description mirror the causal structure of a physical system, the semantic account adds a semantic requirement. Only physical states that qualify as representations may be mapped onto computational descriptions, thereby qualifying as computational states. If a state is not representational, it is not computational either[13].
  3. Finally, the mechanistic theory states that “computational explanation is a species of mechanistic explanation; concrete computing systems are functionally organized mechanisms of a special kind—mechanisms that perform concrete computations.[14]” In detail, relating the accounts:
    1. In addition to the causal account’s requirement that a computational description mirror the causal structure of the physical system, the mechanistic account adds a requirement about the functional organization of the system. Only physical states that have a specific functional significance within a specific type of mechanism may be mapped onto computational descriptions, thereby qualifying as computational states. If a state lacks the appropriate functional significance, it is not a computational state[15].
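To make the causal account's mirroring requirement concrete, here is a minimal sketch (a toy example of my own, not drawn from Piccinini): a physical system and a computational description are each given as state-transition maps, and an interpretation counts as an implementation only if it commutes with the transitions.

```python
# Toy illustration of the causal account (hypothetical example).
# A mapping from physical states to computational states implements a
# computation only if physical transitions mirror computational ones.

physical = {"lo": "hi", "hi": "lo"}   # e.g. a flip-flop's dynamics
computational = {0: 1, 1: 0}          # iterated logical NOT
interpretation = {"lo": 0, "hi": 1}   # physical state -> computational state

def mirrors(phys, comp, interp):
    """True iff interp(phys(s)) == comp(interp(s)) for every state s."""
    return all(interp[phys[s]] == comp[interp[s]] for s in phys)

print(mirrors(physical, computational, interpretation))  # True
```

The semantic and mechanistic accounts then add requirements that no such bare mapping can capture: that the mapped states be representations, or that they have the right functional significance within a mechanism.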

3)

I have now given three broad approaches to computationalism and three theories of the concrete implementation of cognition. It should be noticed that, as far as the semantic and the mechanistic theories have been specified, both seem to be compatible with assumption 2): personhood is applicable to any system with a specifiable set of cognitive abilities, and only to systems with those cognitive abilities.[16] However, it may be that the mechanistic theory is not.

One of the most noteworthy cons of the mechanistic approach is also one of its strengths: medium-independence. Because the rules which define the functions that describe, explain and model cognitive abilities in the mechanistic theory will operate in the same way just so long as the properties that are relevant for the computation are present, some computations (maybe even all of them) can be implemented in multiple physical media “provided the media possess a sufficient number of dimensions of variation (or degrees of freedom) that can be appropriately accessed and manipulated and that the components of the mechanism are functionally organized in the appropriate way.”[17]
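Here is a minimal sketch of what medium-independence amounts to (my own toy illustration): the same abstract computation (XOR, say) is realized over two different "media," each of which merely supplies two distinguishable states that can be accessed and manipulated in the right way.

```python
# Toy illustration of medium-independence (hypothetical example): one
# abstract computation, XOR, realized over two different physical "media".

def make_xor(zero, one):
    """Build an XOR over any medium with two distinguishable states."""
    def xor(a, b):
        return one if (a == one) != (b == one) else zero
    return xor

voltage_xor = make_xor(zero=0.0, one=5.0)              # states as voltage levels
spin_xor = make_xor(zero="spin-down", one="spin-up")   # states as spin labels

assert voltage_xor(0.0, 5.0) == 5.0
assert spin_xor("spin-up", "spin-up") == "spin-down"
```

Nothing about the computation itself privileges one medium over the other, which is exactly what worries anyone who wants personhood-constituting computations to be realizable only in human systems.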

This may seem problematic[18], especially if one is concerned that the computations which identify personhood ought not to be realizable in any non-human systems. However, the mechanistic theory may be the best concrete implementational account of cognition available at the present time:[19] it may be compatible with assumption 2) after all, despite medium-independence. In order to be compatible with assumption 2), the mechanistic theory may need to take into account non-computational theories of the concrete implementation of cognition. If it can take such theories into account (and these theories capture parts of cognition that seem essential to personhood), while the semantic approach cannot, that will be one point in its favor.

Next I am going to present a cognitive ability that proves to be a problem for the semantic approach and introduce an approach to cognition that may help in specifying those sets of abilities which we would want to be identified with personhood. In doing so I hope to 1) show how medium-independence for the mechanistic approach may be a good thing & 2) undercut some of the explanatory force given by the semantic/representational approach.

4)

The cognitive ability in question comes from the research program[20] “embodied cognition,” which “seeks to replace, revise, or at least upset the reigning cognitivist conception of mind according to which cognitive processes involve computations over symbolic representations.[21]” One of the strands of research in embodied cognition, called replacement, asks the following question: “how does the body and its interactions produce cognitive abilities that do not require for their explanation any of the traditional sources of cognitivism[22]?” The second of two general answers given is the kind I will be focusing on: “When cognition is understood as emerging from the dynamic interactions of the body, environment, and brain, there is no time, nor need, for the construction and manipulation of representations.” The specific ability I have in mind is one performed by a robot, but it seems like a very human cognitive ability: choosing which objects to catch and which to avoid as they fall from the sky. Because there are so many technical details to dynamical systems modeling, I will simply point out that the mathematical description of the system is sufficient for modeling the system’s behavior completely (in the case of the robot that performs this cognitive ability, sixteen differential equations provide a complete description of what the state of the system “will be at any future time given a description of the system’s state at the present time[23]”). It is worth stating, however, that this description is just that: a description of behavior. Andy Clark, Shapiro suggests, is challenging the dynamical systems approach “to move beyond the neo-behaviorism implicit in their approach.” What isn’t given in the description is the cause of the behavior.
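To give a flavor of what such a description looks like, here is a drastically simplified sketch (a toy system of my own devising, nothing like the sixteen-equation model): a falling object and an agent are coupled by differential equations, and integrating them forward from the present state determines the state at any future time, with no representations constructed or manipulated along the way.

```python
# Toy dynamical-systems description (hypothetical, illustrative only):
# an object falls while an agent drifts toward it; the coupled equations
# fully determine all future states from the present one.
import numpy as np

def step(state, dt):
    """One Euler-integration step of the coupled agent-environment system."""
    obj_y, obj_x, agent_x = state
    d_obj_y = -1.0                         # object falls at constant speed
    d_obj_x = 0.0                          # ...straight down
    d_agent_x = np.tanh(obj_x - agent_x)   # agent velocity driven by the offset
    return state + dt * np.array([d_obj_y, d_obj_x, d_agent_x])

state = np.array([10.0, 3.0, 0.0])         # object high up and to the right
for _ in range(1000):                      # integrate 10 time units forward
    state = step(state, dt=0.01)
print(state)                               # agent_x has closed in on obj_x
```

Note that, exactly as the paragraph above says, this is a description of the behavior; nothing in the equations by itself explains why the behavior occurs.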

It may not be that the dynamical systems approach conclusively undercuts the semantic specification of concrete computation: after all, description does not equal explanation. But the approach to modeling cognition of which it is a part (embodied cognition) is radically different from the standard approach to computationalism and from the semantic approach specifically. The mechanical details used when modeling cognitive abilities with dynamical systems depend crucially on neurophysiological and neuroanatomical data, meaning that while traditionally considered deviant from computationalism, embodied cognition’s answer to the question of replacement has close affinities with the computational neuroscience approach to modeling and with the mechanistic theory of implementation. Psychological laws (as traditionally thought of in terms of common belief-desire psychology), on the other hand, are less applicable to the models in embodied cognition. This may just be a product of the dominant paradigm in psychology today endorsing the models given by the classical approach, and of the seeming necessity of representational transparency that comes with articulating psychological facts using the semantic theory of implementation[24].

What is interesting is that if cognition is an interactive process between “body, environment and brain” (and more specifically, if the dynamical systems approaches are able to account for the cause of behavior in their modeling), the reduction of levels of explanation necessary to identify personhood will be very nearly complete. A complete model of the dynamical system would need to provide reasons similar to Churchland and Sejnowski’s reasons for reducing Marr’s levels into one another. The more ground non-computational theories of cognition gain, the harder it is to make sense of cognition as the syntactic manipulation of symbols (the semantic approach). This is not because the representations in dynamical systems are not in the head (perhaps syntactic theory can account for the extended cognition thesis—that the mind is at least partly outside the head), but because “there is simply no time, nor need, for the construction and manipulation of representations.”[25]

If cognition is not realized by the syntactic manipulation of internal representations, theorists who endorse computationalism ought to be motivated to solve or explain the medium-independence that is a consequence of the mechanistic approach, and to favor it over the semantic theory of implementation. One possible way to explain medium-independence is to define the range of the constitutive cognitive abilities of persons in terms of the functional relationships between bodies, brains and the environment, and to accept medium-independence as necessary and useful for translating those functional relationships into the conceptual categories of persons. This first step would require 1) persons having at least three parts (bodies, brains and the environment) & 2) a number of examples of cognitive abilities described and argued to be more explanatory given that persons are the combination of bodies, brains and the environment rather than bodies alone or bodies & brains, as well as a defense of the position that the person-concept applies to persons and only persons. Second, one would need the medium-independent abstract functional characterization of any particular cognitive ability to be equivalent to the existence of some aspect of the coherent causal hierarchy of organization for the person-concept. For example: “identifying a particular object” may be one cognitive ability constitutive of the person-concept that applies to a person just in case a person’s body, brain and environment are related in the right way.

One interesting possibility would be to take on a hybrid approach where persons are body-brain-environment when arranged functionally under a sociological concept like lifeguard, but are body-brain under other sociological/psychological concepts like psychoanalytic patient. The rules for the application of these concepts might be generated by training a connectionist network in which the input layer encodes the range of variables conceptually necessary for personhood, there is a hidden layer, and the outputs are “brain-body-environment” or “brain-body.” I do not know how the backpropagation algorithm would be set up, but the decision as to whether the machine is correct or not, I believe, ought to depend upon commonly held opinion (not expert opinion).

Exactly how the functional and causal organization of the concepts of persons should be arranged therefore remains unknown, but it is perhaps possible to determine. How a simulation of this organizational/conceptual structure ought to be causally united to form, at one level, persons, at another level pairs, at another level groups, and so on, may also need to be accounted for (and thus the number of relevant concepts in the causal organization of such structures would need to be incorporated into anything which attempts to model this), but this may be a first step in bridging the gulf between computational and non-computational approaches to cognitive abilities.
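Here is a minimal sketch of the kind of network just imagined (everything here, from the four input variables to the labels and the training signal, is a hypothetical placeholder, not a worked-out proposal). The inputs stand in for variables taken to be conceptually relevant to personhood; the output classifies the applicable person-concept as "brain-body-environment" or "brain-body"; and the targets would, as suggested above, come from commonly held opinion rather than expert opinion.

```python
# A toy one-hidden-layer network trained by backpropagation
# (hypothetical placeholders throughout).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))      # 4 stand-in "personhood variables" per case
y = (X[:, 0] + X[:, 3] > 1.0).astype(float).reshape(-1, 1)  # stand-in verdicts

W1, b1 = rng.normal(0, 0.5, (4, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                    # plain backpropagation
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # squared-error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient propagated to the hidden layer
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(axis=0)
    W1 -= 0.5 * X.T @ d_h / len(X);   b1 -= 0.5 * d_h.mean(axis=0)

label = "brain-body-environment" if out[0, 0] > 0.5 else "brain-body"
print(label)   # the network's verdict for the first case
```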

5)

Supposing the mechanistic theory is improved by embodied cognition (that is, supposing that even if the program I sketched above does not work, there is some solution), I will now argue that there is at least one aspect of personhood which will remain problematic for any model. Whether this aspect of personhood is constituted in any way by certain cognitive abilities is a question for another time. The aspect of personhood that is problematic for a combined computational + non-computational approach is the phenomenon of consciousness (or attentive cognition), which is so widely accepted as a part of what it means to be a person. Without some method for accounting for this dimension of personhood in our account of cognition, the theory will remain incomplete.

Even assuming that consciousness is constituted by certain cognitive abilities, what, if anything, might account for conscious versus unconscious cognitive states? If a feature of consciousness were its medium-independence (which it must be in the mechanistic theory), one would expect misattributions of understanding and/or intelligence. Thus either 1) cognition (jointly computationally and non-computationally defined) needs to be conceptually distinguished from understanding and intelligence entirely (that is, one denies that consciousness is constituted entirely by certain cognitive abilities), or 2) consciousness—the primary and necessary ingredient for a system’s understanding and being intelligent—needs to be formulated in such a way that it is identifiable with a set of cognitive abilities. The former choice is perfectly acceptable, and many people have chosen option 1. Among those who consider option 2, some may still object that it is not possible, because any identity claim that would hold between a set of cognitive abilities and a person would fail to include understanding and intelligence (important, perhaps the most important, parts of personhood). Then there are those who believe that there is an identity between consciousness and a specifiable set of cognitive abilities. I am inclined to favor this last option: I believe it is possible to work out a compatible identity between cognitive abilities and consciousness (hopefully influenced largely by a hybrid approach to cognition which incorporates both computational and non-computational research). How might this be done?

6)

Given the often-held belief that consciousness is a privately accessible property of a cognitive state, the first step, I believe, needs to be to motivate the denial of this claim: that consciousness, like belief, is dependent on at least a pair of systems (persons with bodies, brains and environments) being causally related in specifiable ways. As a general rule, even if one does not agree with my suggested first step, methodologically I believe that it is up to the cognitive scientist to create empirically testable hypotheses which state that cognitive/non-cognitive states and kind-consciousnesses are identical, and that it is therefore the burden of the critics of these identity claims to show that the cognitive states that can be instantiated under a particular brain module etc. are not identical with consciousness. Given the diverse concepts that theorists have used to apply consciousness to persons, one would have to survey those concepts and assess their tractability, given what is desired by the identity thesis in question, in order to decide upon one group of concepts[26]. A second step toward identifying consciousness with cognitive states, I emphasize, requires one to do away with the levels of explanation Marr proposed. Once the explanatory levels are successfully reduced, information from the bottom up will have as important a part to play in deciding which identity theses between consciousness and cognitive/non-cognitive states end up being accepted and which do not.

Conclusion

I began my paper with two assumptions that seemed irrefutable given the present state of cognitive science: 1) it is possible to adequately establish an identity between computational processes and cognitive abilities & 2) personhood is applicable to any system with a specifiable set of cognitive abilities, and only to systems with those cognitive abilities.

After evaluating Marr’s tri-level theory of explanation, I argued that the levels of explanation, in addition to being unable to specify research in connectionist networks finely enough, did not in fact form distinct levels, and I gave a conceptual argument as to why they must reduce. Following this, three broad approaches to computationalism were presented, as well as three theories of concrete computation. Two of the three theories were then examined—the mechanistic and the semantic. Whether assumption 1) is in the end falsified by the inability of purely computational processes to account for cognitive abilities depends upon the possibility that successful descriptions and explanations can be given using dynamical systems—the systems seemingly able to capture cognition without the use of representations (the vehicles of content posited by the semantic account). Given the possibility that dynamical systems will both describe the cognitive abilities of a system and explain the cause of the development of those abilities, the necessity of incorporating representations in our explanations of at least some cognitive abilities seems forced. With this problem presented for the semantic account, and medium-independence presented as a problem for the mechanistic account, it seemed as though computationalism was in need of repair, or at least supplementation by some non-computational approaches to cognition.

However, even if the problems for computationalism are solved, and assumption 1) survives the embodied cognitivist replacement strategy, at least one aspect of personhood, I suggested, would persist (and would thus need to be reduced): the conscious aspect. Consciousness as an element of cognition is motivated by consciousness being so closely tied up with two other qualities of persons: intelligence and understanding. Both of these qualities seem to be constitutive of the cognitive abilities of persons. However, in the current state of cognitive science, no identity between consciousness and cognitive abilities seems ready at hand, particularly because consciousness is conceived of as only privately accessible and because of the variety of conceptions of human consciousness. The method I propose for making progress on this front is to require cognitive scientists to create empirically testable hypotheses which state that cognitive/non-cognitive states and kind-consciousness are identical, so that it is the burden of the critics of these identity claims to show that the cognitive states in question are not identical with consciousness. In order to do this, someone must undertake the task of evaluating the varieties of conceptions of human consciousness in order to find a tractable hypothesis—a task I will need to undertake at a later time.

In the end it looks as though assumption 1) comes under fire from the research field known as embodied cognition[27], while the “cognitive abilities” present in assumption 2) run into the problem of accounting for consciousness (an essential element of what it means to be a person) as constituted by cognitive abilities. It is my belief that some combination of computational and non-computational theories of the mind, when fleshed out in an ontologically rich analysis of persons at the implementation level, will answer the attacks against 1) by replacing it with assumption 1’): it is possible to adequately establish an identity between (computational & non-computational) processes and cognitive abilities. The problem present in 2), in turn, may be far harder to answer conceptually. Progress on this front depends in large part on an assessment of the theories of the implementation of cognitive abilities, specifically with the desideratum that the theory provide empirically verifiable hypotheses about when consciousness is and is not present.

I hope that the frame has now been set for painting a picture which will apply cognitive abilities to persons, that the problems in need of resolution have been clarified by my discussion of them, that assumptions 1’) and 2) can be kept to focus further research, and that the relationship between computationalism, embodiment & consciousness is now arranged neatly in conceptual space.


[1] Information may plausibly have one of two senses here: 1) information as approximately the same as natural meaning (though some kinds of computations, like arithmetic, don’t seem to carry any natural meaning); or 2) information as just ordinary semantic, non-natural meaning.

[2] I don’t have room to argue for this tri-level theory of explanation; the arguments can be found in D. Marr, “Vision,” in Philosophy of Psychology: Contemporary Readings, ed. J. L. Bermúdez, 2006, pp. 385–405. All quotes in this paragraph were taken from page 389.

[3] These theorists, as I understand them, take approach 3 (below) and endorse the mechanistic theory of concrete implementation.

[4] The paper from which all these quotes come is P. S. Churchland and T. J. Sejnowski, “Neural Representation and Neural Computation,” in Philosophy of Psychology: Contemporary Readings, ed. J. L. Bermúdez, 2006, p. 177.

[5] Or, more generally, that approach 3 (in section 2 of this paper) plus the mechanistic theory of implementation is wrong.

[6] Ibid., p. 177.

[7] It is assumed without argument because the assumption is so central to the entire enterprise of cognitive science at the present time. It may be possible to deny it, but I am in no position to do so right now. In addition, even when the non-computationalist approach is presented (later in my paper: embodied cognition), it is not clear that it will be able to replace the computationalist approach. I believe it is far more likely that it will supplement one (or some combination) of the classical, connectionist or computational neuroscience approaches, in combination with theories of concrete implementation.

[8] The pros are given in the explanation of each account, as just what the approach can or has successfully modeled. The cons are the problems that follow the explanations of the accounts as they endorse some theory of concrete implementation.

[9] I only discuss problems for the semantic and mechanistic theories 1) because the causal theory seems to face the issue of pancomputationalism (See footnote 16 for more on that) & 2) because these theories apply to the greatest number of theorists today.

[10] Gualtiero Piccinini, “Computationalism,” in The Oxford Handbook of Philosophy of Cognitive Science, ed. Eric Margolis, Richard Samuels and Stephen P. Stich, 2012, pp. 225–226. All quotes but the last come from page 225; the last comes from page 226.

[11] Because of space constraints I cannot give detailed descriptions of these theories, but descriptions can be found in Piccinini, “Computationalism,” pp. 226–232.

[12] Piccinini, “Computationalism,” p. 228.

[13] Ibid., p. 228. Usually this theory of implementation is conjoined with the classical approach, but it need not be so in principle, just as the mechanistic account is usually conjoined with the connectionist approach or the computational neuroscience approach, though it need not be so in principle.

[14] Ibid., p. 230.

[15] Ibid., p. 230.

[16] One major con of the causal approach, and the reason why I do not discuss it further, is its falling prey to what is called “pancomputationalism,” whereas the semantic and mechanistic approaches, by adding requirements, do not. To fall prey to pancomputationalism is just to fail to make clear what is special or distinct about being a computational system. According to the causal account, “digital computers perform computations in the same sense in which rocks, hurricanes, and planetary systems do” (see page 227 of The Oxford Handbook of Philosophy of Cognitive Science for more on this).

[17] Piccinini, “Computationalism,” p. 231.

[18] Ned Block’s “blockhead” conceptual argument is aimed at showing that this form of concrete computation is too permissive, specifically when it comes to cognitive states like understanding, intending or believing.

[19] I cannot argue in any detailed and systematic way for that claim in this paper, but that’s my hunch.

[20] As a research program this may be contrasted with the three approaches outlined in section 2) of this paper (though, to be more specific, replacement forms one research program while ____ and ____ form other parts).

[21] Lawrence A. Shapiro, “Embodied Cognition,” in The Oxford Handbook of Philosophy of Cognitive Science, ed. Eric Margolis, Richard Samuels and Stephen P. Stich, 2012, p. 118. Notice that embodied cognition is therefore aimed at replacing, revising or at least upsetting the semantic theory of implementation, but not the mechanistic theory of implementation.

[22] The author uses “cognitivism” where I use “the semantic theory of implementation,” but they are one and the same.

[23] Ibid., p. 138.

[24] This will be a subject for a later date. The question is: why frame functional explanations in terms of syntactic rules? Another question will be: why insist that there really is no computation without representation, and what can be said in response to counterexamples from embodied cognition?

[25] The specific lack of time, I have been told in conversation, is exemplified by the cognitive ability humans have to determine the location in space of an object responsible for a heard sound. Though I cannot cite a text I have read to support this claim, the speed at which the location is determined, and the neurofunctional data supporting it, are said to preclude the possibility that representations were constructed and manipulated.

[26] Conducting this research and determining a tractable thesis of consciousness is something I will undertake soon.

[27] It is an open question how much of computationalism is done away with, and whether it is just the semantic theory that will face the brunt of the assault. It is also an open question whether embodied cognitivists might incorporate the mechanistic theory of implementation.
