February 10, 2010


The relationship between a set of premises and a conclusion holds when the conclusion follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, one that would distinguish between valid and invalid arguments even within the sphere of necessary propositions. The search for such a stronger notion is the field of relevance logic.


From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.

But this point of view by no means embraces the whole of the actual process, for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a theory. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the truth of the theory lies.

Corresponding to the same complex of empirical data, there may be several theories which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution, rather than in providing a convincing mechanism for genetic change. Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as neo-Darwinism became the orthodox theory of evolution in the life sciences.

In the 19th century there was an attempt to base ethical reasoning on the presumed facts about evolution; the movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). The premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more primitive social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called social Darwinism emphasises the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competition and aggressive relations between people in society or between societies themselves. More recently, the relationship between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Once again, there are attempts to found psychology on evolutionary principles, in which a variety of higher mental functions may be adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E. O. Wilson. Such explanatory labels are applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.

Another assumption that is frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in complementary relation to competition. It is the complementary relationship between these forces that yields emergent self-regulating properties, properties greater than the sum of the parts, which serve to perpetuate the existence of the whole.

According to E. O. Wilson, the human mind evolved to believe in the gods, and people need a sacred narrative to have a sense of higher purpose. Yet it is also clear that the gods in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-view of science and religion. Science, for its part, said Wilson, will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments. The eventual result of the competition between the two world-views, he held, will be the secularization of the human epic and of religion itself.

Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect reality. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing reality as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide comprehensible guides to living. In this way, Man's imagination and intellect play vital roles in his survival and evolution.

Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of logical positivist approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the explanans (that which does the explaining) and the explanandum (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that the laws of planetary motion of Johannes Kepler (1571-1630) were explained by showing them deducible from Newton's laws of motion. The covering law model may be adapted to include probabilistic explanation, by showing that something is probable given a statistical law. Questions for the covering law model include querying whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); querying whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and querying whether a purely logical relationship is adapted to capturing the requirements we make of explanations. These may include, for instance, that we have a feel for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.

The argument to the best explanation is the view that once we can select the best of any competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
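The coin example can be made quantitative. A short sketch (using only the figures given above: 530 heads in 1,000 tosses, and the two candidate hypotheses p = 0.53 and p = 0.5) compares how well each hypothesis explains the data:

```python
from math import comb, exp, log

def log_binom_pmf(k, n, p):
    # Log-probability of exactly k heads in n tosses with heads-probability p.
    # Working in log space avoids floating-point underflow for large n.
    return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

k, n = 530, 1000
log_l_biased = log_binom_pmf(k, n, 0.53)  # hypothesis: coin biased at 0.53
log_l_fair = log_binom_pmf(k, n, 0.50)    # hypothesis: coin is fair

ratio = exp(log_l_biased - log_l_fair)
print(f"likelihood ratio (biased vs. fair): {ratio:.1f}")  # about 6
```

The biased hypothesis explains the data only about six times better than fairness, a modest advantage that is easily outweighed by the antecedent improbability of the coin happening to be biased at exactly 0.53, which is the point of the qualification above.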

The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much of the philosophy of the 20th century has been informed by the belief that philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the basis of the division between syntax and semantics, and the problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.

On this conception, to understand a sentence is to know its truth-conditions. The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts contextually performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
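This compositional picture can be sketched in miniature. The toy model below is invented purely for illustration (the vocabulary, referents, and extensions are assumptions, not claims about any actual semantic theory): singular terms are assigned referents, predicates are assigned extensions, and a sentence-forming operator contributes a function on truth-values, so the truth-condition of a complex sentence is computed from the semantic values of its parts.

```python
# Toy truth-conditional semantics. The model and vocabulary are invented
# for the example; the point is only the compositional mechanism.

reference = {"London": "london", "Paris": "paris"}   # singular terms -> objects
extension = {"is beautiful": {"london", "paris"},    # predicates -> sets of objects
             "is small": set()}

def atomic(term, predicate):
    # An atomic sentence is true iff the term's referent falls within
    # the predicate's extension.
    return reference[term] in extension[predicate]

def conj(p, q):
    # A sentence-forming operator ("and") contributes a function from the
    # truth-values of its operand sentences to a truth-value.
    return p and q

print(atomic("London", "is beautiful"))   # True
print(conj(atomic("London", "is beautiful"),
           atomic("Paris", "is small")))  # False
```

The value of the conjunction depends only on the semantic values of its constituent sentences, which is exactly the sense in which meaning here is a function of the meanings of the constituents.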

The theorist of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom "'London' refers to the city in which there was a huge fire in 1666" is a true statement about the reference of 'London'. It is a consequence of a theory which substitutes this axiom for the corresponding axiom of our simple truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name 'London' without knowing that last-mentioned truth-condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on the theorist of meaning as truth-conditions to state this requirement in a way which does not presuppose any previous, non-truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.

Since the content of a claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth-conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its conceptual claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition p, it is true that p if and only if p. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A. J. Ayer, the later Wittgenstein, Quine, Strawson, and Horwich, and (confusingly and inconsistently, if this article is correct) by Frege himself. But is the minimal theory correct?

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence; but in fact, it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as "'London is beautiful' is true if and only if London is beautiful" can be explained are the fact that 'London' refers to London and the fact that 'is beautiful' is true of beautiful things. This would be a pseudo-explanation only if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.

Counterfactual conditionals are sometimes known as 'subjunctive conditionals', insofar as a counterfactual conditional is a conditional of the form 'if p were to happen, q would', or 'if p were to have happened, q would have happened', where the supposition of p is contrary to the known fact that not-p. Such assertions are nevertheless useful: 'if you had broken the bone, the X-ray would have looked different', or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever p is false, so there would be no division between true and false counterfactuals.

Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: 'if you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble'. In other contexts there is a big difference: 'if Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone else would have' is most probably false.

The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether q is true in the most similar possible worlds to ours in which p is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of the same laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and categorizing them as counterfactuals or not may be of limited use.

A conditional is any proposition of the form 'if p then q'. The condition hypothesized, p, is called the antecedent of the conditional, and q the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either not-p or q. Stronger conditionals include elements of modality, corresponding to the thought that if p is true then q must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether this flexibility should be represented semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.

We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language C. S. Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. For Peirce, the import of a theoretical sentence is only that of a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including for example belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparently subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F. C. S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart' or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.

Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connexion with success in action on the other. One way of cementing the connexion is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and continues to play an influential role in the theory of meaning and of truth.

Functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion: it could be implicitly defined by these axioms. Functionalism is often compared with descriptions of a computer, since mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or realization of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states both in ourselves and in others, which is via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds.
It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal structure may be very different from our own. It may then seem as though beliefs and desires can be variably realized in causal architecture, just as much as they can be in different neurophysiological states.
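The software/hardware analogy invoked above can be made concrete with a small sketch of multiple realizability. Everything here is invented for illustration: two "realizations" with entirely different internals occupy the same functional role, a role fixed purely by input/output relations, just as functionalism defines a mental state by its causal role rather than by what it is made of.

```python
# Multiple realizability: the same functional role, two different
# underlying "hardwares". Names and classes are hypothetical examples.

class CounterA:
    # Realization 1: the state is a stored integer.
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1
    def value(self):
        return self.n

class CounterB:
    # Realization 2: the state is a growing list. Different internals,
    # same causal role.
    def __init__(self):
        self.items = []
    def bump(self):
        self.items.append(None)
    def value(self):
        return len(self.items)

def plays_counter_role(c):
    # The role is defined by what the state does (its input/output
    # relations), not by how it is physically realized.
    c.bump()
    c.bump()
    return c.value() == 2

print(plays_counter_role(CounterA()))  # True
print(plays_counter_role(CounterB()))  # True
```

Both realizations satisfy the same role-description, which is the functionalist's point; the critic's worry is that such role-satisfaction may be too easy, counting things as minded that plainly are not.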

The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality, and an equally American distrust of abstract theories and ideologies.

The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.

The Association for International Conciliation first published William James's pacifist statement, The Moral Equivalent of War, in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism, a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar in the statement represent standards of the time.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning, in particular the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called brittle exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life (morality and religious belief, for example) are leaps of faith. As such, they depend upon what he called the will to believe and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist, someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has since been renewed in the classic pragmatists (Peirce, James, and Dewey) as an alternative to Rorty's interpretation of the tradition.

The philosophy of mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.

The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.

In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of an infinite number of distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.

Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or, with respect to the soul and the reality of God, agnosticism.

The 18th-century German philosopher Immanuel Kant published his influential work The Critique of Pure Reason in 1781. Three years later, he expanded on his study of the modes of thinking with an essay entitled What is Enlightenment? In this 1784 essay, Kant challenged readers to dare to know, arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.

Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.

These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience strictly by knowledge to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.

Some of Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism in opposition to Kant's critical transcendentalism.

Since the formation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analyzed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analyzed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined.
The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.

In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as "Nothing exists except material particles" and "Everything is part of one all-encompassing spirit" cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning relevant to human hopes and feelings.

The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable or at least connected with something observable and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual's relationship to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid whether or not its results can be verified objectively.

Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. Metaphysics has been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.

In the 17th century, French philosopher René Descartes proposed that only two substances ultimately exist: mind and body. Yet, if the two are entirely distinct, as Descartes believed, how can one substance interact with the other? How, for example, is the intention of a human mind able to cause movement in the person's limbs? The issue of the interaction between mind and body is known in philosophy as the mind-body problem.

Many fields other than philosophy share an interest in the nature of mind. In religion, the nature of mind is connected with various conceptions of the soul and the possibility of life after death. In many abstract theories of mind there is considerable overlap between philosophy and the science of psychology. Once part of philosophy, psychology split off and formed a separate branch of knowledge in the 19th century. While psychology uses scientific experiments to study mental states and events, philosophy uses reasoned arguments and thought experiments in seeking to understand the concepts that underlie mental phenomena. Also influenced by philosophy of mind is the field of artificial intelligence, which endeavours to develop computers that can mimic what the human mind can do. Cognitive science attempts to integrate the understanding of mind provided by philosophy, psychology, AI, and other disciplines. Finally, all of these fields benefit from the detailed understanding of the brain that has emerged through neuroscience in the late 20th century.

Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.

Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds (our sensations, thoughts, memories, desires, and fantasies) in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.

Certain mental phenomena, those we generally call experiences, have a subjective nature; that is, they have certain characteristics we become aware of when we reflect. For instance, there is something it is like to feel pain, to have an itch, or to see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.

Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other thoughts or objects, which are represented as having certain properties or as being related to one another in a certain way. The belief that London is west of Toronto, for example, is about London and Toronto and represents the former as west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.

The contrast between the subjective and the objective is made in both the epistemic and the ontological domains. In the epistemic domain, it is often identified with the distinction between the intrapersonal and the interpersonal, or with that between matters whose resolution depends on the psychology of the person in question and matters that are not thus dependent, or, sometimes, with the distinction between the biased and the impartial. Thus an objective question might be one answerable by a method usable by any competent investigator, while a subjective question would be answerable only from the questioner's point of view. In the ontological domain, the subjective-objective contrast is often between what is and what is not mind-dependent: secondary qualities, e.g., colours, have been thought subjective because they vary with observation conditions. The truth of a proposition, for instance (apart from certain propositions about oneself), would be objective if it is independent of the perspective, especially the beliefs, of those judging it. Truth would be subjective if it lacks such independence, because, say, it is a construct from justified beliefs, e.g., those well-confirmed by observation.

Either notion of objectivity may be taken as basic, with the other derived from it. If the epistemic notion is basic, the criteria for objectivity in the ontological sense derive from considerations of justification: an objective question is one answerable by a method that yields (adequate) justification, and objective truth is a matter of amenability to such a method. If, on the other hand, the ontological notion is basic, the criteria for an interpersonal method and its objective use are a matter of its mind-independence and its tendency to lead to objective truth, perhaps by applying to external objects and yielding predictive success. Since the use of these criteria requires employing the methods which, on the epistemic conception, define objectivity (most notably scientific methods), while no similar dependence obtains in the other direction, the epistemic notion is often taken as basic.

A different theory of truth, the epistemic theory, is motivated by the desire to avoid negative features of the correspondence theory. Compare the cosmological argument for the existence of God, whose premises are that all natural things are dependent for their existence on something else, and that the totality of dependent beings must then itself depend upon a non-dependent, or necessarily existent, being, which is God. The God that ends the regress must exist necessarily; it must not be an entity of which the same kinds of questions can be raised. The problem with such an argument is that it affords no reason for attributing concern and care to the deity, nor for connecting the necessarily existent being it derives with human values and aspirations.

The epistemic theory presents truth as that which is licensed by our best theory of reality; truth is thus a function of our thinking about the world. An obvious problem with this is the fact of revision: theories are constantly refined and corrected. To deal with this objection, truth is identified with the ideal end of enquiry. We never in fact reach that end, but it serves as a regulative ideal, an asymptotic limit of enquiry. Nonetheless, the epistemic theory of truth is not antipathetic to ontological relativity, since it has no commitment to the ultimate furniture of the world, and it is also open to the possibility of some kinds of epistemological relativism.

In epistemology, the subjective-objective contrast arises above all for the concept of justification and its relatives. Externalism, particularly reliabilism, construes justification objectivistically, since for reliabilism truth-conduciveness (non-subjectively conceived) is central to justified belief. Internalism may or may not construe justification subjectivistically, depending on whether the proposed epistemic standards are interpersonally grounded. There are also various kinds of subjectivity: justification may, for example, be grounded in one's carefully considered standards or simply in what one believes to be sound. On the former view, beliefs are justified when they accord with precise, explicitly considered standards, whether or not one takes them to be justified.

Any conception of objectivity may treat one domain as fundamental and the others as derivative. Thus, objectivity for methods (including sensory observation) might be thought basic. Let an objective method be one that (1) is interpersonally usable and tends to yield justification regarding the questions to which it applies (an epistemic conception), or (2) tends to yield truth when properly applied (an ontological conception), or (3) both. Then an objective person is one who appropriately uses objective methods, an objective statement is one appraisable by an objective method, an objective discipline is one whose methods are objective, and so on. Typically, those who conceive objectivity epistemically tend to take methods as fundamental, and those who conceive it ontologically tend to take statements as basic.

A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.

Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.

Many mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Minds are sometimes referred to as consciousness, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.

Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of French philosopher René Descartes in the 17th century. Descartes' work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.

Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things (bodies and minds) are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.

For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, a person's intentions may cause that person's limbs to move; in this way, the mind can affect the body. In addition, light, pressure, and sound from external sources stimulate a person's sense organs, which in turn affect the brain, thereby affecting mental states; thus the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, and is known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connexion between mind and body more closely resembles two substances that have been thoroughly mixed together.

In response to the mind-body problem arising from Descartes' theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.

Most philosophers of mind today are substance monists of a third type: They are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.

Materialists who are property monists believe that there is ultimately only one type of property, although they disagree on whether or not mental properties exist in material form. Some property monists, known as reductive materialists, hold that mental properties exist simply as a subset of relatively complex and non-basic physical properties of the brain. Reductive materialists have the problem of explaining how the physical states of the brain can be inwardly accessible and have a subjective character, as mental states do. Other property monists, known as eliminative materialists, consider the whole category of mental properties to be a mistake. According to them, mental properties should be treated as discredited postulates of an out-moded theory. Eliminative materialism is difficult for most people to accept, since we seem to have direct knowledge of our own mental phenomena by introspection and because we use the general principles we understand about mental phenomena to predict and explain the behaviour of others.

Philosophy of mind concerns itself with a number of specialized problems. In addition to the mind-body problem, important issues include those of personal identity, immortality, and artificial intelligence.

During much of Western history, the mind has been identified with the soul as presented in Christian theology. According to Christianity, the soul is the source of a person's identity and is usually regarded as immaterial; thus it is capable of enduring after the death of the body. Descartes' conception of the mind as a separate, nonmaterial substance fits well with this understanding of the soul. In Descartes' view, we are aware of our bodies only as the cause of sensations and other mental phenomena. Consequently, our personal essence is composed more fundamentally of mind, and the preservation of the mind after death would constitute our continued existence.

The mind conceived by materialist forms of substance monism does not fit as neatly with this traditional concept of the soul. With materialism, once a physical body is destroyed, nothing enduring remains. Some philosophers think that a concept of personal identity can be constructed that permits the possibility of life after death without appealing to separate immaterial substances. Following in the tradition of 17th-century British philosopher John Locke, these philosophers propose that a person consists of a stream of mental events linked by memory. It is these links of memory, rather than a single underlying substance, that provide the unity of a single consciousness through time. Immortality is conceivable if we think of these memory links as connecting a later consciousness in heaven with an earlier one on earth.

The field of artificial intelligence also raises interesting questions for the philosophy of mind. People have designed machines that mimic or model many aspects of human intelligence, and there are robots currently in use whose behaviour is described in terms of goals, beliefs, and perceptions. Such machines are capable of behaviour that, were it exhibited by a human being, would surely be taken to be free and creative. As an example, in 1996 an IBM computer named Deep Blue won a chess game against Russian world champion Garry Kasparov under international match regulations. Moreover, it is possible to design robots that have some sort of privileged access to their internal states. Philosophers disagree over whether such robots truly think or simply appear to think and whether such robots should be considered to be conscious.

Dualism, in philosophy, the theory that the universe is explicable only as a whole composed of two distinct and mutually irreducible elements. In Platonic philosophy the ultimate dualism is between being and nonbeing-that is, between ideas and matter. In the 17th century, dualism took the form of belief in two fundamental substances: mind and matter. French philosopher René Descartes, whose interpretation of the universe exemplifies this belief, was the first to emphasize the irreconcilable difference between thinking substance (mind) and extended substance (matter). The difficulty created by this view was to explain how mind and matter interact, as they apparently do in human experience. This perplexity caused some Cartesians to deny entirely any interaction between the two. They asserted that mind and matter are inherently incapable of affecting each other, and that any reciprocal action between the two is caused by God, who, on the occasion of a change in one, produces a corresponding change in the other. Other followers of Descartes abandoned dualism in favor of monism.

In the 20th century, reaction against the monistic aspects of the philosophy of idealism has to some degree revived dualism. One of the most interesting defences of dualism is that of Anglo-American psychologist William McDougall, who divided the universe into spirit and matter and maintained that good evidence, both psychological and biological, indicates the spiritual basis of physiological processes. French philosopher Henri Bergson in his great philosophic work Matter and Memory likewise took a dualistic position, defining matter as what we perceive with our senses and possessing in itself the qualities that we perceive in it, such as colour and resistance. Mind, on the other hand, reveals itself as memory, the faculty of storing up the past and utilizing it for modifying our present actions, which otherwise would be merely mechanical. In his later writings, however, Bergson abandoned dualism and came to regard matter as an arrested manifestation of the same vital impulse that composes life and mind.


For many people, understanding the place of mind in nature is the greatest philosophical problem. Mind is often thought to be the last domain that stubbornly resists scientific understanding, and philosophers differ over whether they find that cause for celebration or scandal. The mind-body problem in the modern era was given its definitive shape by Descartes, although the dualism that he espoused appears in some form wherever there is a religious or philosophical tradition in which the soul may have an existence apart from the body. While most modern philosophers of mind would reject the imaginings that lead us to think that this makes sense, there is no consensus over the best way to integrate our understanding of people as bearers of physical properties with our understanding of them as subjects of mental lives.

Occasionalism is the term employed to designate the philosophical system devised by the followers of the 17th-century French philosopher René Descartes, who, in attempting to explain the interrelationship between mind and body, concluded that God is the only cause. The occasionalists began with the assumption that certain actions or modifications of the body are preceded, accompanied, or followed by changes in the mind. This assumed relationship presents no difficulty to the popular conception of mind and body, according to which each entity is supposed to act directly on the other; these philosophers, however, asserting that cause and effect must be similar, could not conceive the possibility of any direct mutual interaction between substances as dissimilar as mind and body.

According to the occasionalists, the action of the mind is not, and cannot be, the cause of the corresponding action of the body. Whenever any action of the mind takes place, God directly produces in connexion with that action, and by reason of it, a corresponding action of the body; the converse process is likewise true. This theory did not solve the problem, for if the mind cannot act on the body (matter), then God, conceived as mind, cannot act on matter. Conversely, if God is conceived as other than mind, then he cannot act on mind. A proposed solution to this problem was furnished by exponents of radical empiricism such as the American philosopher and psychologist William James. This theory disposed of the dualism of the occasionalists by denying the fundamental difference between mind and matter.

Generally, perception of an external world depends upon normal visual experience: an organism deprived of such experience does not come to perceive the world accurately. In a classic experiment, researchers reared kittens in total darkness, except that for five hours a day the kittens were placed in an environment with only vertical lines. When the animals were later exposed to horizontal lines and forms, they had trouble perceiving these forms.

In the theory of probability, the Cambridge mathematician and philosopher Frank Ramsey (1903-30) was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a redundancy theory of truth, which he combined with radical views of the function of many kinds of propositions. Neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts; rather, each has a different, specific function in our intellectual economy.

A Ramsey sentence is generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., quark, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for every theoretical term, the sentence gives the topic-neutral structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
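The construction can be shown schematically. Writing θ(quark) for the conjunction of everything the theory affirms using the term quark, the Ramsey sentence replaces the term with a bound variable (a standard textbook rendering, not drawn from Ramsey's own notation):

```latex
% The theory, with the theoretical term 'quark' occurring in it:
\theta(\mathrm{quark})

% Its Ramsey sentence: replace the term by a variable and
% existentially quantify into the result:
\exists x\, \theta(x)
```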

Formally, probability is a non-negative, additive set function whose maximum value is unity. What is harder to understand is the application of the formal notion to the actual world. One point of application is statistical: when kinds of events or trials (such as the tossing of a coin) can be described, and the frequency of occurrence of particular outcomes (such as the coin falling heads) is measurable, then we can begin to think of the probability of that kind of outcome in that kind of trial. One account of probability is therefore the frequency theory, associated with Venn and Richard von Mises (1883-1953), which identifies the probability of an event with such a frequency of occurrence. A second point of application is the description of a hypothesis as probable when the evidence bears a favoured relation to it. When this relation is conceived of as purely logical in nature, as in the works of Keynes and Carnap, probability statements are not empirical measures of frequency, but represent something like partial entailments or measures of the possibilities left open by the evidence and by the hypothesis.
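The formal notion can be stated compactly; the following is a standard (Kolmogorov-style) rendering of a non-negative, additive set function with maximum value unity, together with the frequency theory's identification of probability with limiting relative frequency:

```latex
% Probability as a non-negative, additive set function with maximum 1:
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P(A \cup B) = P(A) + P(B) \quad \text{whenever } A \cap B = \varnothing

% The frequency theory: probability as a limiting relative frequency,
% where n_A counts occurrences of outcome A in n trials:
P(A) = \lim_{n \to \infty} \frac{n_A}{n}
```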

Formal confirmation theories and range theories of probability are developments of this idea. The third point of application is in the use probability judgements have in regulating the confidence with which we hold various expectations. This approach, sometimes called subjectivism or personalism but more commonly known as Bayesianism, is associated with de Finetti and Ramsey, both of whom see probability judgements as expressions of a subject's degree of confidence in an event or kind of event, and attempt to describe constraints on the way we should have degrees of confidence in different judgements that explain those judgements having the mathematical form of judgements of probability. For Bayesianism, probability or chance is not an objective or real factor in the world, but rather a reflection of our own states of mind. However, these states of mind need to be governed by empirical frequencies, so this is not an invitation to licentious thinking.
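The name Bayesianism derives from Bayes' theorem, which shows how a degree of confidence in a hypothesis h should be revised in the light of evidence e:

```latex
% Bayes' theorem: posterior confidence in h given evidence e
P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)}
```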

This concept of sampling and the accompanying application of the laws of probability find extensive use in public opinion polls: polls to determine what radio or television programs are being watched and listened to, polls to determine housewives' reactions to a new product, political polls, and the like. In most cases the sampling is carefully planned and often a margin of error is stated. Polls cannot, however, altogether eliminate the fact that certain people dislike being questioned and may deliberately conceal or give false information. In spite of this and other objections, the method of sampling often makes results available in situations where the cost of complete enumeration would be prohibitive both in time and in money.
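The margin of error stated with a poll is itself derived from the probability calculus. The sketch below uses the standard normal approximation for a sample proportion; the function name and the 95% confidence figure (z = 1.96) are illustrative choices, not taken from the text:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion.

    p -- observed proportion (e.g. 0.5 for an even split)
    n -- number of respondents sampled
    z -- z-score for the desired confidence level (1.96 ~ 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents, evenly split, carries a margin of
# error of roughly plus or minus 3 percentage points:
print(round(margin_of_error(0.5, 1000) * 100, 1))
```

Note that the error shrinks only with the square root of the sample size: quadrupling the number of respondents merely halves the margin, which is why complete enumeration is rarely worth its cost.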

Thus we can see that probability and statistics are used in insurance, physics, genetics, biology, and business, as well as in games of chance, and we are inclined to agree with P. S. Laplace, who said: We see . . . that the theory of probabilities is at bottom only common sense reduced to calculation; it makes us appreciate with exactitude what reasonable minds feel by a sort of instinct, often without being able to account for it . . . it is remarkable that [this] science, which originated in the consideration of games of chance, should have become the most important object of human knowledge.

Perhaps the best known are the paradoxes in the foundations of set theory discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is an abstract object. Others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not; and if it is not, then it is.
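The argument can be put formally: defining Russell's class and asking whether it belongs to itself yields a contradiction either way.

```latex
% Russell's class: the class of all classes not members of themselves
R = \{\, x : x \notin x \,\}

% Substituting R for x in its own defining condition:
R \in R \;\leftrightarrow\; R \notin R
```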

The paradox is structurally similar to easier examples, such as the paradox of the barber: imagine a village with a barber in it who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not; but if he does not shave himself, then he does. The paradox is actually just a proof that there is no such barber, or in other words, that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.

The French mathematician and philosopher Henri Jules Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily accepted. The vicious circle principle, put forward by Poincaré and Russell, holds that in order to solve the logical and semantic paradoxes it is necessary to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole. A definition that involves such a circle is impredicative; a predicative definition involves no such failure. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. The cosmological argument, for instance, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.

The investigation of questions that arise from reflection upon the sciences and scientific inquiry is called the philosophy of science. Such questions include: what is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that in a general logic of scientific discovery a justification might be found. However, many now take an interest in a more historical, contextual and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the available methods and paradigms as well as the social context.

In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of such subjects as biology, mathematics and physics.

Intuition is immediate awareness, either of the truth of some proposition or of an object of apprehension, such as a concept. Its place in a philosophical understanding of the sources of our knowledge covers both the sensible apprehension of things and, in Kant, pure intuition: that which structures sensation into the experience of things arrayed in space and time.

Natural law is a view of the status of law and morality especially associated with St. Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen to be true by natural light or reason, and (in religious versions of the theory) that express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between natural law and God's will. The Dutch philosopher Hugo Grotius (1583-1645) took the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) took the opposite view, thereby facing one horn of the Euthyphro dilemma, which arises for any supposed source of authority: do we care about the general good because it is good, or do we just call good the things that we care about? The theory may take a strong form, in which it is claimed that various facts entail values, or a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are binding on all human beings regardless of their desires.

Although morality and ethics often amount to the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of moral considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, stressing instead the great complexity of practical reasoning. For Kant the moral law is the binding requirement of the categorical imperative, though his own applications of the notion are not always convincing. One cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something unconditional or necessary, such as the voice of reason.

Duty concerns that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties arise in virtue of specific relations, such as being the child of someone, or having made someone a promise. Duty or obligation is the primary concept of deontological approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. On another reading, perfect duties are those that are correlative with the rights of others; imperfect duties are not. Problems with the concept include the way in which what is due needs to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life, in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.

The most generally accepted account of the externalism/internalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist, if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.

The internalist requirement of cognitive accessibility can be interpreted in at least two ways. A strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase cognitively accessible suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would seem to require the strong interpretation.

Perhaps the clearest example of an internalist position would be a Foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of Reliabilism, whose requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing cases of beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most striking reply to this sort of counter-example, on behalf of Reliabilism, is the suggestion that the reliability of a cognitive process is to be assessed in normal possible worlds, i.e., in possible worlds that are the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious issue is whether there is an adequate rationale for this construal of Reliabilism, or whether the reply is merely ad hoc.

A second way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, such as clairvoyance. Applying the point once again to Reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to bite the bullet and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, there are usually further problematic cases that they cannot handle, and it is also unclear whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure Reliabilism. At the same time, it must be objectively true that beliefs for which such a factor is available are likely to be true; this fact, however, need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., which results from a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms internalism and externalism has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. A view that appeals to both internal and external elements is standardly classified as an externalist view.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, etc. that motivate the views that have come to be known as direct reference theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc., and not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts from the inside, simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors, which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Foundationalism is the view in epistemology that knowledge must be regarded as a structure raised upon secure, certain foundations. These are found in some combination of experience and reason, with different schools (empiricism, rationalism) emphasizing the role of one over that of the other. Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes, who found his foundations in the clear and distinct ideas of reason. Its main opponent is Coherentism, the view that a body of propositions may be known without any certain foundation, by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty.

Truth, along with coherence, is itself an object of philosophical study: such a study treats both the meaning of the word true and the criteria by which we judge the truth or falsity of spoken and written statements. Philosophers have attempted to answer the question What is truth? for thousands of years. The four main theories they have proposed to answer this question are the correspondence, pragmatic, coherence, and deflationary theories of truth.

There are various ways of distinguishing types of Foundationalist epistemology by means of the variations that have been enumerated. Plantinga has put forward an influential conception of classical Foundationalism, specified in terms of limitations on the foundations. He construes this as a disjunction of ancient and medieval Foundationalism, which takes the foundations to comprise what is self-evident and evident to the senses, and modern Foundationalism, which replaces evident to the senses with incorrigible, a condition that in practice was taken to apply only to beliefs about one's present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what are variously called strong or extreme Foundationalism and moderate, modest or minimal Foundationalism, with the distinction depending on whether epistemic immunities are required of foundations, or whether it is required of a foundation only that it be immediately justified. The plausibility of the strong requirement may stem from a level confusion between beliefs on different levels.

Emerging sceptical tendencies come forth in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverance of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguishes between Pyrrhonistic or excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism that accepts everyday or commonsense beliefs (not as the deliverance of reason, but as due more to custom and habit), but is duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase Cartesian scepticism is sometimes used, Descartes himself was not a sceptic, but in the method of doubt used a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusted a category of clear and distinct ideas, not far removed from the phantasia kataleptiké of the Stoics.

Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.

Descartes' theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is eventually found in the celebrated Cogito ergo sum: 'I think, therefore I am'. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries, in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a clear and distinct perception of the existence of a benevolent deity, established by highly dubious proofs. This has not met general acceptance: as Hume drily puts it, 'to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit'.

In his own time Descartes' conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of the causal connexion between the two realms. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes' notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax as surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes' thought here, as reflected in Leibniz, is that the qualities of sense experience have no resemblance to the qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or void; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although Descartes' epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility, all contrive to make him the central point of reference for modern philosophy.

The self as Descartes presents it in the first two Meditations is aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self, or 'I', that we are tempted to imagine as a simple, unique thing that makes up our essential identity. Descartes' view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and by most subsequent philosophers of mind.

Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects that we normally think affect our senses.

He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and that 'it is prudent never to trust entirely those who have deceived us even once'; he cites such instances as the straight stick that looks bent in water, and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes' contemporaries pointed out that, since such errors become known as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would lead the mind away from the senses. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g. the belief that I am sitting here by the fire, wearing a winter dressing gown.

Descartes was to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we know from direct experience to be distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction; he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

The theory of knowledge has as its central questions the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.

Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who discovered his foundations in the clear and distinct ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connexion between thought and experience through basic sentences depends on an untenable myth of the given.

Still, in spite of these concerns, there remains the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts, a project that began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. Naturalized epistemology, by contrast, is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, as proof against scepticism, or even as apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for external or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.

The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too much confidence in the possibility of a purely a priori 'first philosophy', a viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. Such standpoints now seem to many philosophers too fanciful; the more modest task of systematizing the presuppositions actually adopted at various historical stages of investigation into different areas, with the aim not so much of criticizing as of systematization, is preferred. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific; but logic and philosophy will not, on the modern view, provide any independent arsenal of weapons for such battles, which often come to seem more like factional struggles over the ascendancy of a discipline.

This is an approach to the theory of knowledge that sees an important connexion between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some point, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
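The spread of such a gene can be sketched with the standard one-locus selection recursion from population genetics. The fitness values below are illustrative assumptions, not figures from the text: carriers of one copy of the sickle allele S are assumed fittest (malaria resistance), while two copies cause anaemia.

```python
# Heterozygote advantage: A = normal allele, S = sickle allele.
# Assumed relative fitnesses: AA = 0.9 (malaria risk), AS = 1.0, SS = 0.2 (anaemia).
def next_freq(q, w_aa=0.9, w_as=1.0, w_ss=0.2):
    """One generation of selection; q is the current frequency of allele S."""
    p = 1.0 - q
    w_bar = p * p * w_aa + 2 * p * q * w_as + q * q * w_ss  # mean fitness
    return (q * q * w_ss + p * q * w_as) / w_bar            # new frequency of S

q = 0.001  # a rare new mutation
for _ in range(200):
    q = next_freq(q)
# q rises and settles at the stable equilibrium s/(s+t) = 0.1/0.9,
# where s = 1 - w_aa and t = 1 - w_ss: the harmful allele persists.
```

The equilibrium shows why the "unfortunate consequence" is stable: selection against SS homozygotes never purges the allele, because AS heterozygotes keep outreproducing both homozygotes.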

Chance can influence the outcome at each stage: first, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean 'Does natural selection always take the best path for the long-term welfare of a species?', the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean 'Does natural selection create every adaptation that would be valuable?', the answer again is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none has it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not guarantee that it will evolve.

The three major components of the model of natural selection are variation, selection and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not; it is selection that creates the appearance that variations occur intentionally. In the modern theory of evolution, genetic mutations provide the blind variations ('blind' in the sense that variations are not influenced by the effects they would have: the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism), the environment provides the filter of selection, and reproduction provides the retention. Fitness is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features that are better adapted. Evolutionary epistemology applies this blind-variation and selective-retention model to the growth of scientific knowledge and to human thought processes overall.
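The blind-variation and selective-retention cycle can be sketched as a minimal search procedure. This is a toy illustration, not anything proposed in the text: a random string is blindly mutated, and only variants that match the "environment" (a hypothetical target word) at least as well are retained.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TARGET = "knowledge"                    # stands in for the selective environment
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # selection criterion: number of positions matching the environment
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # blind variation: the change is random, uninformed by the effect it will have
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

current = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(current) < len(TARGET):
    variant = mutate(current)
    if fitness(variant) >= fitness(current):  # selective retention
        current = variant
```

Note that the mutation step never consults the target when choosing a change; all the apparent direction comes from retention, which is the point of the model.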

The parallel between biological evolution and conceptual or epistemic evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology sees biological evolution as the main cause of the growth of knowledge. On this view, called the 'evolution of cognitive mechanisms program' by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).

On the analogical version of evolutionary epistemology, called the 'evolution of theories program' by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if Creationism were the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than from evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed from something that is already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic. (If it were analytic, all rival epistemologies would be contradictory, which they are not.) Campbell is right that evolutionary epistemology does have the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).

Two further issues animate the literature: realism (what metaphysical commitment does an evolutionary epistemologist have to make?) and progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called hypothetical realism, a view that combines a version of epistemological scepticism with a tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here, but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the truth-tropic sense of progress, because a natural selection model is in essence non-teleological; as an alternative they embrace, following Kuhn (1970), a non-teleological sense of progress to accompany evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978; Ruse, 1986). Stein and Lipton (1990) have argued, nonetheless, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are, for the most part, themselves the product of blind variation and selective retention. Further, Stein and Lipton conclude that heuristics are analogous to biological pre-adaptations, evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures: the role of heuristics in guiding epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. An appeal to biological blindness alone is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).

Although it is a new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that p is knowledge just in case it has the right sort of causal connexion to the fact that p. Such a criterion can be applied only to cases where the fact that p is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973) proposed that a belief of the form 'This perceived object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject x and perceived object y, if x has those properties and believes that y is F, then y is F. (Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.)

Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both globally and locally reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981; Cohen, 1988). That is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition; rather, we know a proposition when our evidence eliminates all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic's alternatives does not lead to a sceptical result. For knowledge requires only the elimination of the relevant alternatives, so the relevant alternatives view preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The interesting thesis that counts as a causal theory of justification (in the sense of 'causal theory' intended here) is this: a belief is justified just in case it was produced by a type of process that is globally reliable, that is, one whose propensity to produce true beliefs, definable (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
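The arithmetic behind this definition is simple enough to state directly. The sketch below is purely illustrative; the 0.9 threshold is an arbitrary stand-in for the text's unspecified 'sufficiently great'.

```python
# Global reliability of a belief-forming process, approximated as the
# proportion of true beliefs among the beliefs it produces.
def global_reliability(outcomes):
    """outcomes: list of booleans, True where the produced belief was true."""
    return sum(outcomes) / len(outcomes)

def justified(outcomes, threshold=0.9):
    # 'sufficiently great' is left vague in the text; 0.9 is an assumed cutoff
    return global_reliability(outcomes) >= threshold

# Hypothetical track records for two processes:
perception = [True] * 95 + [False] * 5    # mostly reliable
wishful    = [True] * 30 + [False] * 70   # unreliable
```

On this toy measure, beliefs produced by `perception` count as justified and those produced by `wishful` do not; the three questions that follow show why the real difficulty lies in deciding which outcomes belong in the list at all.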

This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else? Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.

(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when I recently believed that the telephone was ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, together with the other concurrent brain states on which the production of the belief depended: it does not include any events in the telephone, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. It does seem intuitively plausible that the process on which a belief's justification depends should be restricted to factors internal to, and proximate to, the believer. Why? Goldman does not tell us. One answer that some philosophers might give is that a belief's being justified at a given time can depend only on facts directly accessible to the believer's awareness at that time (for, if a believer ought to hold only beliefs that are justified, she should be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman's answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.

(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant one. Consider, for example, the process that produces your current belief that you see a book before you. One very broad type to which that process belongs would be specified by 'coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one's sense-organs'. A narrower type to which it belongs would be specified by 'coming to a belief as to what one sees as a result of activation of the nerve endings in one's retinas'. A still narrower type would be given by inserting in the last specification a description of the particular pattern of activation of the retina's particular cells. Which of these, or of the other types to which the token process belongs, is the relevant type for determining whether the type of process that produced your belief is reliable?

If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs in which the object seen is far away and glimpsed only briefly, and such beliefs are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief is produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough details about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type likely to have only that one instance, and which is therefore 100 per cent reliable. Goldman conjectures (1986) that the relevant process type is 'the narrowest type that is causally operative'. Presumably, a feature of the process producing a belief was causally operative in producing it just in case, had some alternative feature been present instead, it would not have led to that belief. We need to say 'some' here rather than 'any', because, for example, when I see a tree, the particular shape of my retinal image is causally operative in producing my belief that I see a tree, even though there are alternative shapes, for example oak shapes or maple shapes, that would have produced the same belief.

(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon (a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory) the perceptual and memory beliefs of those inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that, if the actual world is a demon world, then our perceptual and memory beliefs are all unjustified.

Goldman's solution (1986) is that the reliability of the process types is to be gauged by their performance in 'normal' worlds, that is, worlds 'consistent with our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it'. This gives the intuitively right results for the problem cases just considered, but it yields an implausible relativism about justification. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.

However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief's being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state B always causes one to believe that one is in brain-state B. Here the reliability of the belief-producing process is perfect, but we can readily imagine circumstances in which a person goes into brain-state B and therefore has the belief in question, though this belief is by no means justified (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, having read the weather bureau's forecast that it will be much hotter tomorrow, I might be well aware that I have ample reason to be confident that it will be hotter tomorrow, but I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau's prediction and of its evidential force, and I could appeal to that justification if the belief were challenged. Indeed, given my justification and that there is nothing untoward about the weather bureau's prediction, my belief, if true, can be counted knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.

Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.

One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles. The simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are not obviously the same thing. It is plausible to demand clarification of what makes one theory simpler or more parsimonious than another before the justification of these methodological maxims can be addressed.

If we set this descriptive problem to one side, the major normative problem is as follows: What reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In the Principia, Newton laid down as his first Rule of Reasoning in Philosophy that nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes. Leibniz hypothesized that the actual world obeys simple laws because God's taste for simplicity influenced his decision about which world to actualize.

The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the certain principles of physical reality, said Descartes, not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth. Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.

Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of mathematics as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and it accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.

At the beginning of the nineteenth century, Pierre-Simon LaPlace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by adhering to its own epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.

LaPlace is recognized for eliminating not only the theological component of classical physics but the entire metaphysical component as well. The epistemology of science requires, he said, that we proceed by inductive generalizations from observed facts to hypotheses that are tested by observed conformity of the phenomena. What was unique about LaPlace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in LaPlace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.

As this view of hypotheses and of the truths of nature as quantities was extended in the nineteenth century to a mathematical description of phenomena like heat, light, electricity, and magnetism, LaPlace's assumptions about the actual character of scientific truths seemed correct. This progress suggested that if we could remove all thoughts about the nature or the source of phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.

The seventeenth-century view of physics as a philosophy of nature or as natural philosophy was displaced by the view of physics as an autonomous science that was the science of nature. This view, which was premised on the doctrine of positivism, promised to subsume all of nature with a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call scientific and makes no substantive assumption about the way the world is.

A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position. Two examples must suffice here (see Hesse, 1969, for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories, and he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable, and he argues for a general connexion between simplicity and high probability.

Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper's or Quine's arguments.

Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, the two camps have something in common. They assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connexion between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem at hand. The same view applies to the idea of prior probability (or prior plausibility). If one hypothesis is judged more plausible than another even though they are equally supported by current observations, this must be due to an empirical background assumption.

Principles of parsimony and simplicity mediate the epistemic connexion between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once this theory is brought out into the open, the principle of parsimony is entirely dispensable (Sober, 1988).

This local approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.

An inference is a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a proposition or statement and (2) it appears that the latter is true if the former is or are. This psychological characterization has occurred throughout the literature with only inessential variations. It is natural to desire a better characterization of inference. Yet attempts to do so by constructing a fuller psychological explanation fail to capture the grounds on which inference is objectively valid, a point elaborately made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns pressed by, for example, Wittgenstein.

Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization here, is by no means a resolved philosophical problem.

Traditionally, a categorical proposition is one that is not a conditional. As with the distinction between affirmative and negative propositions, modern opinion is wary of the categorical/conditional distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: X is intelligent (categorical?) is equivalent to if X is given a range of tasks, she does them better than many people (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.

If p is a necessary condition of q, then q cannot be true unless p is true. If p is a sufficient condition of q, then given that p is true, q is true as well. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that A causes B may be interpreted to mean that A is itself a sufficient condition for B, or that it is only a necessary condition for B, or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
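The necessary/sufficient distinction can be made concrete with a toy model of the passage's driving example; this is only an illustrative sketch, and the `signalling` skill is an invented stand-in for the "other reasons" one can drive badly despite steering well.

```python
# Toy model: steering well is necessary but not sufficient for driving well.
# Each driver is a dict of skills; driving well requires all of them.

def steers_well(driver):
    return driver["steering"]

def drives_well(driver):
    # Hypothetical: driving well requires steering well AND signalling properly.
    return driver["steering"] and driver["signalling"]

drivers = [
    {"steering": True,  "signalling": True},   # drives well
    {"steering": True,  "signalling": False},  # steers well, yet drives badly
    {"steering": False, "signalling": True},   # cannot drive well at all
]

# Necessary: every driver who drives well also steers well.
assert all(steers_well(d) for d in drivers if drives_well(d))
# Not sufficient: at least one driver steers well but does not drive well.
assert any(steers_well(d) and not drives_well(d) for d in drivers)
```

The same pattern models the causal readings mentioned above: "A causes B" as sufficiency corresponds to `A implies B`, while as necessity it corresponds to `B implies A`.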

A conditional is any proposition of the form if p then q. The condition hypothesized, p, is called the antecedent of the conditional, and q the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, which merely tells us that either not-p, or q. Stronger conditionals include elements of modality, corresponding to the thought that if p is true then q must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning with surface differences arising from other implicatures.

It follows from the definition of strict implication that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to q follows from p, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
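The truth-functional analogue of these "paradoxes" can be checked mechanically. The sketch below models only the material conditional (not-p or q); strict implication adds modality, which a truth table cannot capture, but the vacuous cases arise in the same structural way.

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Full truth table for the material conditional.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5} (p -> q)={implies(p, q)}")

# The "paradoxes" in truth-functional form:
necessary = True    # stands in for a proposition true in every case
impossible = False  # stands in for a proposition false in every case

# An always-true proposition is implied by anything at all...
assert all(implies(p, necessary) for p in [True, False])
# ...and an always-false proposition implies anything at all.
assert all(implies(impossible, q) for q in [True, False])
```

Because both arguments pass regardless of what p and q otherwise say, the formal relation cannot by itself distinguish valid from invalid arguments among necessary or impossible propositions, which is just the motivation for relevance logic mentioned earlier.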

The Humean problem of induction begins by supposing that there is some property A concerning an observational or experimental situation, and that out of a large number of observed instances of A, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property B. Suppose further that the background circumstances not specified in these descriptions have been varied to a substantial degree and that there is no collateral information available concerning the frequency of Bs among As or concerning causal or nomological connections between instances of A and instances of B.

In this situation, an enumerative or instantial inductive inference would move from the premise that m/n of observed As are Bs to the conclusion that approximately m/n of all As are Bs. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of As should be taken to include not only unobserved As and future As, but also possible or hypothetical As (an alternative conclusion would concern the probability or likelihood of the next observed A being a B).

The traditional or Humean problem of induction, often referred to simply as the problem of induction, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premisses are true, or even that their chances of truth are significantly enhanced?

Hume's discussion of this issue deals explicitly only with cases where all observed As are Bs, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: Inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and he offers an extremely influential argument in the form of a dilemma, sometimes referred to as Hume's fork.

Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas or experimental, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that the course of nature may change, that an order observed in the past will not continue into the future. But it cannot be the latter, since any empirical argument would appeal to the success of such reasoning in previous experience, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes that there can be no such reasoning (1748).

An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume's argument is then that no such justification is possible: The principle cannot be justified a priori, because it is not contradictory to suppose it false; nor can it be justified by appeal to its having been true in previous experience, without obviously begging the question.

The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume's argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (i) pragmatic justifications or vindications of induction, mainly developed by Hans Reichenbach (1891-1953), and (ii) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume's dilemma by arguing either (iii) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (iv) that an a priori justification of induction is possible after all.

(1) Reichenbach's view is that induction is best regarded, not as a form of inference, but rather as a method for arriving at posits regarding, e.g., the proportion of As that are also Bs. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion and then continually correct that initial posit as new information comes in.
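The straight-rule posit can be illustrated with a small simulation; this is only a sketch, not Reichenbach's own formalism, and the limiting frequency of 0.7 stipulated below is precisely the sort of fact the blind posit itself never knows.

```python
import random

random.seed(0)

TRUE_PROPORTION = 0.7  # hypothetical limiting frequency of Bs among As

def observe_is_b():
    # One more observed A; is it a B? (Simulated with the stipulated frequency.)
    return random.random() < TRUE_PROPORTION

m, n = 0, 0
posits = []
for _ in range(10000):
    n += 1
    m += observe_is_b()
    posits.append(m / n)  # current posit: the observed proportion m/n

# Early posits may be far off; later posits settle near the limit, which is
# the sense in which the method "finds the limit if there is one to find".
print("posit after 10 observations   :", round(posits[9], 3))
print("posit after 10000 observations:", round(posits[-1], 3))
assert abs(posits[-1] - TRUE_PROPORTION) < 0.05
```

Note that the simulation builds in exactly what Reichenbach cannot assume: that a stable limit exists. In a "chaotic world" where `observe_is_b` drifted without converging, the sequence of posits would never settle, and no method could do better.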

The gambler's bet is normally an appraised posit, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a blind posit: We do not know the chances that it will succeed or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing that the proportion of As that are Bs converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.

What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach's account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of As that are Bs. Thus, induction is justified, not by showing that it will succeed or, indeed, that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach's claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.

This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other methods for arriving at posits for which the same sort of defence can be given, methods that yield the same result as the inductive method in the long run but differ arbitrarily in the short run. Despite the efforts of others, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run. Yet any actual application of inductive results takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach's response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach's own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.

An approach to induction resembling Reichenbach's, in that particular inductive conclusions are treated as posits or conjectures rather than as the conclusions of cogent inferences, is offered by Popper. However, Popper's view is even more overtly sceptical: It amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.

(2) The ordinary language response to the problem of induction has been advocated by many philosophers. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that the inductive conclusions be shown to follow deductively from the inductive premises. Such a demand cannot, of course, be met, but only because it is illegitimate: Inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.

The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and are embodied in ordinary usage.

Understood in this way, Strawson's response to the problem of induction does not speak to the central issue raised by Humean scepticism: The issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves reasonable and our evidence strong, according to our accepted community standards. Nevertheless, on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.

(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level in which induction is applied to things other than arguments; a second level in which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level in which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the argument at each level by appeal to an argument at the next level.

One problem with this sort of move is that even if circularity is avoided, the movement to higher and higher levels will clearly eventually fail simply for lack of evidence: A level will eventually be reached at which there have not been enough successful inductive arguments to provide a basis for an inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.

(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: It alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1912; BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.

Many of the reasons for this pessimistic view depend on general epistemological theses about the possibility or nature of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists claim, a priori knowledge must be analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true then the conclusion is likely to be true does not fit the standard conceptions of analyticity. A consideration of these matters is beyond the scope of the present discussion.

There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating in Hume but since adopted by very many others, that an a priori defence of induction would have to involve turning induction into deduction, i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori unlikely that such a premise should be true and the corresponding conclusion false.

Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of As that are Bs, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that therefore there can be no a priori reason for thinking that such a conclusion is true. Nevertheless, the fact that a chaotic world is a priori neither impossible nor unlikely in the absence of any further evidence does not show that such a world is not a priori unlikely, and a world containing such-and-such regularity not a priori somewhat likely, relative to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed As are Bs, an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).

Goodman's new riddle of induction asks us to suppose that before some specific time t (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term "grue" to mean "green if examined before t and blue if examined after t", then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. The problem is to say why this is so, and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.

The obvious alternative suggestion is that grue and similar predicates do not correspond to genuine, purely qualitative properties in the way that green and blue do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: grue may be defined in terms of green and blue, but green can equally well be defined in terms of grue and bleen, where "bleen" means "blue if examined before t and green if examined after t".
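Goodman's symmetry point can be made concrete in a few lines. The sketch below is illustrative only: the cutoff year 2000 for t and the predicate names are assumptions for the example, not part of the source. It defines grue and bleen from green/blue, and then recovers green from grue/bleen in exactly the same form, exhibiting the formal symmetry Goodman describes.

```python
# Illustrative sketch of grue/bleen interdefinability (names and the
# cutoff year T are assumptions chosen for the example).
T = 2000  # the time t before which observations are made

def grue(colour, year):
    """Grue: green if examined before T, blue if examined at/after T."""
    return (year < T and colour == "green") or (year >= T and colour == "blue")

def bleen(colour, year):
    """Bleen: blue if examined before T, green if examined at/after T."""
    return (year < T and colour == "blue") or (year >= T and colour == "green")

def green_from_grue(colour, year):
    """Green, defined only via grue and bleen -- same form as grue's definition:
    grue if examined before T, bleen if examined at/after T."""
    return (year < T and grue(colour, year)) or (year >= T and bleen(colour, year))

# Every emerald observed green before T is both green and grue:
print(grue("green", 1995), green_from_grue("green", 1995))  # -> True True
# After T the predicates diverge: only blue things are grue.
print(grue("green", 2005), grue("blue", 2005))              # -> False True
```

The point of the sketch is that neither pair of predicates is formally prior: `green_from_grue` has exactly the same logical shape as `grue`, so "positional" definitions cannot by themselves mark grue as defective.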

The grue paradox demonstrates the importance of categorization. A thing is grue if it is examined before future time t and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not infer that all emeralds are grue, for "grue" is unprojectible and cannot transmit credibility from known to unknown cases. Only projectible predicates are right for induction. Goodman considers entrenchment the key to projectibility: having a long history of successful projection, "green" is entrenched; lacking such a history, "grue" is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than its rivals'. Past successes do not guarantee future ones, however; induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: of the possible projections from our evidence class, the one that fits with past practice enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors' and its cognitive utility is greater.

For a better understanding of induction, we should note that the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc . . ., where a, b, c are all of some kind G, it is inferred that Gs from outside the sample, such as future Gs, will be F, or perhaps that all Gs are F. In this way, when this and the other person deceive them, children may infer that everyone is a deceiver. Different but similar inferences run from a property possessed by some object to the same object's future possession of the same property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: all objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.
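The schema of induction by simple enumeration can be sketched as a toy check. The sample and predicate names below are illustrative assumptions, not from the source; the point is only that the generalization is conjectured from, not entailed by, the observations.

```python
# Toy sketch of induction by simple enumeration:
# from "every observed G is F", conjecture (defeasibly!) "all Gs are F".
# The sample and predicates are illustrative assumptions.

def enumerative_conjecture(sample, is_F):
    """Return the generalization the sample licenses, ampliatively."""
    if all(is_F(x) for x in sample):
        return "All Gs are F"      # supported, but not deductively entailed
    return "Not all Gs are F"      # a single counter-instance refutes it

observed_gs = ["a", "b", "c"]       # a, b, c are all of some kind G
F = lambda x: True                  # each observed G is found to be F

print(enumerative_conjecture(observed_gs, F))  # -> All Gs are F
```

Note that the conclusion speaks about Gs outside the sample, which is exactly what makes the inference ampliative rather than deductive: nothing in the code (or the logic) rules out an unobserved G that is not F.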

The rational basis of any such inference was challenged by Hume, who held that induction presupposes a belief in the uniformity of nature, but that this belief has no defence in reason and merely reflects a habit or custom of the mind. Hume was not therefore sceptical about the propriety of the process of induction, but only about the role of reason in either explaining or justifying it. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties, for which the inference is plausible (often called projectible properties), from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.

Nevertheless, the fundamental problem remains that experience and observation show us only events occurring within a very restricted part of a vast spatial and temporal order about which we then come to believe things.

Bound up with this is confirmation theory, the study of the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the twentieth century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the measure required would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared with the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared with the total range of possibilities left open by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement: it demands that we can put a measure on the range of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
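Carnap's range measure can be illustrated with a toy model. The setup below is an illustrative assumption, not Carnap's own formalism: a language with just three individuals a, b, c and one predicate F, so that there are eight "state descriptions". The confirmation of a hypothesis given evidence is then the fraction of states satisfying both, among those satisfying the evidence.

```python
from itertools import product

# Toy Carnap-style range measure (all names are illustrative assumptions).
# Language: three individuals a, b, c; one predicate F.
# A "state description" assigns F or not-F to each individual.
worlds = list(product([True, False], repeat=3))  # 8 possible states

def measure(proposition):
    """Fraction of all state descriptions in which the proposition holds."""
    return sum(1 for w in worlds if proposition(w)) / len(worlds)

evidence   = lambda w: w[0] and w[1]   # Fa & Fb (our observations)
hypothesis = lambda w: all(w)          # Fa & Fb & Fc (the generalization)

# Confirmation = measure(hypothesis & evidence) / measure(evidence):
# the share of evidence-compatible states in which the hypothesis also holds.
confirmation = measure(lambda w: hypothesis(w) and evidence(w)) / measure(evidence)
print(confirmation)  # -> 0.5
```

The measurement difficulty mentioned above shows up immediately: with infinitely many individuals the set of states is infinite, and there is no longer an obvious, language-neutral way to count "proportions" of possibilities.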

Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proved to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what counts as a plausible addition to scientific knowledge at a given time.

We collectively glorify our ability to think as the distinguishing characteristic of humanity; we personally and mistakenly glorify our thoughts as the distinguishing pattern of who we are. From the inner voice of thought-as-words to the wordless images within our minds, thoughts create and limit our personal world. Through thinking we abstract and define reality, reason about it, react to it, recall past events and plan for the future. Yet thinking remains both woefully underdeveloped in most of us and grossly overvalued. We can best gain some perspective on thinking in terms of energies.

Automatic thinking draws us away from the present, wistfully allowing our thoughts to meander where they will, carrying our passive attention along with them. Like water running down a mountain stream, thoughts running on autopilot careen through the spaces of perception, randomly triggering associative links within our vast storehouse of memory. In and of itself, such associative thought is harmless. However, our tendency to believe in, act upon, and drift away with such undirected thought keeps us operating in an automatic mode. Lulled into an inner passivity by our daydreams and thought streams, we lose contact with the world of actual perceptions, of real life. In the automatic mode of thinking, I am completely identified with my thoughts, believing that my thoughts are me and that I am my thoughts.

Another mode of automatic thinking consists of repetitious and habitual patterns of thought. These thought tapes and our running commentary on life, unexamined by the light of awareness, keep us enthralled, defining who we are and perpetuating all our limiting assumptions about what is possible for us. Driving and driven by our emotions, these ruts of thought create our false persona, the mask that keeps us disconnected from others and from our own authentic self. More than any other single factor, automatic thinking hinders our contact with presence, limits our being, and forms our path. The autopilot of thought constantly calls us away from the immediacy of the present moment, keeping us fixed on the most superficial levels of our being.

Sometimes we even notice strange, unwanted thoughts that we consider horrible or shameful. We might be upset or shaken that we would think such thoughts, but those reactions only serve to sustain the problematic thoughts by feeding them energy. Furthermore, that self-disgust is based on the false assumption that we are our thoughts, that even unintentional thoughts, arising from our conditioned minds, are we. They are not we, and we need not act upon or react to them. They are just thoughts, with no inherent power and no real message about who we are. We can just relax and let them go - or not. Troubling thoughts that recur over a long period and hinder our inner work may require us to examine and heal their roots in our conditioning, perhaps with the help of a psychotherapist.

Sensitive thinking puts us in touch with the meaning of our thoughts and enables us to think logically, solve problems, make plans, and carry on a substantive conversation. A good education develops our ability to think clearly and intentionally with the sensitive energy. With that energy level in our thinking brain, no longer totally submerged in the thought stream, we can move about in it, choosing among and directing our thoughts based on their meaning.

Conscious thinking means stepping out of the thought stream altogether, and surveying it from the shore. The thoughts themselves may even evaporate, leaving behind a temporary empty streambed. Consciousness reveals the banality and emptiness of ordinary thinking. Consciousness also permits us to think more powerfully, holding several ideas, their meanings and ramifications in our minds at once.

When the creative energy reaches thought, truly new ideas spring up. Creative thinking can happen after a struggle, after exhausting all known avenues of relevant ideas and giving up, clearing and emptying the stage so the creative spark may enter. The quiet, relaxed mind also leaves room for the creative thought, a clear channel for creativity. Creative and insightful thoughts come to all of us in regard to the situations we face in life. The trick is to be aware enough to catch them, to notice their significance, and if they withstand the light of sober and unbiased evaluation, to act on them.

In the spiritual path, we work to recognize the limitations of thought, to recognize its power over us, and especially to move beyond it. Along with Descartes, we subsist in the realm of "thoughts," but thoughts are just thoughts. They are not we. They are not who we are. No thought can enter the spiritual realms. Rather, the material world defines the boundaries of thought, despite its power to conceive lofty abstractions. We cannot think our way into the spiritual reality. On the contrary, identification with thinking prevents us from entering the depths. As long as we believe that refined thinking represents our highest capacity, we shackle ourselves exclusively to this world. All our thoughts, all our books, all our ideas wither before the immensity of the higher realms.

A richly developed body of spiritual practices engages thought, from repetitive prayer and mantras, to contemplation of an idea, to visualizations of deities. In a most instructive and invaluable exercise, we learn to see beyond thought by embracing the gaps, the spaces between thoughts. After sitting quietly and relaxing for some time, we turn our attention toward the thought stream within us. We notice thoughts come and go of their own accord, without prodding or pushing from us. If we can abide in this relaxed watching of thought, without falling into the stream and flowing away with it, the thought stream begins to slow, the thoughts fragment. Less enthralled by our thoughts, we begin to see that we are not our thoughts. Less controlled by, and at the mercy of, our thoughts, we begin to be aware of the gaps between thought particles. These gaps open to consciousness, underlying all thought. Settling into these gaps, we enter and become the silent consciousness beneath thought. Instead of being in our thoughts, our thoughts are in us.

There is potentially a rich and productive interface between neuroscience/cognitive science and psychoanalysis/psychotherapy. The two traditions, however, have evolved largely independently, based on differing sets of observations and objectives, and tend to use different conceptual frameworks and vocabularies. The distinctive contributions of each could benefit from a useful common reference point for further exploration of the relations between neuroscience/cognitive science and psychoanalysis/psychotherapy.

Recent historical gaps between neuroscience/cognitive science and psychotherapy are being productively closed by, among other things, the suggestion that recent understandings of the nervous system as a modeler and predictor bear a close and useful similarity to the concepts of projection and transference. The gap could perhaps be valuably narrowed still further by a comparison in the two traditions of the concepts of the "unconscious" and the "conscious" and the relations between the two. It is suggested that these be understood as two independent "story generators" - each with a different style of function and both operating optimally as reciprocal contributors to each other's ongoing story evolution. A parallel and comparably optimal relation might be imagined for neuroscience/cognitive science and psychotherapy.

For the sake of argument, imagine that human behaviour and all that it entails (including the experience of being a human and interacting with a world that includes other humans) is a function of the nervous system. If this were so, then there would be lots of different people who are making observations of (perhaps different) aspects of the same thing, and telling (perhaps different) stories to make sense of their observations. The list would include neuroscientists and cognitive scientists and psychologists. It would include as well psychoanalysts, psychotherapists, psychiatrists, and social workers. If we were not too fussy about credentials, it should probably include as well educators, and parents and . . . babies? Arguably, all humans, from the time they are born, spend a considerable amount of time making observations of how people (others and themselves) behave and why, and telling stories to make sense of those observations.

The stories, of course, all differ from one another to greater or lesser degrees. In fact, the notion that "human behaviour and all that it entails . . . are a function of the nervous system" is itself a story, used to make sense of observations by some people and not by others. It is not my intent here to try to defend this particular story, or any other story for that matter. Very much to the contrary, what I want to do is to explore the implications and significance of the fact that there are different stories and that they might be about the same (some)thing.

In so doing, I want to try to create a new story that helps to facilitate an enhanced dialogue between neuroscience/cognitive science, on the one hand, and psychotherapy, on the other. That new story is itself about what is here being called the "nervous system," though others are free to call it the "self," "mind," "soul," or whatever best fits their own stories. What is important is the idea that multiple stories, evident by their conflicts, may not in fact be about disconnected and adversarial entities but could rather be about fundamentally, understandably, and valuably interconnected parts of the same thing.

"Non-conscious Prediction and a Role for Consciousness in Correcting Prediction Errors" by Regina Pally (Pally, 2004) is the take-off point for my enterprise. Pally is a practising psychiatrist, psychoanalyst, and psychotherapist who has actively engaged with neuroscientists to help make sense of her own observations. I am a neuroscientist who recently spent two years as an Academic Fellow of the Psychoanalytic Centre of Philadelphia, an engagement intended to expand my own set of observations and forms of story-telling. The hope is that from this complementarity, and from our similarities and differences, something useful will emerge in this commentary.

Many psychoanalysts (and psychotherapists too, I suspect) feel that the observations/stories of neuroscience/cognitive science are at best irrelevant to their own activities, and at worst destructive of them; much the same probably holds, in reverse, for many neuroscientists/cognitive scientists. Pally clearly feels otherwise, and it is worth exploring a bit why this is so in her case. A general key, I think, is in her line "In current paradigms, the brain has intrinsic activity, is highly integrated, is interactive with the environment, and is goal-oriented, with predictions operating at every level, from lower systems to . . . the highest functions of abstract thought." Contemporary neuroscience/cognitive science has indeed uncovered an enormous complexity and richness in the nervous system, "making it not so different from how psychoanalysts (or most other people) would characterize the self, at least not in terms of complexity, potential, and vagary." Given this complexity and richness, there is substantially less reason than there once was to believe that psychotherapists and neuroscientists/cognitive scientists are dealing with two fundamentally different things.

Pally is, I suspect, more aware of this than many psychotherapists because she has been working closely with contemporary neuroscientists who are excited about the complexity to be found in the nervous system. That is an important lesson, but there is an additional one at least as important in the immediate context. In 1950, two neuroscientists argued that the sooner we recognize that complexity of the kind that leaves the reflex physiologist confounded by Gestalts underlies even the simplest functions, the sooner we will see that the terminological barriers that seem insurmountable between the lower levels of neurophysiology and higher behavioural theory simply dissolve away.

And in 1951 another said: "I am coming more and more to the conviction that the rudiments of every behavioural mechanism will be found far down in the evolutionary scale and represented in primitive activities of the nervous system."

Neuroscience (and what came to be cognitive science) was engaged from very early on in an enterprise committed to the same kind of understanding sought by psychotherapists, but passed through a phase (roughly from the 1950s through the 1980s) when its own observations and stories were less rich in those terms. It was a period that gave rise to the notion that the nervous system was "simple" and "mechanistic," which in turn made neuroscience/cognitive science seem less relevant to those with broader concerns, perhaps even threatening and apparently adversarial if one equated the nervous system with "mind," or "self," or "soul," since mechanics seemed degrading to those ideas. Arguably, though, the period was an essential part of the evolution of the contemporary neuroscience/cognitive science story, one that laid needed groundwork for rediscovery and productive exploration of the richness of the nervous system. Psychoanalysis/psychotherapy, of course, moved through its own story evolution over the same period. That the two stories seemed remote from one another during this period was never adequate evidence that they were not about the same thing, but only an expression of their needed independent evolutions.

An additional reason that Pally is comfortable with the likelihood that psychotherapists and neuroscientists/cognitive scientists are talking about the same thing is her recognition of isomorphisms (or congruities, Pulver 2003) between the two sets of stories, places where different vocabularies in fact seem to be representing the same (or quite similar) things. I am not sure I am comfortable calling these "shared assumptions" (as Pally does), since they are actually more interesting and probably more significant if they are instead instances of coming to the same ideas from different directions (as I think they are). In this case, the isomorphisms tend to imply, rephrasing Gertrude Stein, that there is in fact a "there" there. Regardless, Pally has entirely appropriately and, I think, usefully called attention to an important similarity between the psychotherapeutic concept of "transference" and an emerging recognition within neuroscience/cognitive science that the nervous system does not so much collect information about the world as generate a model of it, act in relation to that model, and then check incoming information against the predictions of that model. Pally's suggestion that this model reflects in part early interpersonal experiences, can be largely "unconscious," and so may cause inappropriate and troubling behaviour in current time seems entirely reasonable. So too does her suggestion that an analyst can help by bringing the model to "consciousness" through the intermediary of recognizing the transference onto the analyst.

The increasing recognition of substantial complexity in the nervous system, together with the presence of identifiable isomorphisms, provides a solid foundation for suspecting that psychotherapists and neuroscientists/cognitive scientists are indeed talking about the same thing. But the significance of different stories for better understanding a single thing lies as much in the differences between the stories as in their similarities/isomorphisms, in the potential for differing and not obviously isomorphic stories to modify one another productively, yielding a new story in the process. With this thought in mind, I want to call attention to some places where the psychotherapeutic and the neuroscientific/cognitive scientific stories have edges that rub against one another rather than fitting smoothly together, and perhaps to ways each could be usefully further evolved in response to those non-isomorphisms.

Unconscious stories and "reality." Though her primary concern is with interpersonal relations, Pally clearly recognizes that transference and related psychotherapeutic phenomena are one (actually relatively small) facet of a much more general phenomenon: the creation, largely unconsciously, of stories that are taken to be, but are not in fact guaranteed to be, faithful descriptions of the "real world." Ambiguous figures illustrate the same general phenomenon in a much simpler case, that of visual perception. Such figures may be seen in either of two ways; they represent two "stories," with the choice between them being, at any given time, largely unconscious. More generally, a serious consideration of a wide array of neurobiological/cognitive phenomena clearly implies that, as Pally says, we do not see "reality," but only have stories to describe it that result from processes of which we are not consciously aware.

All of this raises some quite serious philosophical questions about the meaning and usefulness of the concept of "reality." In the present context, what is important is that it is a set of questions that sometimes seems to provide an insurmountable barrier between the stories of neuroscientists/cognitive scientists, who largely think they are dealing with reality, and psychotherapists, who feel more comfortable in more idiosyncratic and fluid spaces. In fact, neuroscience and cognitive science can proceed perfectly well in the absence of a well-defined concept of "reality," and, without being fully conscious of it, do exactly that. And psychotherapists actually make more use of the idea of "reality" than is entirely appropriate. There is, for example, a tendency within the psychotherapeutic community to presume that unconscious stories reflect "traumas" and other historically verifiable events, while the neurobiological/cognitive science story says quite clearly that they may equally reflect predispositions whose origins lie in genetic information and hence bear little or no relation to "reality" in the sense usually meant. They may, in addition, reflect random "play," putting them even further out of reach of easy historical interpretation. In short, with regard to the relation between "story" and "reality," each set of stories could usefully be modified by greater attention to the other. Differing concepts of "reality" (perhaps the very concept itself) get in the way of usefully sharing stories. The neuroscientists'/cognitive scientists' preoccupation with "reality" as an essential touchstone could valuably be lessened, and the therapist's sense of the validation of stories in terms of personal and historical idiosyncrasies could be helpfully adjusted to include a sense of actual material underpinnings.

The Unconscious and the Conscious. Pally appropriately makes a distinction between the unconscious and the conscious, one that has always been fundamental to psychotherapy. Neuroscience/cognitive science has been slower to make a comparable distinction but is now rapidly beginning to catch up. Clearly some neural processes generate behaviour in the absence of awareness and intent, and others yield awareness and intent with or without accompanying behaviour. An interesting question, however, raised at a recent open discussion of the relations between neuroscience and psychoanalysis, is whether the "neurobiological unconscious" is the same thing as the "psychotherapeutic unconscious," and whether the perceived relations between the "unconscious" and the "conscious" are the same in the two sets of stories. Is this a case of an isomorphism or, perhaps more usefully, a masked difference?

An oddity of Pally's article is that she herself acknowledges that the unconscious has mechanisms for monitoring prediction errors, and yet implies, both in the title of the paper and in much of its argument, that there is something special or distinctive about consciousness (or conscious processing) in its ability to correct prediction errors. And here, I think, there is evidence of a potentially useful "rubbing of edges" between the neuroscientific/cognitive scientific tradition and the psychotherapeutic one. The issue is whether one regards consciousness (or conscious processing) as somehow "superior" to the unconscious (or unconscious processing). There is a sense in Pally of an old psychotherapeutic perspective of the conscious as a mechanism for overcoming the deficiencies of the unconscious, of the conscious as the wise father/mother and the unconscious as the willful child. Actually, Pally does not quite go this far, as I will point out in the following, but there is enough of a trend to illustrate the point and, without more elaboration, I do not think many neuroscientists/cognitive scientists will catch Pally's more insightful lesson. I think Pally is almost certainly correct that the interplay of the conscious and the unconscious can achieve results unachievable by the unconscious alone, but think also that neither psychotherapy nor neuroscience/cognitive science is yet in a position to say exactly why this is so. So let me take a crack here at a new, perhaps bi-dimensional, story that could help with that common problem and perhaps help both traditions as well.

A major and surprising lesson of comparative neuroscience, supported more recently by neuropsychology (Weiskrantz, 1986) and, more recently still, by artificial intelligence, is that an extraordinarily rich repertoire of adaptive behaviour can occur unconsciously, in the absence of awareness or intent (i.e., be supported by unconscious neural processes). It is not only modelling the world, prediction, and error correction that can occur this way, but virtually (and perhaps literally) the entire spectrum of externally observable behaviour, including fleeing from threats, approaching good things, generating novel outputs, learning from doing so, and so on.

This extraordinary terrain, discovered by neuroanatomists, electrophysiologists, neurologists, and behavioural biologists, and recently extended by others using more modern techniques, is the unconscious of which the neuroscientist/cognitive scientist speaks. It is a terrain so surprisingly rich that it creates, for some people, the puzzle of whether there is anything else at all. Moreover, it seems, at first glance, to be a totally different terrain from that of the psychotherapist, whose clinical experience reveals a territory occupied by drives, unfulfilled needs, and the detritus with which the conscious would prefer not to deal.

As indicated earlier, it is one of the great strengths of Pally's article to suggest that the two terrains may in fact turn out to be the same. But if they are the same, the question becomes: in what way do the "unconscious" and the "conscious" differ at all? Where now are the "two stories"? Pally touches briefly on this point, suggesting that the two systems differ not so much (or at all?) in what they do, but rather in how they do it. This notion of two systems with different styles seems to me worth emphasizing and expanding. Unconscious processing is faster and handles many more variables simultaneously; conscious processing is slower and handles far fewer variables at one time. There are likely a host of other differences in style as well, in the handling of number, for example, and of time.

In the present context, however, perhaps the most important difference in style is one that Lacan called attention to from a clinical/philosophical perspective: the conscious (conscious processing) has as an objective "coherence," in that it attempts to create a story that makes sense simultaneously of all its parts. The unconscious, on the other hand, is much more comfortable with bits and pieces lying around with no global order. To a neurobiologist/cognitive scientist, this makes perfectly good sense. The circuitry embodying the unconscious (sub-cortical circuitry?) is an assembly of different parts organized for a large number of different specific purposes, and only secondarily linked together to try to assure some coordination. The circuitry involved in conscious processing (neo-cortical circuitry?), on the other hand, seems both to be more uniform and integrated and to have an objective for which coherence is central.

That central coherence is well illustrated by the phenomenon of "positive illusions," exemplified by patients who receive a hypnotic suggestion that there is an object in a room and subsequently walk in ways that avoid the object while providing a variety of unrelated explanations for their behaviour. Similar "rationalization" is, of course, seen in schizophrenic patients and in a variety of less dramatic forms in psychotherapeutic settings. The "coherent" objective is to make a globally organized story out of the disorganized jumble, a story of (and constituting) the "self."

What all this suggests is that the mind/brain is actually organized to be constantly generating at least two different stories in two different styles. One, written by conscious processes in simpler terms, is a story of/about the "self," and is experienced as such; neuroscience is beginning to develop insights into how such a story can be constructed using neural circuitry. The other is an unconscious "story" about interactions with the world, perhaps better thought of as a series of different "models" of how various actions relate to various consequences. In many ways, the latter are the grist for the former.

In this sense, we are safely back to the two stories that are central to psychotherapy, but perhaps with some added sophistication deriving from neuroscience/cognitive science. In particular, there is no reason to believe that one story is "better" than the other in any definitive sense. They are different stories based on different styles of story telling, with one having advantages in certain sorts of situations (quick responses, large numbers of variables, more direct relation to immediate experiences of pain and pleasure) and the other in other sorts of situations (time for more deliberate responses, challenges amenable to handling using smaller numbers of variables, more coherence, more ability to defer immediate gratification/judgment).

In the clinical/psychotherapeutic context, an important implication of the more neutral view of two story-tellers outlined above is that one ought not to over-value the conscious, nor to expect miracles of the process of making conscious what is unconscious. In the immediate context, the issue is this: if the unconscious is capable of "correcting prediction errors," why appeal to the conscious to achieve this function? More generally, what is the function of that persistent aspect of psychotherapy that aspires to make the unconscious conscious? And why is it therapeutically effective when it is? Here, it is worth calling special attention to an aspect of Pally's argument that might otherwise get a bit lost in the details of her article: ". . . the therapist encourages the wife consciously to stop and consider her assumption that her husband does not properly care about her, and effortfully to consider an alternative view and inhibit her impulse to reject him back. This, in turn, creates a new type of experience, one in which he is indeed more loving, such that she can develop new predictions."

It is not, as Pally describes it, the simple act of making something conscious that is therapeutically effective. What is necessary is to decompose the story consciously (something made possible by its being a story with a small number of variables) and, even more important, to see whether the story generates a new "type of experience" that in turn causes the development of "new predictions." The latter is an effect of the conscious on the unconscious, an alteration of the unconscious brought about by hearing, entertaining, and hence acting on a new story developed by the conscious. It is not "making things conscious" that is therapeutically effective; it is the exchange of stories that encourages the creation of a new story in the unconscious.

For quite different reasons, Gray (1995) earlier made a suggestion not dissimilar to Pally's, proposing that consciousness is activated when an internal model detects a prediction failure, but acknowledged that he could see no reason "why the brain should generate conscious experience of any kind at all." In spite of her title, what seems really important in Pally's story is not the detection of prediction errors per se, but the detection of mismatches between two stories, one unconscious and the other conscious, and the resulting opportunity for both to shape a less trouble-making new story. That, briefly, may be why the brain "should generate conscious experience": to reap the benefits of having a second story teller with a different style. Paraphrasing Descartes, one might say "I am, and I can think, therefore I can change who I am." It is not only the neurobiological "conscious" that can undergo change; it is the neurobiological "unconscious" as well.

More generally, I want to suggest that the most effective psychotherapy requires the recognition, rapidly emerging from the neurosciences and cognitive science, that the brain/mind has evolved with two (or more) independent story tellers, and has done so precisely because there are advantages to having independent story tellers that generate and exchange different stories. The advantage is that each can learn from the other, and the mechanisms for conveying the stories back and forth, and for each story teller to learn from the stories of the others, occur as part of our evolutionary endowment as well. The problems that bring patients into a therapist's office are problems in the breakdown of story exchange, for any of a variety of reasons, and the challenge for the therapist is to reinstate the confidence of each story teller in the value of the stories created by the other. Neither the conscious nor the unconscious is primary; they function best as an interdependent loop, with each developing its own story facilitated by the semi-independent story of the other. In such an organization, there is not only no "real" story and no primacy for consciousness; there is only the ongoing development and, ideally, effective sharing of different stories.

There are, in the story I am outlining, implications for neuroscience/cognitive science as well. The obvious key questions are what does one mean (in terms of neurons and neuronal assemblies) by "stories," and in what ways are their construction and representation different in unconscious and conscious neural processing. But even more important, if the story I have outlined makes sense, what are the neural mechanisms by which unconscious and conscious stories are exchanged and by which each kind of story impacts on the other? And why (again in neural terms) does the exchange sometimes break down and fail in a way that requires a psychotherapist - an additional story teller - to be repaired?

Just as the unconscious and the conscious are engaged in a process of evolving stories for separate reasons and in separate styles, so too have been, and will continue to be, neuroscience/cognitive science and psychotherapy. And it is valuable that both communities continue to do so. But there is every reason to believe that the different stories are indeed about the same thing, not only because of isomorphisms between the differing stories but equally because the stories of each can, if listened to, be of demonstrable value to the stories of the other. When breakdowns in story sharing occur, they require people in each community who are daring enough to listen and be affected by the stories of the other community. Pally has done us all a service as such a person. I hope to have furthered the construction of the bridge she has begun to lay, and that others will feel inclined to join in a collective undertaking that has enormous intellectual potential and relates directly to serious psychological need in the mental health arena. Indeed, there are reasons to believe that an enhanced skill at hearing, respecting, and learning from differing stories about similar things would be useful in a wide array of contexts.

The physical basis of consciousness appears to be the major and most singular challenge to the scientific, reductionist world view. In the closing years of the second millennium, advances in the ability to record the activity of individual neurons in the brains of monkeys or other animals while they carry out particular tasks, combined with the explosive development of functional brain imaging in normal humans, have led to a renewed empirical program to discover the scientific explanation of consciousness. This article reviews some of the relevant experimental work and argues that the most advantageous strategy for now is to focus on discovering the neuronal correlates of consciousness.

Consciousness is a puzzling, state-dependent property of certain types of complex, adaptive systems. The best example of one type of such system is a healthy and attentive human brain. If the brain is anaesthetized, consciousness ceases. Small lesions in the midbrain and thalamus of patients can lead to a complete loss of consciousness, while destruction of circumscribed parts of the cerebral cortex can eliminate very specific aspects of consciousness, such as the ability to be aware of motion or to recognize objects as faces, usually without a concomitant loss of vision. Given the similarity in brain structure and behaviour, biologists commonly assume that at least some animals, in particular non-human primates, share certain aspects of consciousness with humans. Brain scientists, in conjunction with cognitive neuroscientists, are exploiting a number of empirical approaches that shed light on the neural basis of consciousness. Since it is not known to what extent artificial systems, such as computers and robots, can become conscious, this article will exclude these from consideration.

Broadly, neuroscientists have made a number of working assumptions that, in the fullness of time, will need to be justified more fully.

(1) There is something to be explained; that is, the subjective content associated with a conscious sensation - what philosophers refer to as qualia - does exist and has its physical basis in the brain. To what extent qualia and all other subjective aspects of consciousness can or cannot be explained within some reductionist framework remains highly controversial.

(2) Consciousness is a vague term with many usages and will, in the fullness of time, be replaced by a vocabulary that more accurately reflects the contributions of different brain processes (for a similar evolution, consider the usage of "memory," which has been replaced by an entire hierarchy of more specific concepts). Common to all forms of consciousness is that it feels like something (e.g., to "see blue," to "experience a headache," or to "reflect upon a memory"). Self-consciousness is but one form of consciousness.

It is possible that all the different aspects of consciousness (smelling, pain, visual awareness, affect, self-consciousness, and so on) employ a basic common mechanism, or perhaps a few such mechanisms. If one could understand the mechanism for one aspect, then one would have gone most of the way toward understanding them all.

(3) Consciousness is a property of the human brain, a highly evolved system. It therefore must have a useful function to perform. Crick and Koch (1998) assume that the function of the neuronal correlate of consciousness is to produce the best current interpretation of the environment, in the light of past experiences, and to make it available, for a sufficient time, to the parts of the brain that contemplate, plan and execute voluntary motor outputs (including language). This needs to be contrasted with the on-line systems that bypass consciousness but that can generate stereotyped behaviours.

Note that in normally developed individuals motor output is not necessary for consciousness to occur. This is demonstrated by locked-in syndrome, in which patients have lost (nearly) all ability to move yet are clearly conscious.

(4) At least some animal species possess some aspects of consciousness. In particular, this is assumed to be true for non-human primates, such as the macaque monkey. Consciousness associated with sensory events in humans is likely to be related to sensory consciousness in monkeys for several reasons. Firstly, trained monkeys show behaviour similar to that of humans for many low-level perceptual tasks (e.g., detection and discrimination of visual motion or depth). Secondly, the gross neuroanatomy of humans and non-human primates is rather similar once the difference in size has been accounted for. Finally, functional magnetic resonance imaging of human cerebral cortex is confirming the existence of a functional organization in sensory cortical areas similar to that discovered by the use of single-cell electrophysiology in the monkey. As a corollary, it follows that language is not necessary for consciousness to occur (although it greatly enriches human consciousness).

It is important to distinguish the general, enabling factors in the brain that are needed for any form of consciousness to occur from the modulating factors that can up- or down-regulate the level of arousal, attention and awareness, and from the specific factors responsible for a particular content of consciousness.

An easy example of an enabling factor would be a proper blood supply. Inactivate the heart and consciousness ceases within a fraction of a minute. This does not imply that the neural correlate of consciousness is in the heart (as Aristotle thought). A neuronal enabling factor for consciousness is the intralaminar nuclei of the thalamus. Acute bilateral loss of function in these small structures that are widely and reciprocally connected to the basal ganglia and cerebral cortex leads to an immediate coma or profound disruption in arousal and consciousness.

Among the neuronal modulating factors are the various activities in nuclei in the brain stem and the midbrain, often collectively referred to as the reticular activating system, that control in a widespread and quite specific manner the level of noradrenaline, serotonin and acetylcholine in the thalamus and forebrain. Appropriate levels of these neurotransmitters are needed for sleep, arousal, attention, memory and other functions critical to behaviour and consciousness.

Yet any particular content of consciousness is unlikely to arise from these structures, since they probably lack the specificity necessary to mediate a sharp pain in the right molar, the percept of the deep blue California sky, the bouquet associated with a rich Bordeaux, a haunting musical melody and so on. These must be caused by specific neural activity in the cortex, thalamus, basal ganglia and associated neuronal structures. The question motivating much of the current research into the neuronal basis of consciousness concerns the minimal neural activity that is sufficient to cause a specific conscious percept or memory.

For instance, when a subject consciously perceives a face, the retinal ganglion cells whose axons make up the optic nerve that carries the visual information to the brain proper are firing in response to the visual stimulus. Yet it is unlikely that this retinal activity directly correlates with visual perception. While such activity is evidently necessary for seeing a physical stimulus in the world, retinal neurons by themselves do not give rise to consciousness.

Given the comparative ease with which the brains of animals can be probed and manipulated, it seems opportune at this point in time to concentrate on the neural basis of sensory consciousness. Because primates are highly visual animals, and because much is known about the neuroanatomy, psychology and computational principles underlying visual perception, vision has proven to be the most popular model system in the brain sciences.

Cognitive and clinical research demonstrates that much complex information processing can occur without involving consciousness. This includes visual, auditory and linguistic priming, implicit memory, the implicit recognition of complex sequences, automatic behaviours such as driving a car or riding a bicycle, and so on (Velmans 1991). The dissociations found in patients with lesions in the cerebral cortex point in the same direction (e.g., residual visual functions in the professed absence of any visual awareness, known clinically as blindsight, in patients with lesions in primary visual cortex).

It can be said that if one is without an idea, one is without a concept, and likewise, if one is without a concept, one is without an idea. An idea (Gk., eidos, visible form) is a notion stretching all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it represents an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as an independent object of enquiry and perhaps of knowledge. These two poles are not distinct meanings of the term, although they give rise to many problems of interpretation, but between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. On the other hand, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato's theory of Forms is a celebration of the objective and timeless existence of ideas as concepts, and in his hands ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notable in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

Idealism is the philosophical doctrine that reality is somehow mind-correlative or mind-coordinated: that the real objects comprising the "external world" are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine centres on the conception that reality as we understand it reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution, not merely to our understanding of the world, but to the very character we attribute to it.

The cognitive scientist Jackendoff (1987) argues at length against the notion that consciousness and thoughts are inseparable and that introspection can reveal the contents of the mind. What is conscious about thoughts are their sensory aspects, such as visual images, sounds or silent speech. Both the process of thought and its content are not directly accessible to consciousness. Indeed, one tradition in psychology and psychoanalysis - going back to Sigmund Freud - hypothesizes that higher-level decision making and creativity are not accessible at a conscious level, although they influence behaviour.

Within the visual modality, Milner and Goodale (1995) have made a masterful case for the existence of so-called on-line systems that bypass consciousness. Their function is to mediate relatively stereotyped visuo-motor behaviours, such as eye and arm movements, reaching, grasping, postural adjustments and so on, in a very rapid, reflex-like manner. On-line systems work in egocentric coordinate systems, and lack certain types of perceptual illusions (e.g., size illusions) as well as direct access to working memory. This contrasts with the function of consciousness alluded to above, namely to synthesize information from many different sources and use it to plan behavioural patterns over time. Milner and Goodale argue that on-line systems are associated with the dorsal stream of visual information in the cerebral cortex, originating in the primary visual cortex and terminating in the posterior parietal cortex.

The problem of consciousness can be broken down into several separate questions. Most, if not all, of these can then be subjected to scientific inquiry.

The major question that neuroscience must ultimately answer can be bluntly stated as follows: it is probable that at any moment some active neuronal processes in our head correlate with consciousness, while others do not; what is the difference between them? The specific processes that correlate with the current content of consciousness are referred to as the neuronal correlate of consciousness, or the NCC. Whenever some information is represented in the NCC, it is represented in consciousness. The NCC is the minimal (minimal, since it is known that the entire brain is sufficient to give rise to consciousness) set of neurons, most likely distributed throughout certain cortical and subcortical areas, whose firing directly correlates with the perception of the subject at the time. Conversely, stimulating these neurons in the right manner with some as yet unheard-of technology should give rise to the same perception as before.

Discovering the NCC and its properties will mark a major milestone in any scientific theory of consciousness.

What is the character of the NCC? Most popular has been the belief that consciousness arises as an emergent property of a very large collection of interacting neurons (for instance, Libet 1993). In this view, it would be foolish to locate consciousness at the level of individual neurons. An alternative hypothesis is that there are special sets of “consciousness" neurons distributed throughout cortex and associated systems. Such neurons represent the ultimate neuronal correlate of consciousness, in the sense that the relevant activity of an appropriate subset of them is both necessary and sufficient to give rise to an appropriate conscious experience or percept (Crick and Koch 1998). Generating the appropriate activity in these neurons, for instance by suitable electrical stimulation during open skull surgery, would give rise to the specific percept.

Any one subtype of NCC neurons would, most likely, be characterized by a unique combination of molecular, biophysical, pharmacological and anatomical traits. It is possible, of course, that all cortical neurons may be capable of participating in the representation of one percept or another, though not necessarily doing so for all percepts. The secret of consciousness would then be the type of activity of a temporary subset of them, consisting of all those cortical neurons that represent that particular percept at that moment. How the activity of neurons across a multitude of brain areas encoding all the different aspects associated with an object (e.g., the colour of a face, its facial expression, its gender and identity, the sound issuing from its mouth) is combined into a single percept remains puzzling, and is known as the binding problem.

What, if anything, can we infer about the location of neurons whose activity correlates with consciousness? In the case of visual consciousness, it was surmised that these neurons must have access to visual information and project to the planning stages of the brain; that is, to premotor and frontal areas. Since no neurons in the primary visual cortex (V1) of the macaque monkey project to any area forward of the central sulcus, Crick and Koch (1998) propose that neurons in V1 do not give rise to consciousness (although V1 is necessary for most forms of vision, just as the retina is). Ongoing electrophysiological, psychophysical and imaging research in monkeys and humans is evaluating this prediction.

While the set of neurons that can express any one particular conscious percept might constitute a relatively small fraction of all neurons in any one area, many more neurons might be necessary to support the firing activity leading up to the NCC. This might resolve the apparent paradox between clinical lesion data suggesting that small and discrete lesions in the cortex can lead to very specific deficits (such as the inability to see colours or to recognize faces in the absence of other visual losses) and functional imaging data showing that any one visual stimulus can activate large swaths of cortex.

Conceptually, several other questions need to be answered about the NCC. What type of activity corresponds to the NCC (it was proposed as long ago as the early part of the twentieth century that spiking activity synchronized across a population of neurons is a necessary condition for consciousness to occur)? What causes the NCC to occur? And, finally, what effect does the NCC have on postsynaptic structures, including motor output?

A promising experimental approach to locating the NCC is the use of bistable percepts, in which a constant retinal stimulus gives rise to two percepts alternating in time, as in a Necker cube (Logothetis 1998). One version of this is binocular rivalry, in which a small image, say a horizontal grating, is presented to the left eye and another image, say a vertical grating, is shown at the corresponding location in the right eye. In spite of the constant visual stimulus, observers "see" the horizontal grating alternating every few seconds with the vertical one (Blake 1989). The brain does not allow the simultaneous perception of both images.

It is possible, though difficult, to train a macaque monkey to report whether it is currently seeing the left or the right image. The distribution of the switching times and the way in which changing the contrast in one eye affects these leave little doubt that monkeys and humans experience the same basic phenomenon. In a series of elegant experiments, Logothetis and colleagues (Logothetis 1998) recorded from a variety of visual cortical areas in the awake macaque monkey while the animal performed a binocular rivalry task. In early visual cortex, only a small fraction of cells modulate their response as a function of the percept of the monkey, while 20 to 30% of neurons in higher visual areas do so. The majority of cells increased their firing rate in response to one or the other retinal stimulus, with little regard to what the animal perceived at the time. In contrast, in a high-level cortical area such as the inferior temporal (IT) cortex, almost all neurons responded only to the perceptually dominant stimulus (in other words, a "face" cell fired only when the animal indicated by its performance that it saw the face and not the pattern presented to the other eye). This makes it likely that the NCC involves activity of neurons in the inferior temporal lobe. Lesions in the homologous area in the human brain are known to cause very specific deficits in conscious face or object recognition. However, it is possible that specific interactions between IT cells and neurons in parts of the prefrontal cortex are necessary in order for the NCC to be generated.

Functional brain imaging in humans undergoing binocular rivalry has revealed that areas in the right prefrontal cortex are activated during the perceptual switch from one percept to the other.

A number of alternative experimental paradigms are being investigated using electrophysiological recordings of individual neurons in behaving animals and human patients, combined with functional brain imaging. Common to these is the manipulation of the complex and changing relationship between the physical stimulus and the conscious percept. For instance, when subjects are forced to respond rapidly to a low-saliency target, both monkeys and humans sometimes claim consciously to perceive such a target in the absence of any physical target (a false alarm) or fail to respond to a target (a miss). The NCC in the appropriate sensory area should mirror the perceptual report under these dissociated conditions. Visual illusions constitute another rich source of experiments that can provide information concerning the neurons underlying illusory percepts. A classical example is the motion aftereffect, in which a subject stares at a constantly moving stimulus (such as a waterfall) for a fraction of a minute or longer. Immediately after this conditioning period, a stationary stimulus will appear to move in the opposite direction. Because of the conscious experience of motion, one would expect the subject's cortical motion areas to be activated in the absence of any moving stimulus.

Future techniques, most likely based on the molecular identification and manipulation of discrete and identifiable subpopulations of cortical cells in appropriate animals, will greatly help in this endeavour.

Identifying the type of activity and the type of neurons that give rise to specific conscious percepts in animals and humans would only be the first, if critical, step in understanding consciousness. One also needs to know where these cells project to, their postsynaptic action, how they develop in early childhood, what happens to them in mental diseases known to affect consciousness, such as schizophrenia or autism, and so on. And, of course, a final theory of consciousness would have to explain the central mystery: why a physical system with a particular architecture gives rise to feelings and qualia.

The central structure of an experience is its intentionality, its being directed toward something, as it is an experience of or about some object. An experience is directed toward an object by virtue of its content or meaning (which represents the object) together with appropriate enabling conditions.

Phenomenology as a discipline is distinct from, but related to, other key disciplines in philosophy, such as ontology, epistemology, logic, and ethics. Phenomenology has been practised in various guises for centuries, but it came into its own in the early part of the 20th century, through the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others. Phenomenological issues of intentionality, consciousness, qualia, and the first-person perspective have been prominent in recent philosophy of mind.

Phenomenology is commonly understood in either of two ways: as a disciplinary field in philosophy, or as a movement in the history of philosophy.

The discipline of phenomenology may be defined initially as the study of structures of experience, or consciousness. Literally, phenomenology is the study of "phenomena": appearances of things, or things as they appear in our experience, or the ways we experience things, and thus the meanings things have in our experience. Phenomenology studies conscious experience as experienced from the subjective or first-person point of view. This field of philosophy is then to be distinguished from, and related to, the other main fields of philosophy: ontology (the study of being or what is), epistemology (the study of knowledge), logic (the study of valid reasoning), ethics (the study of right and wrong action), etc.

The historical movement of phenomenology is the philosophical tradition launched in the first half of the 20th century by Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, and Jean-Paul Sartre. In that movement, the discipline of phenomenology was prized as the proper foundation of all philosophy - as opposed, say, to ethics or metaphysics or epistemology. The methods and characterization of the discipline were widely debated by Husserl and his successors, and these debates continue to the present day. (The definition of phenomenology offered above will thus be debatable, for example, by Heideggerians, but it remains the starting point in characterizing the discipline.)

In recent philosophy of mind, the term "phenomenology" is often restricted to the characterization of sensory qualities of seeing, hearing, etc.: What it is like to have sensations of various kinds. However, our experience is normally much richer in content than mere sensation. Accordingly, in the phenomenological tradition, phenomenology is given a much wider range, addressing the meaning things have in our experience, notably, the significance of objects, events, tools, the flow of time, the self, and others, as these things arise and are experienced in our "life-world.”

Phenomenology as a discipline has been central to the tradition of continental European philosophy throughout the 20th century, while philosophy of mind has evolved in the Austro-Anglo-American tradition of analytic philosophy that developed throughout the 20th century. Yet the fundamental character of our mental activity is pursued in overlapping ways within these two traditions. Accordingly, the perspective on phenomenology drawn in this article will accommodate both traditions. The main concern here will be to characterize the discipline of phenomenology, in contemporary views, while also highlighting the historical tradition that brought the discipline into its own.

Basically, phenomenology studies the structure of various types of experience ranging from perception, thought, memory, imagination, emotion, desire, and volition to bodily awareness, embodied action, and social activity, including linguistic activity. The structure of these forms of experience typically involves what Husserl called "intentionality,” that is, the directedness of experience toward things in the world, the property of consciousness that it is a consciousness of or about something. According to classical Husserlian phenomenology, our experiences are directed toward - represent or "intend" - things only through particular concepts, thoughts, ideas, images, etc. These make up the meaning or content of a given experience, and are distinct from the things they present or mean.

The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or "horizonal" awareness), awareness of one's own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one's movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).

Furthermore, in a different dimension, we find various grounds or enabling conditions - conditions of the possibility - of intentionality, including embodiment, bodily skills, cultural context, language and other social practices, social background, and contextual aspects of intentional activities. Thus, phenomenology leads from conscious experience into conditions that help to give experience its intentionality. Traditional phenomenology has focussed on subjective, practical, and social conditions of experience. Recent philosophy of mind, however, has focussed especially on the neural substrate of experience, on how conscious experience and mental representation or intentionality is grounded in brain activity. It remains a difficult question how much of these grounds of experience fall within the province of phenomenology as a discipline. Cultural conditions thus seem closer to our experience and to our familiar self-understanding than do the electrochemical workings of our brain, much less our dependence on quantum-mechanical states of physical systems to which we may belong. The cautious thing to say is that phenomenology leads in some ways into at least some background conditions of our experience.

The discipline of phenomenology is defined by its domain of study, its methods, and its main results. Phenomenology studies structures of conscious experience as experienced from the first-person point of view, along with relevant conditions of experience. The central structure of an experience is its intentionality, the way it is directed through its content or meaning toward a certain object in the world.

We all experience various types of experience including perception, imagination, thought, emotion, desire, volition, and action. Thus, the domain of phenomenology is the range of experiences including these types (among others). Experience includes not only relatively passive experience as in vision or hearing, but also active experience as in walking or hammering a nail or kicking a ball. (The range will be specific to each species of being that enjoys consciousness; Our focus is on our own human experience. Not all conscious beings will, or will be able to, practice phenomenology, as we do.)

Conscious experiences have a unique feature: we experience them, we live through them or perform them. Other things in the world we may observe and engage. But we do not experience them, in the sense of living through or performing them. This experiential or first-person feature - that of being experienced - is an essential part of the nature or structure of conscious experience: as we say, "I see/think/desire/do . . ." This feature is both a phenomenological and an ontological feature of each experience: it is part of what it is for the experience to be experienced (phenomenological) and part of what it is for the experience to be (ontological).

How shall we study conscious experience? We reflect on various types of experiences just as we experience them. That is to say, we proceed from the first-person point of view. However, we do not normally characterize an experience at the time we are performing it. In many cases we do not have that capability: a state of intense anger or fear, for example, consumes the entire focus at the time. Rather, we acquire a background of having lived through a given type of experience, and we look to our familiarity with that type of experience: hearing a song, seeing a sunset, thinking about love, intending to jump a hurdle. The practice of phenomenology assumes such familiarity with the type of experiences to be characterized. Importantly, also, it is types of experience that phenomenology pursues, rather than a particular fleeting experience - unless its type is what interests us.

Classical phenomenologists practised some three distinguishable methods. (1) We describe a type of experience just as we find it in our own (past) experience. Thus, Husserl and Merleau-Ponty spoke of pure description of lived experience. (2) We interpret a type of experience by relating it to relevant features of context. In this vein, Heidegger and his followers spoke of hermeneutics, the art of interpretation in context, especially social and linguistic context. (3) We analyse the form of a type of experience. In the end, all the classical phenomenologists practised analysis of experience, factoring out notable features for further elaboration.

These traditional methods have been ramified in recent decades, expanding the methods available to phenomenology. Thus: (4) In a logico-semantic model of phenomenology, we specify the truth conditions for a type of thinking (say, where I think that dogs chase cats) or the satisfaction conditions for a type of intention (say, where I intend or will to jump that hurdle). (5) In the experimental paradigm of cognitive neuroscience, we design empirical experiments that tend to confirm or refute aspects of experience (say, where a brain scan shows electrochemical activity in a specific region of the brain thought to subserve a type of vision or emotion or motor control). This style of "neurophenomenology" assumes that conscious experience is grounded in neural activity in embodied action in appropriate surroundings - mixing pure phenomenology with biological and physical science in a way that was not wholly congenial to traditional phenomenologists.
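The logico-semantic approach in (4) can be given a schematic illustration. The notation below is an illustrative assumption for this article, not a formalism drawn from Husserl or any particular phenomenologist: an act of thinking is evaluated by the truth of its propositional content, while an act of intending is evaluated by whether the subject carries out the intended action.

```latex
% Schematic (illustrative) truth and satisfaction conditions.
% <S, thinks, p>: an act of thinking with subject S and propositional content p.
% <S, intends, A>: an act of intending with subject S and action-content A.
\begin{align*}
  \langle S,\ \mathrm{thinks},\ p \rangle \text{ is true}
    &\iff p \text{ obtains} \\
  \langle S,\ \mathrm{intends},\ A \rangle \text{ is satisfied}
    &\iff S \text{ brings it about that } S \text{ does } A
\end{align*}
```

On this sketch, the two types of act share a common subject-act-content form but differ in what contemporary philosophers call direction of fit: a thought is answerable to how the world is, whereas an intention calls for the world to be made to match its content.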

What makes an experience conscious is a certain awareness one has of the experience while living through or performing it. This form of inner awareness has been a topic of considerable debate, centuries after the issue arose with Locke's notion of self-consciousness on the heels of Descartes' sense of consciousness (conscience, co-knowledge). Does this awareness-of-experience consist in a kind of inner observation of the experience, as if one were doing two things at once? (Brentano argued no.) Is it a higher-order perception of one's mind's operation, or is it a higher-order thought about one's mental activity? (Recent theorists have proposed both.) Or is it a different form of inherent structure? (Sartre took this line, drawing on Brentano and Husserl.) These issues are beyond the scope of this article, but notice that these results of phenomenological analysis shape the characterization of the domain of study and the methodology appropriate to the domain. For awareness-of-experience is a defining trait of conscious experience, the trait that gives experience a first-person, lived character. It is that lived character of experience that allows a first-person perspective on the object of study, namely, experiences, and that perspective is characteristic of the methodology of phenomenology.

Conscious experience is the starting point of phenomenology, but experience shades off into less overtly conscious phenomena. As Husserl and others stressed, we are only vaguely aware of things in the margin or periphery of attention, and we are only implicitly aware of the wider horizon of things in the world around us. Moreover, as Heidegger stressed, in practical activities like walking along, or hammering a nail, or speaking our native tongue, we are not explicitly conscious of our habitual patterns of action. Furthermore, as psychoanalysts have stressed, much of our intentional mental activity is not conscious at all, but may become conscious in the process of therapy or interrogation, as we come to realize how we feel or think about something. We should allow, then, that the domain of phenomenology - our own experience - spreads out from conscious experience into semi-conscious and even unconscious mental activity, along with relevant background conditions implicitly invoked in our experience. (These issues are subject to debate; the point here is to open the door to the question of where to draw the boundary of the domain of phenomenology.)

To begin an elementary exercise in phenomenology, consider some typical experiences one might have in everyday life, characterized in the first person: (1) I see that fishing boat off the coast as dusk descends over the Pacific. (2) I hear that helicopter whirring overhead as it approaches the hospital. (3) I am thinking that phenomenology differs from psychology. (4) I wish that warm rain from Mexico were falling like last week. (5) I imagine a fearsome creature like that in my nightmare. (6) I intend to finish my writing by noon. (7) I walk carefully around the broken glass on the sidewalk. (8) I stroke a backhand cross-court with that certain underspin. (9) I am searching for the words to make my point in conversation.

Here are rudimentary characterizations of some familiar types of experience. Each sentence is a simple form of phenomenological description, articulating in everyday English the structure of the type of experience so described. The subject term "I" indicates the first-person structure of the experience: The intentionality proceeds from the subject. The verb indicates the type of intentional activity described: perception, thought, imagination, etc. Of central importance is the way that objects of awareness are presented or intended in our experiences, especially, the way we see or conceive or think about objects. The direct-object expression ("that fishing boat off the coast") articulates the mode of presentation of the object in the experience: the content or meaning of the experience, the core of what Husserl called noema. In effect, the object-phrase expresses the noema of the act described, that is, to the extent that language has appropriate expressive power. The overall form of the given sentence articulates the basic form of intentionality in the experience: Subject-act-content-object.

Rich phenomenological description or interpretation, as in Husserl, Merleau-Ponty et al., will far outrun such simple phenomenological descriptions as above. But such simple descriptions bring out the basic form of intentionality. As we interpret the phenomenological description further, we may assess the relevance of the context of experience. And we may turn to wider conditions of the possibility of that type of experience. In this way, in the practice of phenomenology, we classify, describe, interpret, and analyse structures of experiences in ways that answer to our own experience.

In such interpretive-descriptive analyses of experience, we immediately observe that we are analysing familiar forms of consciousness, conscious experience of or about this or that. Intentionality is thus the salient structure of our experience, and much of phenomenology proceeds as the study of different aspects of intentionality. Thus, we explore structures of the stream of consciousness, the enduring self, the embodied self, and bodily action. Furthermore, as we reflect on how these phenomena work, we turn to the analysis of relevant conditions that enable our experiences to occur as they do, and to represent or intend as they do. Phenomenology then leads into analyses of conditions of the possibility of intentionality, conditions involving motor skills and habits, background social practices, and often language, with its special place in human affairs.

The Oxford English Dictionary presents the following definition: "Phenomenology. a. The science of phenomena as distinct from being (ontology). b. That division of any science that describes and classifies its phenomena. From the Greek phainomenon, appearance." In philosophy, the term is used in the first sense, amid debates of theory and methodology. In physics and philosophy of science, the term is used in the second sense, albeit only occasionally.

In its root meaning, then, phenomenology is the study of phenomena: Literally, appearances as opposed to reality. This ancient distinction launched philosophy as we emerged from Plato's cave. Yet the discipline of phenomenology did not blossom until the 20th century and remains poorly understood in many circles of contemporary philosophy. What is that discipline? How did philosophy move from a root concept of phenomena to the discipline of phenomenology?

Originally, in the 18th century, "phenomenology" meant the theory of appearances fundamental to empirical knowledge, especially sensory appearances. The term seems to have been introduced by Johann Heinrich Lambert, a follower of Christian Wolff. Subsequently, Immanuel Kant used the term occasionally in various writings, as did Johann Gottlieb Fichte and G. W. F. Hegel. By 1889 Franz Brentano used the term to characterize what he called "descriptive psychology." From there Edmund Husserl took up the term for his new science of consciousness, and the rest is history.

Suppose we say phenomenology studies phenomena: what appears to us - and its appearing. How shall we understand phenomena? The term has a rich history in recent centuries, in which we can see traces of the emerging discipline of phenomenology.

In a strict empiricist vein, what appears before the mind are sensory data or qualia: either patterns of one's own sensations (seeing red here now, feeling this ticklish feeling, hearing that resonant bass tone) or sensible patterns of worldly things, say, the looks and smells of flowers (what John Locke called secondary qualities of things). In a strict rationalist vein, by contrast, what appears before the mind are ideas, rationally formed "clear and distinct ideas" (in René Descartes' ideal). In Immanuel Kant's theory of knowledge, fusing rationalist and empiricist aims, what appears to the mind are phenomena defined as things-as-they-appear or things-as-they-are-represented (in a synthesis of sensory and conceptual forms of objects-as-known). In Auguste Comte's theory of science, phenomena (phenomenes) are the facts (faits, what occurs) that a given science would explain.

In 18th and 19th century epistemology, then, phenomena are the starting points in building knowledge, especially science. Accordingly, in a familiar and still current sense, phenomena are whatever we observe (perceive) and seek to explain. As the discipline of psychology emerged late in the 19th century, however, phenomena took on a somewhat different guise. In Franz Brentano's Psychology from an Empirical Standpoint (1874), phenomena are what occur in the mind: Mental phenomena are acts of consciousness (or their contents), and physical phenomena are objects of external perception starting with colours and shapes. For Brentano, physical phenomena exist "intentionally" in acts of consciousness. This view revives a Medieval notion Brentano called "intentional in-existence," but the ontology remains undeveloped (what is it to exist in the mind, and do physical objects exist only in the mind?). More generally, we might say that phenomena are whatever we are conscious of: objects and events around us, other people, ourselves, even (in reflection) our own conscious experiences, as we experience these. In a certain technical sense, phenomena are things as they are given to our consciousness, whether in perception or imagination or thought or volition. This conception of phenomena would soon inform the new discipline of phenomenology.

Brentano distinguished descriptive psychology from genetic psychology. Where genetic psychology seeks the causes of various types of mental phenomena, descriptive psychology defines and classifies the various types of mental phenomena, including perception, judgment, emotion, etc. According to Brentano, every mental phenomenon, or act of consciousness, is directed toward some object, and only mental phenomena are so directed. This thesis of intentional directedness was the hallmark of Brentano's descriptive psychology. In 1889 Brentano used the term "phenomenology" for descriptive psychology, and the way was paved for Husserl's new science of phenomenology.

Phenomenology as we know it was launched by Edmund Husserl in his Logical Investigations (1900-01). Two importantly different lines of theory came together in that monumental work: Psychological theory, on the heels of Franz Brentano (and William James, whose Principles of Psychology appeared in 1890 and greatly impressed Husserl); and logical or semantic theory, on the heels of Bernard Bolzano and Husserl's contemporaries who founded modern logic, including Gottlob Frege. (Interestingly, both lines of research trace back to Aristotle, and both reached importantly new results in Husserl's day.)

Husserl's Logical Investigations was inspired by Bolzano's ideal of logic, while taking up Brentano's conception of descriptive psychology. In his Theory of Science (1835) Bolzano distinguished between subjective and objective ideas or representations (Vorstellungen). In effect Bolzano criticized Kant and before him the classical empiricists and rationalists for failing to make this sort of distinction, thereby rendering phenomena merely subjective. Logic studies objective ideas, including propositions, which in turn make up objective theories as in the sciences. Psychology would, by contrast, study subjective ideas, the concrete contents (occurrences) of mental activities in particular minds at a given time. Husserl was after both, within a single discipline. So phenomena must be reconceived as objective intentional contents (sometimes called intentional objects) of subjective acts of consciousness. Phenomenology would then study this complex of consciousness and correlated phenomena. In Ideas I (Book One, 1913) Husserl introduced two Greek words to capture his version of the Bolzanoan distinction: noesis and noema, from the Greek verb noeō, meaning to perceive, think, intend, whence the noun nous, or mind. The intentional process of consciousness is called noesis, while its ideal content is called noema. The noema of an act of consciousness Husserl characterized both as an ideal meaning and as "the object as intended.” Thus the phenomenon, or object-as-it-appears, becomes the noema, or object-as-it-is-intended. The interpretations of Husserl's theory of noema have been several and amount to different developments of Husserl's basic theory of intentionality. (Is the noema an aspect of the object intended, or rather a medium of intention?)

For Husserl, then, phenomenology integrates a kind of psychology with a kind of logic. It develops a descriptive or analytic psychology in that it describes and analyses types of subjective mental activity or experience, in short, acts of consciousness. Yet it develops a kind of logic - a theory of meaning (today we say logical semantics) - in that it describes and analyses objective contents of consciousness: Ideas, concepts, images, propositions, in short, ideal meanings of various types that serve as intentional contents, or noematic meanings, of various types of experience. These contents are shareable by different acts of consciousness, and in that sense they are objective, ideal meanings. Following Bolzano (and to some extent the platonistic logician Hermann Lotze), Husserl opposed any reduction of logic or mathematics or science to mere psychology, to how people happen to think, and in the same spirit he distinguished phenomenology from mere psychology. For Husserl, phenomenology would study consciousness without reducing the objective and shareable meanings that inhabit experience to merely subjective happenstances. Ideal meaning would be the engine of intentionality in acts of consciousness.

A clear conception of phenomenology awaited Husserl's development of a clear model of intentionality. Indeed, phenomenology and the modern concept of intentionality emerged hand-in-hand in Husserl's Logical Investigations (1900-01). With theoretical foundations laid in the Investigations, Husserl would then promote the radical new science of phenomenology in Ideas I (1913). And alternative visions of phenomenology would soon follow.

Phenomenology came into its own with Husserl, much as epistemology came into its own with Descartes, and ontology or metaphysics came into its own with Aristotle on the heels of Plato. Yet phenomenology has been practised, with or without the name, for many centuries. When Hindu and Buddhist philosophers reflected on states of consciousness achieved in a variety of meditative states, they were practising phenomenology. When Descartes, Hume, and Kant characterized states of perception, thought, and imagination, they were practising phenomenology. When Brentano classified varieties of mental phenomena (defined by the directedness of consciousness), he was practising phenomenology. When William James appraised kinds of mental activity in the stream of consciousness (including their embodiment and their dependence on habit), he too was practising phenomenology. And when recent analytic philosophers of mind have addressed issues of consciousness and intentionality, they have often been practising phenomenology. Still, the discipline of phenomenology, its roots tracing back through the centuries, came into full flower in Husserl.

Husserl's work was followed by a flurry of phenomenological writing in the first half of the 20th century. The diversity of traditional phenomenology is apparent in the Encyclopedia of Phenomenology (Kluwer Academic Publishers, 1997, Dordrecht and Boston), which features separate articles on some seven types of phenomenology. (1) Transcendental constitutive phenomenology studies how objects are constituted in pure or transcendental consciousness, setting aside questions of any relation to the natural world around us. (2) Naturalistic constitutive phenomenology studies how consciousness constitutes or takes things in the world of nature, assuming with the natural attitude that consciousness is part of nature. (3) Existential phenomenology studies concrete human existence, including our experience of free choice or action in concrete situations. (4) Generative historicist phenomenology studies how meaning, as found in our experience, is generated in historical processes of collective experience over time. (5) Genetic phenomenology studies the genesis of meanings of things within one's own stream of experience. (6) Hermeneutical phenomenology studies interpretive structures of experience, how we understand and engage things around us in our human world, including ourselves and others. (7) Realistic phenomenology studies the structure of consciousness and intentionality, assuming it occurs in a real world that is largely external to consciousness and not somehow brought into being by consciousness.

The most famous of the classical phenomenologists were Husserl, Heidegger, Sartre, and Merleau-Ponty. In these four thinkers we find different conceptions of phenomenology, different methods, and different results. A brief sketch of their differences will capture both a crucial period in the history of phenomenology and a sense of the diversity of the field of phenomenology.

In his Logical Investigations (1900-01) Husserl outlined a complex system of philosophy, moving from logic to philosophy of language, to ontology (theory of universals and parts of wholes), to a phenomenological theory of intentionality, and finally to a phenomenological theory of knowledge. Then in Ideas I (1913) he focussed squarely on phenomenology itself. Husserl defined phenomenology as "the science of the essence of consciousness,” centered on the defining trait of intentionality, approached explicitly "in the first person." In this spirit, we may say phenomenology is the study of consciousness - that is, conscious experience of various types - as experienced from the first-person point of view. In this discipline we study different forms of experience just as we experience them, from the perspective of the subject living through or performing them. Thus, we characterize experiences of seeing, hearing, imagining, thinking, feeling (i.e., emotion), wishing, desiring, willing, and acting, that is, embodied volitional activities of walking, talking, cooking, carpentering, etc. However, not just any characterization of an experience will do. Phenomenological analysis of a given type of experience will feature the ways in which we ourselves would experience that form of conscious activity. And the leading property of our familiar types of experience is their intentionality, their being a consciousness of or about something, something experienced or presented or engaged in a certain way. How I see or conceptualize or understand the object I am dealing with defines the meaning of that object in my current experience. Thus, phenomenology features a study of meaning, in a wide sense that includes more than what is expressed in language.

In Ideas I Husserl presented phenomenology with a transcendental turn. In part this means that Husserl took on the Kantian idiom of "transcendental idealism,” looking for conditions of the possibility of knowledge, or of consciousness generally, and arguably turning away from any reality beyond phenomena. But Husserl's transcendental turn also involved his discovery of the method of epoché (from the Greek skeptics' notion of abstaining from belief). We are to practice phenomenology, Husserl proposed, by "bracketing" the question of the existence of the natural world around us. We thereby turn our attention, in reflection, to the structure of our own conscious experience. Our first key result is the observation that each act of consciousness is a consciousness of something, that is, intentional, or directed toward something. Consider my visual experience wherein I see a tree across the square. In phenomenological reflection, we need not concern ourselves with whether the tree exists: my experience is of a tree whether or not such a tree exists. However, we do need to concern ourselves with how the object is meant or intended. I see a Eucalyptus tree, not a Yucca tree; I see that object as a Eucalyptus tree, with a certain shape and with bark stripping off, etc. Thus, bracketing the tree itself, we turn our attention to my experience of the tree, and specifically to the content or meaning in my experience. This tree-as-perceived Husserl calls the noema or noematic sense of the experience.

Philosophers succeeding Husserl debated the proper characterization of phenomenology, arguing over its results and its methods. Adolf Reinach, an early student of Husserl's (who died in World War I), argued that phenomenology should remain allied with a realist ontology, as in Husserl's Logical Investigations. Roman Ingarden, a Polish phenomenologist of the next generation, continued the resistance to Husserl's turn to transcendental idealism. For such philosophers, phenomenology should not bracket questions of being or ontology, as the method of epoché would suggest. And they were not alone. Martin Heidegger studied Husserl's early writings, worked as Assistant to Husserl in 1916, and in 1928 succeeded Husserl in the prestigious chair at the University of Freiburg. Heidegger had his own ideas about phenomenology.

In Being and Time (1927) Heidegger unfurled his rendition of phenomenology. For Heidegger, we and our activities are always "in the world,” our being is being-in-the-world, so we do not study our activities by bracketing the world, rather we interpret our activities and the meaning things have for us by looking to our contextual relations to things in the world. Indeed, for Heidegger, phenomenology resolves into what he called "fundamental ontology.” We must distinguish beings from their being, and we begin our investigation of the meaning of being in our own case, examining our own existence in the activity of "Dasein" (that being whose being is in each case my own). Heidegger resisted Husserl's neo-Cartesian emphasis on consciousness and subjectivity, including how perception presents things around us. By contrast, Heidegger held that our more basic ways of relating to things are in practical activities like hammering, where the phenomenology reveals our situation in a context of equipment and in being-with-others.

In Being and Time Heidegger approached phenomenology, in a quasi-poetic idiom, through the root meanings of "logos" and "phenomena,” so that phenomenology is defined as the art or practice of "letting things show themselves.” In Heidegger's inimitable linguistic play on the Greek roots, “phenomenology” means . . . - to let that which shows itself be seen from itself in the very way in which it shows itself from itself. Here Heidegger explicitly parodies Husserl's call, "To the things themselves,” or "To the phenomena themselves!" Heidegger went on to emphasize practical forms of comportment or relating (Verhalten), as in hammering a nail, as opposed to representational forms of intentionality, as in seeing or thinking about a hammer. Much of Being and Time develops an existential interpretation of our modes of being including, famously, our being-toward-death.

In a very different style, in clear analytical prose, in the text of a lecture course called The Basic Problems of Phenomenology (1927), Heidegger traced the question of the meaning of being from Aristotle through many other thinkers into the issues of phenomenology. Our understanding of beings and their being comes ultimately through phenomenology. Here the connection with classical issues of ontology is more apparent, and consonant with Husserl's vision in the Logical Investigations (an early source of inspiration for Heidegger). One of Heidegger's most innovative ideas was his conception of the "ground" of being, looking to modes of being more fundamental than the things around us (from trees to hammers). Heidegger questioned the contemporary concern with technology, and his writing might suggest that our scientific theories are historical artifacts that we use in technological practice, rather than systems of ideal truth (as Husserl had held). Our deep understanding of being, in our own case, comes rather from phenomenology, Heidegger held.

In the 1930s phenomenology migrated from Austrian and then German philosophy into French philosophy. The way had been paved in Marcel Proust's In Search of Lost Time, in which the narrator recounts in close detail his vivid recollections of experiences, including his famous associations with the smell of freshly baked madeleines. This sensibility to experience traces to Descartes' work, and French phenomenology has been an effort to preserve the central thrust of Descartes' insights while rejecting mind-body dualism. The experience of one's own body, or one's lived or living body, has been an important motif in many French philosophers of the 20th century.

In the novel Nausea (1936) Jean-Paul Sartre described a bizarre course of experience in which the protagonist, writing in the first person, describes how ordinary objects lose their meaning until he encounters pure being at the foot of a chestnut tree, and in that moment recovers his sense of his own freedom. In Being and Nothingness (1943, written partly while a prisoner of war), Sartre developed his conception of phenomenological ontology. Consciousness is a consciousness of objects, as Husserl had stressed. In Sartre's model of intentionality, the central player in consciousness is a phenomenon, and the occurrence of a phenomenon is just a consciousness-of-an-object. The chestnut tree I see is, for Sartre, such a phenomenon in my consciousness. Indeed, all things in the world, as we normally experience them, are phenomena, beneath or behind which lies their "being-in-itself.” Consciousness, by contrast, has "being-for-itself,” inasmuch as consciousness is not only a consciousness-of-its-object but also a pre-reflective consciousness-of-itself (conscience de soi). Yet for Sartre, unlike Husserl, that "I" or self is nothing but a sequence of acts of consciousness, notably including radically free choices (like a Humean bundle of perceptions).

For Sartre, the practice of phenomenology proceeds by a deliberate reflection on the structure of consciousness. Sartre's method is in effect a literary style of interpretive description of different types of experience in relevant situations - a practice that does not really fit the methodological proposals of either Husserl or Heidegger, but makes use of Sartre's great literary skill. (Sartre wrote many plays and novels and was awarded the Nobel Prize in Literature.)

Sartre's phenomenology in Being and Nothingness became the philosophical foundation for his popular philosophy of existentialism, sketched in his famous lecture "Existentialism is a Humanism" (1945). In Being and Nothingness Sartre emphasized the experience of freedom of choice, especially the project of choosing oneself, the defining pattern of one's past actions. Through vivid description of the "look" of the Other, Sartre laid groundwork for the contemporary political significance of the concept of the Other (as in other groups or ethnicities). Indeed, in The Second Sex (1949) Simone de Beauvoir, Sartre's life-long companion, launched contemporary feminism with her nuanced account of the perceived role of women as Other.

In 1940s Paris, Maurice Merleau-Ponty joined with Sartre and Beauvoir in developing phenomenology. In Phenomenology of Perception (1945) Merleau-Ponty developed a rich variety of phenomenology emphasizing the role of the body in human experience. Unlike Husserl, Heidegger, and Sartre, Merleau-Ponty looked to experimental psychology, analysing the reported experience of amputees who felt sensations in a phantom limb. Merleau-Ponty rejected both associationist psychology, focussed on correlations between sensation and stimulus, and intellectualist psychology, focussed on rational construction of the world in the mind. (Think of the behaviorist and computationalist models of mind in more recent decades of empirical psychology.) Instead, Merleau-Ponty focussed on the "body image,” our experience of our own body and its significance in our activities. Extending Husserl's account of the lived body (as opposed to the physical body), Merleau-Ponty resisted the traditional Cartesian separation of mind and body. For the body image is neither in the mental realm nor in the mechanical-physical realm. Rather, my body is, as it were, me in my engaged action with things I perceive including other people.

The scope of Phenomenology of Perception is characteristic of the breadth of classical phenomenology, not least because Merleau-Ponty drew (with generosity) on Husserl, Heidegger, and Sartre while fashioning his own innovative vision of phenomenology. His phenomenology addressed the role of attention in the phenomenal field, the experience of the body, the spatiality of the body, the motility of the body, the body in sexual being and in speech, other selves, temporality, and the character of freedom so important in French existentialism. Near the end of a chapter on the cogito (Descartes' "I think, therefore I am"), Merleau-Ponty succinctly captures his embodied, existential form of phenomenology, writing: Insofar as, when I reflect on the essence of subjectivity, I find it bound up with that of the body and that of the world, this is because my existence as subjectivity [= consciousness] is merely one with my existence as a body and with the existence of the world, and because the subject that I am, when taken concretely, is inseparable from this body and this world. In short, consciousness is embodied (in the world), and equally body is infused with consciousness (with cognition of the world).

In the years since Husserl, Heidegger, et al. wrote, phenomenologists have dug into all these classical issues, including intentionality, temporal awareness, intersubjectivity, practical intentionality, and the social and linguistic contexts of human activity. Interpretation of historical texts by Husserl et al. has played a prominent role in this work, both because the texts are rich and difficult and because the historical dimension is itself part of the practice of continental European philosophy. Since the 1960s, philosophers trained in the methods of analytic philosophy have also dug into the foundations of phenomenology, with an eye to 20th century work in philosophy of logic, language, and mind.

Phenomenology was already linked with logical and semantic theory in Husserl's Logical Investigations. Analytic phenomenology picks up on that connection. In particular, Dagfinn Føllesdal and J. N. Mohanty have explored historical and conceptual relations between Husserl's phenomenology and Frege's logical semantics (in Frege's "On Sense and Reference,” 1892). For Frege, an expression refers to an object by way of a sense: Thus, two expressions (say, "the morning star" and "the evening star") may refer to the same object (Venus) but express different senses with different manners of presentation. For Husserl, similarly, an experience (or an act of consciousness) intends or refers to an object by way of a noema or noematic sense: Thus, two experiences may refer to the same object but have different noematic senses involving different ways of presenting the object (for example, in seeing the same object from different sides). Indeed, for Husserl, the theory of intentionality is a generalization of the theory of linguistic reference: as linguistic reference is mediated by sense, so intentional reference is mediated by noematic sense.

More recently, analytic philosophers of mind have rediscovered phenomenological issues of mental representation, intentionality, consciousness, sensory experience, intentional content, and context-of-thought. Some of these analytic philosophers of mind hark back to William James and Franz Brentano at the origins of modern psychology, and some look to empirical research in today's cognitive neuroscience. Some researchers have begun to combine phenomenological issues with issues of neuroscience and behavioural studies and mathematical modelling. Such studies will extend the methods of traditional phenomenology as the Zeitgeist moves on. We address philosophy of mind below.

The discipline of phenomenology forms one basic field in philosophy among others. How is phenomenology distinguished from, and related to, other fields in philosophy?

Traditionally, philosophy includes at least four core fields or disciplines: Ontology, epistemology, ethics, logic. Suppose phenomenology joins that list. Consider then these elementary definitions of field: (1) Ontology is the study of beings or their being - what is. (2) Epistemology is the study of knowledge - how we know. (3) Logic is the study of valid reasoning - how to reason. (4) Ethics is the study of right and wrong - how we should act. (5) Phenomenology is the study of our experience - how we experience. The domains of study in these five fields are clearly different, and they seem to call for different methods of study.

Philosophers have sometimes argued that one of these fields is "first philosophy,” the most fundamental discipline, on which all philosophy or all knowledge or wisdom rests. Historically (it may be argued), Socrates and Plato put ethics first, then Aristotle put metaphysics or ontology first, then Descartes put epistemology first, then Russell put logic first, and then Husserl (in his later transcendental phase) put phenomenology first.

Consider epistemology. As we saw, phenomenology helps to define the phenomena on which knowledge claims rest, according to modern epistemology. On the other hand, phenomenology itself claims to achieve knowledge about the nature of consciousness, a distinctive kind of first-person knowledge, through a form of intuition. Consider logic. As we saw, a logical theory of meaning led Husserl into the theory of intentionality, the heart of phenomenology. On one account, phenomenology explicates the intentional or semantic force of ideal meanings, and propositional meanings are central to logical theory. But logical structure is expressed in language, either ordinary language or symbolic languages like those of predicate logic or mathematics or computer systems. It remains an important issue of debate where and whether language shapes specific forms of experience (thought, perception, emotion) and their content or meaning. So there is an important (if disputed) relation between phenomenology and logico-linguistic theory, especially philosophical logic and philosophy of language (as opposed to mathematical logic per se).

Consider ontology. Phenomenology studies (among other things) the nature of consciousness, which is a central issue in metaphysics or ontology, and one that leads into the traditional mind-body problem. Husserlian methodology would bracket the question of the existence of the surrounding world, thereby separating phenomenology from the ontology of the world. Yet Husserl's phenomenology presupposes theory about species and individuals (universals and particulars), relations of part and whole, and ideal meanings - all parts of ontology.

Now consider ethics: Phenomenology might play a role in ethics by offering analyses of the structure of will, valuing, happiness, and care for others (in empathy and sympathy). Historically, though, ethics has been on the horizon of phenomenology. Husserl largely avoided ethics in his major works, though he featured the role of practical concerns in the structure of the life-world or of Geist (spirit, or culture, as in Zeitgeist). He once delivered a course of lectures giving ethics (like logic) a basic place in philosophy, indicating the importance of the phenomenology of sympathy in grounding ethics. In Being and Time Heidegger claimed not to pursue ethics while discussing phenomena ranging from care, conscience, and guilt to "fallenness" and "authenticity" (all phenomena with theological echoes). In Being and Nothingness Sartre analysed with subtlety the logical problem of "bad faith,” yet he developed an ontology of value as produced by willing in good faith (which sounds like a revised Kantian foundation for morality). Beauvoir sketched an existentialist ethics, and Sartre left unpublished notebooks on ethics. However, an explicit phenomenological approach to ethics emerged in the works of Emmanuel Levinas, a Lithuanian phenomenologist who heard Husserl and Heidegger in Freiburg before moving to Paris. In Totality and Infinity (1961), modifying themes drawn from Husserl and Heidegger, Levinas focussed on the significance of the "face" of the other, explicitly developing grounds for ethics in this range of phenomenology, writing in an impressionistic style of prose with allusions to religious experience.

Allied with ethics are political and social philosophies. Sartre and Merleau-Ponty were politically engaged in 1940s Paris, and their existential philosophies (phenomenologically based) suggest a political theory based in individual freedom. Sartre later sought an explicit blend of existentialism with Marxism. Still, political theory has remained on the borders of phenomenology. Social theory, however, has been closer to phenomenology as such. Husserl analysed the phenomenological structure of the life-world and Geist generally, including our role in social activity. Heidegger stressed social practice, which he found more primordial than individual consciousness. Alfred Schutz developed a phenomenology of the social world. Sartre continued the phenomenological appraisal of the meaning of the other, the fundamental social formation. Moving outward from phenomenological issues, Michel Foucault studied the genesis and meaning of social institutions, from prisons to insane asylums. And Jacques Derrida has long practised a kind of phenomenology of language, pursuing social meaning in the "deconstruction" of wide-ranging texts. Aspects of French "poststructuralist" theory are sometimes interpreted as broadly phenomenological, but such issues are beyond the present purview.

Classical phenomenology, then, ties into certain areas of epistemology, logic, and ontology, and leads into parts of ethical, social, and political theory.

It ought to be obvious that phenomenology has a lot to say in the area called philosophy of mind. Yet the traditions of phenomenology and analytic philosophy of mind have not been closely joined, despite overlapping areas of interest. So it is appropriate to close this survey of phenomenology by addressing philosophy of mind, one of the most vigorously debated areas in recent philosophy.

The tradition of analytic philosophy began, early in the 20th century, with analyses of language, notably in the works of Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein. Then in The Concept of Mind (1949) Gilbert Ryle developed a series of analyses of language about different mental states, including sensation, belief, and will. Though Ryle is commonly deemed a philosopher of ordinary language, Ryle himself said The Concept of Mind could be called phenomenology. In effect, Ryle analysed our phenomenological understanding of mental states as reflected in ordinary language about the mind. From this linguistic phenomenology Ryle argued that Cartesian mind-body dualism involves a category mistake (the logic or grammar of mental verbs - "believe,” "see,” etc. - does not mean that we ascribe belief, sensation, etc., to "the ghost in the machine"). With Ryle's rejection of mind-body dualism, the mind-body problem was re-awakened: What is the ontology of mind/body, and how are mind and body related?

René Descartes, in his epoch-making Meditations on First Philosophy (1641), had argued that minds and bodies are two distinct kinds of being or substance with two distinct kinds of attributes or modes: Bodies are characterized by spatiotemporal physical properties, while minds are characterized by properties of thinking (including seeing, feeling, etc.). Centuries later, phenomenology would find, with Brentano and Husserl, that mental acts are characterized by consciousness and intentionality, while natural science would find that physical systems are characterized by mass and force, ultimately by gravitational, electromagnetic, and quantum fields. Where do we find consciousness and intentionality in the quantum-electromagnetic-gravitational field that, by hypothesis, orders everything in the natural world in which we humans and our minds exist? That is the mind-body problem today. In short, phenomenology by any other name lies at the heart of the contemporary mind-body problem.

After Ryle, philosophers sought a more explicit and generally naturalistic ontology of mind. In the 1950s materialism was argued anew, urging that mental states are identical with states of the central nervous system. The classical identity theory holds that each token mental state (in a particular person's mind at a particular time) is identical with a token brain state (in that person's brain at that time). A stronger materialism holds, instead, that each type of mental state is identical with a type of brain state. But materialism does not fit comfortably with phenomenology. For it is not obvious how conscious mental states as we experience them - sensations, thoughts, emotions - can simply be the complex neural states that somehow subserve or implement them. If mental states and neural states are simply identical, in token or in type, where in our scientific theory of mind does the phenomenology occur - is it not simply replaced by neuroscience? And yet experience is part of what is to be explained by neuroscience.

In the late 1960s and 1970s the computer model of mind set in, and functionalism became the dominant model of mind. On this model, mind is not what the brain consists in (electrochemical transactions in neurons in vast complexes). Instead, mind is what brains do: their function of mediating between information coming into the organism and behaviour proceeding from the organism. Thus, a mental state is a functional state of the brain or of the human (or animal) organism. More specifically, on a favourite variation of functionalism, the mind is a computing system: Mind is to brain as software is to hardware; thoughts are just programs running on the brain's "wetware.” Since the 1970s the cognitive sciences - from experimental studies of cognition to neuroscience - have tended toward a mix of materialism and functionalism. Gradually, however, philosophers found that phenomenological aspects of the mind pose problems for the functionalist paradigm too.

In the early 1970s Thomas Nagel argued in "What Is It Like to Be a Bat?" (1974) that consciousness itself - especially the subjective character of what it is like to have a certain type of experience - escapes physical theory. Many philosophers pressed the case that sensory qualia - what it is like to feel pain, to see red, etc. - are not addressed or explained by a physical account of either brain structure or brain function. Consciousness has properties of its own. And yet, we know, it is closely tied to the brain. And, at some level of description, neural activities implement computation.

In the 1980s John Searle argued in Intentionality (1983) (and further in The Rediscovery of the Mind (1991)) that intentionality and consciousness are essential properties of mental states. For Searle, our brains produce mental states with properties of consciousness and intentionality, and this is all part of our biology, yet consciousness and intentionality require a "first-person" ontology. Searle also argued that computers simulate but do not have mental states characterized by intentionality. As Searle argued, a computer system has a syntax (processing symbols of certain shapes) but has no semantics (the symbols lack meaning: we interpret the symbols). In this way Searle rejected both materialism and functionalism, while insisting that mind is a biological property of organisms like us: our brains "secrete" consciousness.

The analysis of consciousness and intentionality is central to phenomenology as appraised above, and Searle's theory of intentionality reads like a modernized version of Husserl's. (Contemporary logical theory takes the form of stating truth conditions for propositions, and Searle characterizes a mental state's intentionality by specifying its "satisfaction conditions"). However, there is an important difference in background theory. For Searle explicitly assumes the basic worldview of natural science, holding that consciousness is part of nature. But Husserl explicitly brackets that assumption, and later phenomenologists - including Heidegger, Sartre, Merleau-Ponty - seem to seek a certain sanctuary for phenomenology beyond the natural sciences. And yet phenomenology itself should be largely neutral about further theories of how experience arises, notably from brain activity.

The philosophy or theory of mind overall may be factored into the following disciplines or ranges of theory relevant to mind: Phenomenology studies conscious experience as experienced, analysing the structure - the types, intentional forms and meanings, dynamics, and (certain) enabling conditions - of perception, thought, imagination, emotion, and volition and action.

Neuroscience studies the neural activities that serve as biological substrate to the various types of mental activity, including conscious experience. Neuroscience will be framed by evolutionary biology (explaining how neural phenomena evolved) and ultimately by basic physics (explaining how biological phenomena are grounded in physical phenomena). Here lie the intricacies of the natural sciences. Part of what the sciences are accountable for is the structure of experience, analysed by phenomenology.

Cultural analysis studies the social practices that help to shape or serve as cultural substrate of the various types of mental activity, including conscious experience. Here we study the import of language and other social practices.

Ontology of mind studies the ontological type of mental activity in general, ranging from perception (which involves causal input from environment to experience) to volitional action (which involves causal output from volition to bodily movement).

This division of labour in the theory of mind can be seen as an extension of Brentano's original distinction between descriptive and genetic psychology. Phenomenology offers descriptive analyses of mental phenomena, while neuroscience (and wider biology and ultimately physics) offers models of explanation of what causes or gives rise to mental phenomena. Cultural theory offers analyses of social activities and their impact on experience, including ways language shapes our thought, emotion, and motivation. And ontology frames all these results within a basic scheme of the structure of the world, including our own minds.

Meanwhile, from an epistemological standpoint, all these ranges of theory about mind begin with how we observe and reason about and seek to explain phenomena we encounter in the world. And that is where phenomenology begins. Moreover, how we understand each piece of theory, including theory about mind, is central to the theory of intentionality, as it were, the semantics of thought and experience in general. And that is the heart of phenomenology.

The discipline of phenomenology may be defined as the study of structures of experience or consciousness. Literally, phenomenology is the study of "phenomena": appearances of things, or things as they appear in our experience, or the ways we experience things, thus the meanings things have in our experience. Phenomenology studies conscious experience as experienced from the subjective or first-person point of view. This field of philosophy is then to be distinguished from, and related to, the other main fields of philosophy: ontology (the study of being or what is), epistemology (the study of knowledge), logic (the study of valid reasoning), ethics (the study of right and wrong action), etc.

The historical movement of phenomenology is the philosophical tradition launched in the first half of the 20th century by Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, and Jean-Paul Sartre. In that movement, the discipline of phenomenology was prized as the proper foundation of all philosophy - as opposed, say, to ethics or metaphysics or epistemology. The methods and characterization of the discipline were widely debated by Husserl and his successors, and these debates continue to the present day. (The definition of phenomenology offered above will thus be debatable, for example, by Heideggerians, but it remains the starting point in characterizing the discipline.)

In recent philosophy of mind, the term "phenomenology" is often restricted to the characterization of sensory qualities of seeing, hearing, etc.: what it is like to have sensations of various kinds. However, our experience is normally much richer in content than mere sensation. Accordingly, in the phenomenological tradition, phenomenology is given a much wider range, addressing the meaning things have in our experience, notably, the significance of objects, events, tools, the flow of time, the self, and others, as these things arise and are experienced in our "life-world.”

Phenomenology as a discipline has been central to the tradition of continental European philosophy throughout the 20th century, while philosophy of mind has evolved in the Austro-Anglo-American tradition of analytic philosophy that developed throughout the 20th century. Yet the fundamental character of our mental activity is pursued in overlapping ways within these two traditions. Accordingly, the perspective on phenomenology drawn in this article will accommodate both traditions. The main concern here will be to characterize the discipline of phenomenology, in contemporary views, while also highlighting the historical tradition that brought the discipline into its own.

Basically, phenomenology studies the structure of various types of experience ranging from perception, thought, memory, imagination, emotion, desire, and volition to bodily awareness, embodied action, and social activity, including linguistic activity. The structure of these forms of experience typically involves what Husserl called "intentionality,” that is, the directedness of experience toward things in the world, the property of consciousness that it is a consciousness of or about something. According to classical Husserlian phenomenology, our experience is directed toward - represents or "intends" - things only through particular concepts, thoughts, ideas, images, etc. These make up the meaning or content of a given experience, and are distinct from the things they present or mean.

The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or "horizonal" awareness), awareness of one's own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one's movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).

Furthermore, in a different dimension, we find various grounds or enabling conditions - conditions of the possibility - of intentionality, including embodiment, bodily skills, cultural context, language and other social practices, social background, and contextual aspects of intentional activities. Thus, phenomenology leads from conscious experience into conditions that help to give experience its intentionality. Traditional phenomenology has focussed on subjective, practical, and social conditions of experience. Recent philosophy of mind, however, has focussed especially on the neural substrate of experience, on how conscious experience and mental representation or intentionality is grounded in brain activity. It remains a difficult question how much of these grounds of experience fall within the province of phenomenology as a discipline. Cultural conditions thus seem closer to our experience and to our familiar self-understanding than do the electrochemical workings of our brain, much less our dependence on quantum-mechanical states of physical systems to which we may belong. The cautious thing to say is that phenomenology leads in some ways into at least some background conditions of our experience.

Phenomenology studies structures of conscious experience as experienced from the first-person point of view, along with relevant conditions of experience. The central structure of an experience is its intentionality, the way it is directed through its content or meaning toward a certain object in the world.

We all experience various types of experience including perception, imagination, thought, emotion, desire, volition, and action. Thus, the domain of phenomenology is the range of experiences including these types (among others). Experience includes not only relatively passive experience as in vision or hearing, but also active experience as in walking or hammering a nail or kicking a ball. (The range will be specific to each species of being that enjoys consciousness; our focus is on our own, human, experience. Not all conscious beings will, or will be able to, practice phenomenology, as we do.)

Conscious experiences have a unique feature: We experience them, we live through them or perform them. Other things in the world we may observe and engage. But we do not experience them, in the sense of living through or performing them. This experiential or first-person feature - that of being experienced - is an essential part of the nature or structure of conscious experience: as we say, "I see / think / desire / do . . ." This feature is both a phenomenological and an ontological feature of each experience: it is part of what it is for the experience to be experienced (phenomenological) and part of what it is for the experience to be (ontological).

How shall we study conscious experience? We reflect on various types of experiences just as we experience them. That is to say, we proceed from the first-person point of view. However, we do not normally characterize an experience at the time we are performing it. In many cases we do not have that capability: a state of intense anger or fear, for example, consumes the entire focus at the time. Rather, we acquire a background of having lived through a given type of experience, and we look to our familiarity with that type of experience: hearing a song, seeing the sun set, thinking about love, intending to jump a hurdle. The practice of phenomenology assumes such familiarity with the type of experiences to be characterized. Importantly, it is types of experience that phenomenology pursues, rather than a particular fleeting experience - unless its type is what interests us.

Classical phenomenologists practised some three distinguishable methods. (1) We describe a type of experience just as we find it in our own (past) experience. Thus, Husserl and Merleau-Ponty spoke of pure description of lived experience. (2) We interpret a type of experience by relating it to relevant features of context. In this vein, Heidegger and his followers spoke of hermeneutics, the art of interpretation in context, especially social and linguistic context. (3) We analyse the form of a type of experience. In the end, all the classical phenomenologists practised analysis of experience, factoring out notable features for further elaboration.

These traditional methods have been ramified in recent decades, expanding the methods available to phenomenology. Thus: (4) In a logico-semantic model of phenomenology, we specify the truth conditions for a type of thinking (say, where I think that dogs chase cats) or the satisfaction conditions for a type of intention (say, where I intend or will to jump that hurdle). (5) In the experimental paradigm of cognitive neuroscience, we design empirical experiments that tend to confirm or refute aspects of experience (say, where a brain scan shows electrochemical activity in a specific region of the brain thought to subserve a type of vision or emotion or motor control). This style of "neurophenomenology" assumes that conscious experience is grounded in neural activity in embodied action in appropriate surroundings - mixing pure phenomenology with biological and physical science in a way that was not wholly congenial to traditional phenomenologists.

What makes an experience conscious is a certain awareness one has of the experience while living through or performing it. This form of inner awareness has been a topic of considerable debate, centuries after the issue arose with Locke's notion of self-consciousness on the heels of Descartes' sense of consciousness (conscience, co-knowledge). Does this awareness-of-experience consist in a kind of inner observation of the experience, as if one were doing two things at once? (Brentano argued no.) Is it a higher-order perception of one's mind's operation, or is it a higher-order thought about one's mental activity? (Recent theorists have proposed both.) Or is it a different form of inherent structure? (Sartre took this line, drawing on Brentano and Husserl.) These issues are beyond the scope of this article, but notice that these results of phenomenological analysis shape the characterization of the domain of study and the methodology appropriate to the domain. For awareness-of-experience is a defining trait of conscious experience, the trait that gives experience a first-person, lived character. It is that lived character of experience that allows a first-person perspective on the object of study, namely, experience, and that perspective is characteristic of the methodology of phenomenology.

Conscious experience is the starting point of phenomenology, but experience shades off into less overtly conscious phenomena. As Husserl and others stressed, we are only vaguely aware of things in the margin or periphery of attention, and we are only implicitly aware of the wider horizon of things in the world around us. Moreover, as Heidegger stressed, in practical activities like walking along, or hammering a nail, or speaking our native tongue, we are not explicitly conscious of our habitual patterns of action. Furthermore, as psychoanalysts have stressed, much of our intentional mental activity is not conscious at all, but may become conscious in the process of therapy or interrogation, as we come to realize how we feel or think about something. We should allow, then, that the domain of phenomenology - our own experience - spreads out from conscious experience into semiconscious and even unconscious mental activity, along with relevant background conditions implicitly invoked in our experience. (These issues are subject to debate; the point here is to open the door to the question of where to draw the boundary of the domain of phenomenology.)

To begin an elementary exercise in phenomenology, consider some typical experiences one might have in everyday life, characterized in the first person: (1) I see that fishing boat off the coast as dusk descends over the Pacific. (2) I hear that helicopter whirring overhead as it approaches the hospital. (3) I am thinking that phenomenology differs from psychology. (4) I wish that warm rain from Mexico were falling like last week. (5) I imagine a fearsome creature like that in my nightmare. (6) I intend to finish my writing by noon. (7) I walk carefully around the broken glass on the sidewalk. (8) I stroke a backhand cross-court with that certain underspin. (9) I am searching for the words to make my point in conversation.

Here are rudimentary characterizations of some familiar types of experience. Each sentence is a simple form of phenomenological description, articulating in everyday English the structure of the type of experience so described. The subject term "I" indicates the first-person structure of the experience: the intentionality proceeds from the subject. The verb indicates the type of intentional activity described: perception, thought, imagination, etc. Of central importance is the way that objects of awareness are presented or intended in our experiences, especially the way we see or conceive or think about objects. The direct-object expression ("that fishing boat off the coast") articulates the mode of presentation of the object in the experience: the content or meaning of the experience, the core of what Husserl called noema. In effect, the object-phrase expresses the noema of the act described, that is, to the extent that language has appropriate expressive power. The overall form of the given sentence articulates the basic form of intentionality in the experience: subject-act-content-object.

Fruitful phenomenological description or interpretation, as in Husserl or Merleau-Ponty, will far outrun such simple phenomenological descriptions as above. But such simple descriptions bring out the basic form of intentionality. As we interpret the phenomenological description further, we may assess the relevance of the context of experience. And we may turn to wider conditions of the possibility of that type of experience. In this way, in the practice of phenomenology, we classify, describe, interpret, and analyse structures of experiences in ways that answer to our own experience.

In such interpretive-descriptive analyses of experience, we immediately observe that we are analysing familiar forms of consciousness, conscious experience of or about this or that. Intentionality is thus the salient structure of our experience, and much of phenomenology proceeds as the study of different aspects of intentionality. Thus, we explore structures of the stream of consciousness, the enduring self, the embodied self, and bodily action. Furthermore, as we reflect on how these phenomena work, we turn to the analysis of relevant conditions that enable our experiences to occur as they do, and to represent or intend as they do. Phenomenology then leads into analyses of conditions of the possibility of intentionality, conditions involving motor skills and habits, background social practices, and often language, with its special place in human affairs. The Oxford English Dictionary presents the following definition: "Phenomenology. (i) The science of phenomena as distinct from being (ontology). (ii) That division of any science that describes and classifies its phenomena. From the Greek phainomenon, appearance." In philosophy, the term is used in the first sense, amid debates of theory and methodology. In physics and philosophy of science, the term is used in the second sense, even if only occasionally.

In its root meaning, then, phenomenology is the study of phenomena: Literally, appearances as opposed to reality. This ancient distinction launched philosophy as we emerged from Plato's cave. Yet the discipline of phenomenology did not blossom until the 20th century and remains poorly understood in many circles of contemporary philosophy. What is that discipline? How did philosophy move from a root concept of phenomena to the discipline of phenomenology?

Originally, in the 18th century, "phenomenology" meant the theory of appearances fundamental to empirical knowledge, especially sensory appearances. The term seems to have been introduced by Johann Heinrich Lambert, a follower of Christian Wolff. Subsequently, Immanuel Kant used the term occasionally in various writings, as did Johann Gottlieb Fichte and G. W. F. Hegel. By 1889 Franz Brentano used the term to characterize what he called "descriptive psychology.” From there Edmund Husserl took up the term for his new science of consciousness, and the rest is history.

Suppose we say phenomenology studies phenomena: what appears to us - and its appearing. How shall we understand phenomena? The term has a rich history in recent centuries, in which we can see traces of the emerging discipline of phenomenology.

In a strict empiricist vein, what appears before the mind are sensory data or qualia: either patterns of one's own sensations (seeing red here now, feeling this ticklish feeling, hearing that resonant bass tone) or sensible patterns of worldly things, say, the looks and smells of flowers (what John Locke called secondary qualities of things). In a strict rationalist vein, by contrast, what appears before the mind are ideas, rationally formed "clear and distinct ideas" (in René Descartes' ideal). In Immanuel Kant's theory of knowledge, fusing rationalist and empiricist aims, what appears to the mind are phenomena defined as things-as-they-appear or things-as-they-are-represented (in a synthesis of sensory and conceptual forms of objects-as-known). In Auguste Comte's theory of science, phenomena (phenomenes) are the facts (faits, what occurs) that a given science would explain.

In 18th and 19th century epistemology, then, phenomena are the starting points in building knowledge, especially science. Accordingly, in a familiar and still current sense, phenomena are whatever we observe (perceive) and seek to explain.

As the discipline of psychology emerged late in the 19th century, however, phenomena took on a somewhat different guise. In Franz Brentano's Psychology from an Empirical Standpoint (1874), phenomena are what occur in the mind: mental phenomena are acts of consciousness (or their contents), and physical phenomena are objects of external perception starting with colours and shapes. For Brentano, physical phenomena exist "intentionally" in acts of consciousness. This view revives a Medieval notion Brentano called "intentional in-existence," but the ontology remains undeveloped (what is it to exist in the mind, and do physical objects exist only in the mind?). More generally, we might say, phenomena are whatever we are conscious of: objects and events around us, other people, ourselves, even (in reflection) our own conscious experiences, as we experience these. In a certain technical sense, phenomena are things as they are given to our consciousness, whether in perception or imagination or thought or volition. This conception of phenomena would soon inform the new discipline of phenomenology.

Brentano distinguished descriptive psychology from genetic psychology. Where genetic psychology seeks the causes of various types of mental phenomena, descriptive psychology defines and classifies the various types of mental phenomena, including perception, judgment, emotion, etc. According to Brentano, every mental phenomenon, or act of consciousness, is directed toward some object, and only mental phenomena are so directed. This thesis of intentional directedness was the hallmark of Brentano's descriptive psychology. In 1889 Brentano used the term "phenomenology" for descriptive psychology, and the way was paved for Husserl's new science of phenomenology.

Phenomenology as we know it was launched by Edmund Husserl in his Logical Investigations (1900-01). Two importantly different lines of theory came together in that monumental work: psychological theory, on the heels of Franz Brentano (and William James, whose Principles of Psychology appeared in 1890 and greatly impressed Husserl); and logical or semantic theory, on the heels of Bernard Bolzano and Husserl's contemporaries who founded modern logic, including Gottlob Frege. (Interestingly, both lines of research trace back to Aristotle, and both reached importantly new results in Husserl's day.)

Husserl's Logical Investigations was inspired by Bolzano's ideal of logic, while taking up Brentano's conception of descriptive psychology. In his Theory of Science (1835) Bolzano distinguished between subjective and objective ideas or representations (Vorstellungen). In effect Bolzano criticized Kant and before him the classical empiricists and rationalists for failing to make this sort of distinction, thereby rendering phenomena merely subjective. Logic studies objective ideas, including propositions, which in turn make up objective theories as in the sciences. Psychology would, by contrast, study subjective ideas, the concrete contents (occurrences) of mental activities in particular minds at a given time. Husserl was after both, within a single discipline. So phenomena must be reconceived as objective intentional contents (sometimes called intentional objects) of subjective acts of consciousness. Phenomenology would then study this complex of consciousness and correlated phenomena. In Ideas I (Book One, 1913) Husserl introduced two Greek words to capture his version of the Bolzanoan distinction: noesis and noema (from the Greek verb noeō, meaning to perceive, think, intend, whence the noun nous or mind). The intentional process of consciousness is called noesis, while its ideal content is called noema. The noema of an act of consciousness Husserl characterized both as an ideal meaning and as "the object as intended." Thus the phenomenon, or object-as-it-appears, becomes the noema, or object-as-it-is-intended. The interpretations of Husserl's theory of noema have been several and amount to different developments of Husserl's basic theory of intentionality. (Is the noema an aspect of the object intended, or rather a medium of intention?)

For Husserl, then, phenomenology integrates a kind of psychology with a kind of logic. It develops a descriptive or analytic psychology in that it describes and analyses types of subjective mental activity or experience, in short, acts of consciousness. Yet it develops a kind of logic - a theory of meaning (today we say logical semantics) - in that it describes and analyses objective contents of consciousness: ideas, concepts, images, propositions, in short, ideal meanings of various types that serve as intentional contents, or noematic meanings, of various types of experience. These contents are shareable by different acts of consciousness, and in that sense they are objective, ideal meanings. Following Bolzano (and to some extent the platonistic logician Hermann Lotze), Husserl opposed any reduction of logic or mathematics or science to mere psychology, to how human beings happen to think, and in the same spirit he distinguished phenomenology from mere psychology. For Husserl, phenomenology would study consciousness without reducing the objective and shareable meanings that inhabit experience to merely subjective happenstances. Ideal meaning would be the engine of intentionality in acts of consciousness.

A clear conception of phenomenology awaited Husserl's development of a clear model of intentionality. Indeed, phenomenology and the modern concept of intentionality emerged hand-in-hand in Husserl's Logical Investigations (1900-01). With theoretical foundations laid in the Investigations, Husserl would then promote the radical new science of phenomenology in Ideas. And alternative visions of phenomenology would soon follow.

Phenomenology came into its own with Husserl, much as epistemology came into its own with Descartes, and ontology or metaphysics came into its own with Aristotle on the heels of Plato. Yet phenomenology has been practised, with or without the name, for many centuries. When Hindu and Buddhist philosophers reflected on states of consciousness achieved in a variety of meditative states, they were practising phenomenology. When Descartes, Hume, and Kant characterized states of perception, thought, and imagination, they were practising phenomenology. When Brentano classified varieties of mental phenomena (defined by the directedness of consciousness), he was practising phenomenology. When William James appraised kinds of mental activity in the stream of consciousness (including their embodiment and their dependence on habit), he too was practising phenomenology. And when recent analytic philosophers of mind have addressed issues of consciousness and intentionality, they have often been practising phenomenology. Still, the discipline of phenomenology, its roots tracing back through the centuries, came to full flower in Husserl.

Husserl's work was followed by a flurry of phenomenological writing in the first half of the 20th century. The diversity of traditional phenomenology is apparent in the Encyclopaedia of Phenomenology (Kluwer Academic Publishers, 1997, Dordrecht and Boston), which features separate articles on some seven types of phenomenology. (1) Transcendental constitutive phenomenology studies how objects are constituted in pure or transcendental consciousness, setting aside questions of any relation to the natural world around us. (2) Naturalistic constitutive phenomenology studies how consciousness constitutes or takes things in the world of nature, assuming with the natural attitude that consciousness is part of nature. (3) Existential phenomenology studies concrete human existence, including our experience of free choice or action in concrete situations. (4) Generative historicist phenomenology studies how meaning, as found in our experience, is generated in historical processes of collective experience over time. (5) Genetic phenomenology studies the genesis of meanings of things within one's own stream of experience. (6) Hermeneutical phenomenology studies interpretive structures of experience, how we understand and engage things around us in our human world, including ourselves and others. (7) Realistic phenomenology studies the structure of consciousness and intentionality, assuming it occurs in a real world that is largely external to consciousness and not somehow brought into being by consciousness.

The most famous of the classical phenomenologists were Husserl, Heidegger, Sartre, and Merleau-Ponty. In these four thinkers we find different conceptions of phenomenology, different methods, and different results. A brief sketch of their differences will capture both a crucial period in the history of phenomenology and a sense of the diversity of the field of phenomenology.

In his Logical Investigations (1900-01) Husserl outlined a complex system of philosophy, moving from logic to philosophy of language, to ontology (theory of universals and parts of wholes), to a phenomenological theory of intentionality, and finally to a phenomenological theory of knowledge. Then in Ideas I (1913) he focussed squarely on phenomenology itself. Husserl defined phenomenology as "the science of the essence of consciousness," centered on the defining trait of intentionality, approached explicitly "in the first person." In this spirit, we may say phenomenology is the study of consciousness - that is, conscious experience of various types - as experienced from the first-person point of view. In this discipline we study different forms of experience just as we experience them, from the perspective of the subject living through or performing them. Thus, we characterize experiences of seeing, hearing, imagining, thinking, feeling (i.e., emotion), wishing, desiring, willing, and acting, that is, embodied volitional activities of walking, talking, cooking, carpentering, etc. However, not just any characterization of an experience will do. Phenomenological analysis of a given type of experience will feature the ways in which we ourselves would experience that form of conscious activity. And the leading property of our familiar types of experience is their intentionality, their being a consciousness of or about something, something experienced or presented or engaged in a certain way. How I see or conceptualize or understand the object I am dealing with defines the meaning of that object in my current experience. Thus, phenomenology features a study of meaning, in a wide sense that includes more than what is expressed in language.

In Ideas, Husserl presented phenomenology with a transcendental turn. In part this means that Husserl took on the Kantian idiom of "transcendental idealism," looking for conditions of the possibility of knowledge, or of consciousness generally, and arguably turning away from any reality beyond phenomena. But Husserl's transcendental turn also involved his discovery of the method of epoché (from the Greek skeptics' notion of abstaining from belief). We are to practice phenomenology, Husserl proposed, by "bracketing" the question of the existence of the natural world around us. We thereby turn our attention, in reflection, to the structure of our own conscious experience. Our first key result is the observation that each act of consciousness is a consciousness of something, that is, intentional, or directed toward something. Consider my visual experience wherein I see a tree across the square. In phenomenological reflection, we need not concern ourselves with whether the tree exists: my experience is of a tree whether or not such a tree exists. However, we do need to concern ourselves with how the object is meant or intended. I see a Eucalyptus tree, not a Yucca tree; I see the object as a Eucalyptus tree, with a certain shape, with bark stripping off, etc. Thus, bracketing the tree itself, we turn our attention to my experience of the tree, and specifically to the content or meaning in my experience. This tree-as-perceived Husserl calls the noema or noematic sense of the experience.

Philosophers succeeding Husserl debated the proper characterization of phenomenology, arguing over its results and its methods. Adolf Reinach, an early student of Husserl's (who died in World War I), argued that phenomenology should remain allied with a realist ontology, as in Husserl's Logical Investigations. Roman Ingarden, a Polish phenomenologist of the next generation, continued the resistance to Husserl's turn to transcendental idealism. For such philosophers, phenomenology should not bracket questions of being or ontology, as the method of epoché would suggest. And they were not alone. Martin Heidegger studied Husserl's early writings, worked as Assistant to Husserl in 1916, and in 1928 succeeded Husserl in the prestigious chair at the University of Freiburg. Heidegger had his own ideas about phenomenology.

In Being and Time (1927) Heidegger unfurled his rendition of phenomenology. For Heidegger, we and our activities are always "in the world," our being is being-in-the-world, so we do not study our activities by bracketing the world, rather we interpret our activities and the meaning things have for us by looking to our contextual relations to things in the world. Indeed, for Heidegger, phenomenology resolves into what he called "fundamental ontology." We must distinguish beings from their being, and we begin our investigation of the meaning of being in our own case, examining our own existence in the activity of "Dasein" (that being whose being is in each case my own). Heidegger resisted Husserl's neo-Cartesian emphasis on consciousness and subjectivity, including how perception presents things around us. By contrast, Heidegger held that our more basic ways of relating to things are in practical activities like hammering, where the phenomenology reveals our situation in a context of equipment and in being-with-others.

In Being and Time Heidegger approached phenomenology, in a quasi-poetic idiom, through the root meanings of "logos" and "phenomena," so that phenomenology is defined as the art or practice of "letting things show themselves." In Heidegger's inimitable linguistic play on the Greek roots, "phenomenology" means . . . to let that which shows itself be seen from itself in the very way in which it shows itself from itself. Here Heidegger explicitly parodies Husserl's call, "To the things themselves!", or "To the phenomena themselves!" Heidegger went on to emphasize practical forms of comportment or relating (Verhalten), as in hammering a nail, as opposed to representational forms of intentionality, as in seeing or thinking about a hammer. Being and Time developed an existential interpretation of our modes of being including, famously, our being-toward-death.

In a very different style, in clear analytical prose, in the text of a lecture course called The Basic Problems of Phenomenology (1927), Heidegger traced the question of the meaning of being from Aristotle through many other thinkers into the issues of phenomenology. Our understanding of beings and their being comes ultimately through phenomenology. Here the connection with classical issues of ontology is more apparent, and consonant with Husserl's vision in the Logical Investigations (an early source of inspiration for Heidegger). One of Heidegger's most innovative ideas was his conception of the "ground" of being, looking to modes of being more fundamental than the things around us (from trees to hammers). Heidegger questioned the contemporary concern with technology, and his writing might suggest that our scientific theories are historical artifacts that we use in technological practice, rather than systems of ideal truth (as Husserl had held). Our deep understanding of being, in our own case, comes rather from phenomenology, Heidegger held.

In the 1930s phenomenology migrated from Austrian and then German philosophy into French philosophy. The way had been paved in Marcel Proust's In Search of Lost Time, in which the narrator recounts in close detail his vivid recollections of experiences, including his famous associations with the smell of freshly baked madeleines. This sensibility to experience traces to Descartes' work, and French phenomenology has been an effort to preserve the central thrust of Descartes' insights while rejecting mind-body dualism. The experience of one's own body, or one's lived or living body, has been an important motif in many French philosophers of the 20th century.

In the novel Nausea (1936) Jean-Paul Sartre described a bizarre course of experience in which the protagonist, writing in the first person, describes how ordinary objects lose their meaning until he encounters pure being at the foot of a chestnut tree, and in that moment recovers his sense of his own freedom. In Being and Nothingness (1943, written partly while a prisoner of war), Sartre developed his conception of phenomenological ontology. Consciousness is a consciousness of objects, as Husserl had stressed. In Sartre's model of intentionality, the central player in consciousness is a phenomenon, and the occurrence of a phenomenon is just a consciousness-of-an-object. The chestnut tree I see is, for Sartre, such a phenomenon in my consciousness. Indeed, all things in the world, as we normally experience them, are phenomena, beneath or behind which lies their "being-in-itself." Consciousness, by contrast, has "being-for-itself," since everything conscious is not only a consciousness-of-its-object but also a pre-reflective consciousness-of-itself (conscience). Yet for Sartre, unlike Husserl, the formal "I" or self is nothing but a sequence of acts of consciousness, notably including radically free choices (like a Humean bundle of perceptions).

For Sartre, the practice of phenomenology proceeds by a deliberate reflection on the structure of consciousness. Sartre's method is in effect a literary style of interpretive description of different types of experience in relevant situations - a practice that does not really fit the methodological proposals of either Husserl or Heidegger, but makes good use of Sartre's great literary skill. (Sartre wrote many plays and novels and was awarded the Nobel Prize in Literature.)

Sartre's phenomenology in Being and Nothingness became the philosophical foundation for his popular philosophy of existentialism, sketched in his famous lecture "Existentialism is a Humanism" (1945). In Being and Nothingness Sartre emphasized the experience of freedom of choice, especially the project of choosing oneself, the defining pattern of one's past actions. Through vivid description of the "look" of the Other, Sartre laid groundwork for the contemporary political significance of the concept of the Other (as in other groups or ethnicities). Indeed, in The Second Sex (1949) Simone de Beauvoir, Sartre's life-long companion, launched contemporary feminism with her nuanced account of the perceived role of women as Other.

In 1940s Paris, Maurice Merleau-Ponty joined with Sartre and Beauvoir in developing phenomenology. In Phenomenology of Perception (1945) Merleau-Ponty developed a rich variety of phenomenology emphasizing the role of the body in human experience. Unlike Husserl, Heidegger, and Sartre, Merleau-Ponty looked to experimental psychology, analysing the reported experience of amputees who felt sensations in a phantom limb. Merleau-Ponty rejected both associationist psychology, focussed on correlations between sensation and stimulus, and intellectualist psychology, focussed on rational construction of the world in the mind. (Think of the behaviorist and computationalist models of mind in more recent decades of empirical psychology.) Instead, Merleau-Ponty focussed on the "body image," our experience of our own body and its significance in our activities. Extending Husserl's account of the lived body (as opposed to the physical body), Merleau-Ponty resisted the traditional Cartesian separation of mind and body. For the body image is neither in the mental realm nor in the mechanical-physical realm. Rather, my body is, as it were, me in my engaged action with things I perceive including other people.

The scope of Phenomenology of Perception is characteristic of the breadth of classical phenomenology, not least because Merleau-Ponty drew (with generosity) on Husserl, Heidegger, and Sartre while fashioning his own innovative vision of phenomenology. His phenomenology addressed the role of attention in the phenomenal field, the experience of the body, the spatiality of the body, the motility of the body, the body in sexual being and in speech, other selves, temporality, and the character of freedom so important in French existentialism. Near the end of a chapter on the cogito (Descartes' "I think, therefore I am"), Merleau-Ponty succinctly captures his embodied, existential form of phenomenology, writing: Insofar as, when I reflect on the essence of subjectivity, I find it bound up with that of the body and that of the world, this is because my existence as subjectivity [= consciousness] is merely one with my existence as a body and with the existence of the world, and because the subject that I am, when taken concretely, is inseparable from this body and this world.

In short, consciousness is embodied (in the world), and equally body is infused with consciousness (with cognition of the world).

In the decades since Husserl, Heidegger, et al. wrote, phenomenologists have dug into all these classical issues, including intentionality, temporal awareness, intersubjectivity, practical intentionality, and the social and linguistic contexts of human activity. Interpretation of historical texts by Husserl et al. has played a prominent role in this work, both because the texts are rich and difficult and because the historical dimension is itself part of the practice of continental European philosophy. Since the 1960s, philosophers trained in the methods of analytic philosophy have also dug into the foundations of phenomenology, with an eye to 20th-century work in philosophy of logic, language, and mind.

Phenomenology was already linked with logical and semantic theory in Husserl's Logical Investigations. Analytic phenomenology picks up on that connection. In particular, Dagfinn Føllesdal and J. N. Mohanty have explored historical and conceptual relations between Husserl's phenomenology and Frege's logical semantics (in Frege's "On Sense and Reference," 1892). For Frege, an expression refers to an object by way of a sense: Thus, two expressions (say, "the morning star" and "the evening star") may refer to the same object (Venus) but express different senses with different manners of presentation. For Husserl, similarly, an experience (or act of consciousness) intends or refers to an object by way of a noema or noematic sense: Consequently, two experiences may refer to the same object but have different noematic senses involving different ways of presenting the object (for example, in seeing the same object from different sides). Indeed, for Husserl, the theory of intentionality is a generalization of the theory of linguistic reference: as linguistic reference is mediated by sense, so intentional reference is mediated by noematic sense.

More recently, analytic philosophers of mind have rediscovered phenomenological issues of mental representation, intentionality, consciousness, sensory experience, intentional content, and context-of-thought. Some of these analytic philosophers of mind hark back to William James and Franz Brentano at the origins of modern psychology, and some look to empirical research in today's cognitive neuroscience. Some researchers have begun to combine phenomenological issues with issues of neuroscience and behavioural studies and mathematical modelling. Such studies will extend the methods of traditional phenomenology as the Zeitgeist moves on.

The discipline of phenomenology forms one basic field in philosophy among others. How is phenomenology distinguished from, and related to, other fields in philosophy?

Traditionally, philosophy includes at least four core fields or disciplines: ontology, epistemology, logic, and ethics. Suppose phenomenology joins that list. Consider then these elementary definitions of field: (1) Ontology is the study of beings or their being - what is. (2) Epistemology is the study of knowledge - how we know. (3) Logic is the study of valid reasoning - how to reason. (4) Ethics is the study of right and wrong - how we should act. (5) Phenomenology is the study of our experience - how we experience.

The domains of study in these five fields are clearly different, and they seem to call for different methods of study.

Philosophers have sometimes argued that one of these fields is "first philosophy,” the most fundamental discipline, on which all philosophy or all knowledge or wisdom rests. Historically (it may be argued), Socrates and Plato put ethics first, then Aristotle put metaphysics or ontology first, then Descartes put epistemology first, then Russell put logic first, and then Husserl (in his later transcendental phase) put phenomenology first.

Consider epistemology. As we saw, phenomenology helps to define the phenomena on which knowledge claims rest, according to modern epistemology. On the other hand, phenomenology itself claims to achieve knowledge about the nature of consciousness, a distinctive kind of first-person knowledge, through a form of intuition.

Consider logic. As we saw, a logical theory of meaning led Husserl into the theory of intentionality, the heart of phenomenology. On one account, phenomenology explicates the intentional or semantic force of ideal meanings, and propositional meanings are central to logical theory. But logical structure is expressed in language, either ordinary language or symbolic languages like those of predicate logic or mathematics or computer systems. It remains an important issue of debate whether and how language shapes specific forms of experience (thought, perception, emotion) and their content or meaning. So there is an important (if disputed) relation between phenomenology and logico-linguistic theory, especially philosophical logic and philosophy of language (as opposed to mathematical logic per se).

Consider ontology. Phenomenology studies (among other things) the nature of consciousness, which is a central issue in metaphysics or ontology, and one that leads into the traditional mind-body problem. Husserlian methodology would bracket the question of the existence of the surrounding world, thereby separating phenomenology from the ontology of the world. Yet Husserl's phenomenology presupposes theory about species and individuals (universals and particulars), relations of part and whole, and ideal meanings - all parts of ontology.

Now consider ethics. Phenomenology might play a role in ethics by offering analyses of the structure of will, valuing, happiness, and care for others (in empathy and sympathy). Historically, though, ethics has been on the horizon of phenomenology. Husserl largely avoided ethics in his major works, though he featured the role of practical concerns in the structure of the life-world or of Geist (spirit, or culture, as in Zeitgeist). He once delivered a course of lectures giving ethics (like logic) a basic place in philosophy, indicating the importance of the phenomenology of sympathy in grounding ethics. In Being and Time Heidegger claimed not to pursue ethics while discussing phenomena ranging from care, conscience, and guilt to "fallenness" and "authenticity" (all phenomena with theological echoes). In Being and Nothingness Sartre analysed with subtlety the logical problem of "bad faith," yet he developed an ontology of value as produced by willing in good faith (which sounds like a revised Kantian foundation for morality). Beauvoir sketched an existentialist ethics, and Sartre left unpublished notebooks on ethics. However, an explicit phenomenological approach to ethics emerged in the works of Emmanuel Levinas, a Lithuanian phenomenologist who heard Husserl and Heidegger in Freiburg before moving to Paris. In Totality and Infinity (1961), modifying themes drawn from Husserl and Heidegger, Levinas focussed on the significance of the "face" of the other, explicitly developing grounds for ethics in this range of phenomenology, writing in an impressionistic style of prose with allusions to religious experience.

Allied with ethics are political and social philosophy. Sartre and Merleau-Ponty were politically engaged in 1940s Paris, and their existential philosophies (phenomenologically based) suggest a political theory based in individual freedom. Sartre later sought an explicit blend of existentialism with Marxism. Still, political theory has remained on the borders of phenomenology. Social theory, however, has been closer to phenomenology as such. Husserl analysed the phenomenological structure of the life-world and Geist generally, including our role in social activity. Heidegger stressed social practice, which he found more primordial than individual consciousness. Alfred Schutz developed a phenomenology of the social world. Sartre continued the phenomenological appraisal of the meaning of the other, the fundamental social formation. Moving outward from phenomenological issues, Michel Foucault studied the genesis and meaning of social institutions, from prisons to insane asylums. And Jacques Derrida has long practised a kind of phenomenology of language, seeking social meaning in the "deconstruction" of wide-ranging texts. Aspects of French "poststructuralist" theory are sometimes interpreted as broadly phenomenological, but such issues are beyond the present purview.

Classical phenomenology, then, ties into certain areas of epistemology, logic, and ontology, and leads into parts of ethical, social, and political theory.

It ought to be obvious that phenomenology has a lot to say in the area called philosophy of mind. Yet the traditions of phenomenology and analytic philosophy of mind have not been closely joined, despite overlapping areas of interest. So it is appropriate to close this survey of phenomenology by addressing philosophy of mind, one of the most vigorously debated areas in recent philosophy.

The tradition of analytic philosophy began, early in the 20th century, with analyses of language, notably in the works of Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein. Then in The Concept of Mind (1949) Gilbert Ryle developed a series of analyses of language about different mental states, including sensation, belief, and will. Though Ryle is commonly deemed a philosopher of ordinary language, Ryle himself said The Concept of Mind could be called phenomenology. In effect, Ryle analysed our phenomenological understanding of mental states as reflected in ordinary language about the mind. From this linguistic phenomenology Ryle argued that Cartesian mind-body dualism involves a category mistake (the logic or grammar of mental verbs - "believe," "see," etc. - does not mean that we ascribe belief, sensation, etc., to "the ghost in the machine"). With Ryle's rejection of mind-body dualism, the mind-body problem was re-awakened: What is the ontology of mind/body, and how are mind and body related?

René Descartes, in his epoch-making Meditations on First Philosophy (1641), had argued that minds and bodies are two distinct kinds of being or substance with two distinct kinds of attributes or modes: bodies are characterized by spatiotemporal physical properties, while minds are characterized by properties of thinking (including seeing, feeling, etc.). Centuries later, phenomenology would find, with Brentano and Husserl, that mental acts are characterized by consciousness and intentionality, while natural science would find that physical systems are characterized by mass and force, ultimately by gravitational, electromagnetic, and quantum fields. Where do we find consciousness and intentionality in the quantum-electromagnetic-gravitational field that, by hypothesis, orders everything in the natural world in which we humans and our minds exist? That is the mind-body problem today. In short, phenomenology by any other name lies at the heart of the contemporary mind-body problem.

After Ryle, philosophers sought a more explicit and generally naturalistic ontology of mind. In the 1950s materialism was argued anew, urging that mental states are identical with states of the central nervous system. The classical identity theory holds that each token mental state (in a particular person's mind at a particular time) is identical with a token brain state (in that person's brain at that time). A weaker materialism holds, instead, that each type of mental state is identical with a type of brain state. But materialism does not fit comfortably with phenomenology. For it is not obvious how conscious mental states as we experience them - sensations, thoughts, emotions - can simply be the complex neural states that somehow subserve or implement them. If mental states and neural states are simply identical, in token or in type, where in our scientific theory of mind does the phenomenology occur - is it not simply replaced by neuroscience? And yet experience is part of what is to be explained by neuroscience.

In the late 1960s and 1970s the computer model of mind set in, and functionalism became the dominant model of mind. On this model, mind is not what the brain consists in (electrochemical transactions in neurons in vast complexes). Instead, mind is what brains do: their function of mediating between information coming into the organism and behaviour proceeding from the organism. Thus, a mental state is a functional state of the brain or of the human or an animal organism. More specifically, on a favourite variation of functionalism, the mind is a computing system: Mind is to brain as software is to hardware; thoughts are just programs running on the brain's "wetware." Since the 1970s the cognitive sciences - from experimental studies of cognition to neuroscience - have tended toward a mix of materialism and functionalism. Gradually, however, philosophers found that phenomenological aspects of the mind pose problems for the functionalist paradigm too.

In the early 1970s Thomas Nagel argued in "What Is It Like to Be a Bat?" (1974) that consciousness itself - especially the subjective character of what it is like to have a certain type of experience - escapes physical theory. Many philosophers pressed the case that sensory qualia - what it is like to feel pain, to see red, etc. - are not addressed or explained by a physical account of either brain structure or brain function. Consciousness has properties of its own. And yet, we know, it is closely tied to the brain. And, at some level of description, neural activities implement computation.

In the 1980s John Searle argued in Intentionality (1983) (and further in The Rediscovery of the Mind (1991)) that intentionality and consciousness are essential properties of mental states. For Searle, our brains produce mental states with properties of consciousness and intentionality, and this is all part of our biology, yet consciousness and intentionality require a "first-person" ontology. Searle also argued that computers simulate but do not have mental states characterized by intentionality. As Searle argued, a computer system has a syntax (processing symbols of certain shapes) but no semantics (the symbols lack meaning: we interpret the symbols). In this way Searle rejected both materialism and functionalism, while insisting that mind is a biological property of organisms like us: Our brains "secrete" consciousness.

The analysis of consciousness and intentionality is central to phenomenology as appraised above, and Searle's theory of intentionality reads like a modernized version of Husserl's. (Contemporary logical theory takes the form of stating truth conditions for propositions, and Searle characterizes a mental state's intentionality by specifying its "satisfaction conditions"). However, there is an important difference in background theory. For Searle explicitly assumes the basic worldview of natural science, holding that consciousness is part of nature. But Husserl explicitly brackets that assumption, and later phenomenologists - including Heidegger, Sartre, Merleau-Ponty - seem to seek a certain sanctuary for phenomenology beyond the natural sciences. And yet phenomenology itself should be largely neutral about further theories of how experience arises, notably from brain activity.

The philosophy or theory of mind overall may be factored into the following disciplines or ranges of theory relevant to mind: Phenomenology studies conscious experience as experienced, analysing the structure - the types, intentional forms and meanings, dynamics, and (certain) enabling conditions - of perception, thought, imagination, emotion, and volition and action.

Neuroscience studies the neural activities that serve as biological substrate to the various types of mental activity, including conscious experience. Neuroscience will be framed by evolutionary biology (explaining how neural phenomena evolved) and ultimately by basic physics (explaining how biological phenomena are grounded in physical phenomena). Here lie the intricacies of the natural sciences. Part of what the sciences are accountable for is the structure of experience, analysed by phenomenology.

Cultural analysis studies the social practices that help to shape or serve as cultural substrate of the various types of mental activity, including conscious experience. Here we study the import of language and other social practices. Ontology of mind studies the ontological type of mental activity in general, ranging from perception (which involves causal input from environment to experience) to volitional action (which involves causal output from volition to bodily movement).

This division of labour in the theory of mind can be seen as an extension of Brentano's original distinction between descriptive and genetic psychology. Phenomenology offers descriptive analyses of mental phenomena, while neuroscience (and wider biology and ultimately physics) offers models of explanation of what causes or gives rise to mental phenomena. Cultural theory offers analyses of social activities and their impact on experience, including ways language shapes our thought, emotion, and motivation. And ontology frames all these results within a basic scheme of the structure of the world, including our own minds.

Meanwhile, from an epistemological standpoint, all these ranges of theory about mind begin with how we observe and reason about and seek to explain phenomena we encounter in the world. And that is where phenomenology begins. Moreover, how we understand each piece of theory, including theory about mind, is central to the theory of intentionality, as it were, the semantics of thought and experience in general. And that is the heart of phenomenology.

There is potentially a rich and productive interface between neuroscience/cognitive science and psychoanalysis/psychotherapy. The two traditions, however, have evolved largely independently, based on differing sets of observations and objectives, and tend to use different conceptual frameworks and vocabularies. The distinctive contributions of each could be strengthened by finding a useful common framework for further exploration of the relations between neuroscience/cognitive science and psychoanalysis/psychotherapy.

The historical gap between neuroscience/cognitive science and psychotherapy is being productively closed by, among other things, the suggestion that recent understandings of the nervous system as a modeler and predictor bear a close and useful similarity to the concepts of projection and transference. The gap could perhaps be valuably narrowed still further by a comparison in the two traditions of the concepts of the "unconscious" and the "conscious" and the relations between the two. It is suggested that these be understood as two independent "story generators" - each with different styles of function and both operating optimally as reciprocal contributors to each other's ongoing story evolution. A parallel and comparably optimal relation might be imagined for neuroscience/cognitive science and psychotherapy.

For the sake of argument, imagine that human behaviour and all that it entails (including the experience of being a human and interacting with a world that includes other humans) is a function of the nervous system. If this were so, then there would be lots of different people who are making observations of (perhaps different) aspects of the same thing, and telling (perhaps different) stories to make sense of their observations. The list would include neuroscientists and cognitive scientists and psychologists. It would include as well psychoanalysts, psychotherapists, psychiatrists, and social workers. If we were not too fussy about credentials, it should probably include as well educators, and parents and . . . babies? Arguably, all humans, from the time they are born, spend significant measures of their time making observations of how people (others and themselves) behave and why, and telling stories to make sense of those observations.

The stories, of course, all differ from one another to greater or lesser degrees. In fact, the notion that "human behaviour and all that it entails . . . is a function of the nervous system" is itself a story used to make sense of observations by some people and not by others. It is not my intent here to try to defend this particular story, or any other story for that matter. Very much to the contrary, my intent is to explore the implications and significance of the fact that there ARE different stories and that they might be about the same (some)thing.

In so doing, I want to try to create a new story that helps to facilitate an enhanced dialogue between neuroscience/cognitive science, on the one hand, and psychotherapy, on the other. That new story is itself a story of conflicting stories within . . . what is called the "nervous system" but others are free to call the "self," "mind," "soul," or whatever best fits their own stories. What is important is the idea that multiple things, evident by their conflicts, may not in fact be disconnected and adversarial entities but could rather be fundamentally, understandably, and valuably interconnected parts of the same thing.

Many practising psychoanalysts (and psychotherapists too, I suspect) feel that the observations/stories of neuroscience/cognitive science are, at best, irrelevant to their own activities and, at worst, destructive to them, and the same probably holds in reverse for many neuroscientists/cognitive scientists. Pally clearly feels otherwise, and it is worth exploring a bit why this is so in her case. A general key, I think, is in her line "In current paradigms, the brain has intrinsic activity, is highly integrated, is interactive with the environment, and is goal-oriented, with predictions operating at every level, from lower systems to . . . the highest functions of abstract thought." Contemporary neuroscience/cognitive science has indeed uncovered an enormous complexity and richness in the nervous system, "making it not so different from how psychoanalysts (or most other people) would characterize the self, at least not in terms of complexity, potential, and vagary." Given this complexity and richness, there is substantially less reason than there once was to believe psychotherapists and neuroscientists/cognitive scientists are dealing with two fundamentally different things. Pally is, I suspect, more aware of this than many psychotherapists because she has been working closely with contemporary neuroscientists who are excited about the complexity to be found in the nervous system. And that has an important lesson, but there is an additional one at least as important in the immediate context. In 1950, two neuroscientists wrote: "The sooner we recognize the fact that the complex and higher functional Gestalts that leave the reflex physiologist dumbfounded in fact send roots down to the simplest basal functions of the CNS, the sooner we will see that the previously terminologically insurmountable barrier between the lower levels of neurophysiology and higher behavioural theory simply dissolves away."

And in 1951 another said, "I am coming more and more to the conviction that the rudiments of every behavioural mechanism will be found far down in the evolutionary scale and represented in primitive activities of the nervous system."

Neuroscience (and what came to be cognitive science) was engaged from very early on in an enterprise committed to the same kind of understanding sought by psychotherapists, but passed through a phase (roughly from the 1950's to the 1980's) when its own observations and stories were less rich in those terms. It was a period that gave rise to the notion that the nervous system was "simple" and "mechanistic," which in turn made neuroscience/cognitive science seem less relevant to those with broader concerns, perhaps even threatening and apparently adversarial if one equated the nervous system with "mind," or "self," or "soul," since mechanics seemed degrading to those ideas. Arguably, though, the period was an essential part of the evolution of the contemporary neuroscience/cognitive science story, one that laid needed groundwork for rediscovery and productive exploration of the richness of the nervous system. Psychoanalysis/psychotherapy of course went through its own story evolution over this time. That the two stories seemed remote from one another during this period was never adequate evidence that they were not about the same thing but only an expression of their needed independent evolutions.

An additional reason that Pally is comfortable with the likelihood that psychotherapists and neuroscientists/cognitive scientists are talking about the same thing is her recognition of isomorphisms (or congruities, Pulver 2003) between the two sets of stories, places where different vocabularies in fact seem to be representing the same (or quite similar) things. I am not sure I am comfortable calling these "shared assumptions" (as Pally does) since they are actually more interesting and probably more significant if they are instead instances of coming to the same ideas from different directions (as I think they are). In this case, the isomorphisms tend to imply that, rephrasing Gertrude Stein, there is indeed a there there. Regardless, Pally has entirely appropriately and, I think, usefully called attention to an important similarity between the psychotherapeutic concept of "transference" and an emerging recognition within neuroscience/cognitive science that the nervous system does not so much collect information about the world as generate a model of it, act in relation to that model, and then check incoming information against the predictions of that model. Pally's suggestion that this model reflects in part early interpersonal experiences, can be largely "unconscious," and so may cause inappropriate and troubling behaviour in current time seems entirely reasonable. So too does her suggestion that interaction with the analyst can be of some help by bringing the model to "consciousness" through the intermediary of recognizing the transference onto the analyst.

The increasing recognition of substantial complexity in the nervous system together with the presence of identifiable isomorphisms provides a solid foundation for suspecting that psychotherapists and neuroscientists/cognitive scientists are indeed talking about the same thing. But the significance of different stories for better understanding a single thing lies as much in the differences between the stories as it does in their similarities/isomorphisms, in the potential for differing and not obviously isomorphic stories productively to modify each other, yielding a new story in the process. With this thought in mind, I want to call attention to some places where the psychotherapeutic and the neuroscientific/cognitive scientific stories have edges that rub against one another rather than smoothly fitting together. And perhaps to ways each could be usefully further evolved in response to those non-isomorphisms.

Unconscious stories and "reality." Though her primary concern is with interpersonal relations, Pally clearly recognizes that transference and related psychotherapeutic phenomena are one (actually relatively small) facet of a much more general phenomenon: the creation, largely unconsciously, of stories that are not necessarily reflective of the "real world." Ambiguous figures illustrate the same general phenomenon in a much simpler case, that of visual perception. Such figures may be seen in either of two ways; they represent two "stories," with the choice between them being, at any given time, largely unconscious. More generally, a serious consideration of a wide array of neurobiological/cognitive phenomena clearly implies that, as Pally suggests, we never see "reality" but only have stories to describe it that result from processes of which we are not consciously aware.

All of this raises some quite serious philosophical questions about the meaning and usefulness of the concept of "reality." In the present context, what is important is that it is a set of questions that sometimes seem to provide an insurmountable barrier between the stories of neuroscientists/cognitive scientists, who by and large think they are dealing with reality, and psychotherapists, who feel more comfortable in more idiosyncratic and fluid spaces. In fact, neuroscience and cognitive science can proceed perfectly well in the absence of a well-defined concept of "reality" and, without being fully conscious of it, do in fact do so. And psychotherapists actually make more use of the idea of "reality" than is entirely appropriate. There is, for example, a tendency within the psychotherapeutic community to presume that unconscious stories reflect "traumas" and other historically verifiable events, while the neurobiological/cognitive science story says quite clearly that they may equally reflect predispositions whose origins reflect genetic information and hence bear little or no relation to "reality" in the sense usually meant. They may, in addition, reflect random "play" (Grobstein, 1994), putting them even further out of reach of easy historical interpretation. In short, with regard to the relation between "story" and "reality," each set of stories could usefully be modified by greater attention to the other. Differing concepts of "reality" (perhaps the very concept itself) get in the way of usefully sharing stories. The neurobiologists'/cognitive scientists' preoccupation with "reality" as an essential touchstone could valuably be lessened, and the therapist's sense of the validation of story in terms of personal and historical idiosyncrasies could be helpfully adjusted to include a sense of actual material underpinnings.

The Unconscious and the Conscious. Pally appropriately makes a distinction between the unconscious and the conscious, one that has always been fundamental to psychotherapy. Neuroscience/cognitive science has been slower to make a comparable distinction but is now rapidly beginning to catch up. Clearly some neural processes generate behaviour in the absence of awareness and intent, and others yield awareness and intent with or without accompanying behaviour. An interesting question, however, raised at a recent open discussion of the relations between neuroscience and psychoanalysis, is whether the "neurobiological unconscious" is the same thing as the "psychotherapeutic unconscious," and whether the perceived relations between the "unconscious" and the "conscious" are the same in the two sets of stories. Is this a case of an isomorphism or, perhaps more usefully, a masked difference?

An oddity of Pally's article is that she herself acknowledges that the unconscious has mechanisms for monitoring prediction errors and yet implies, both in the title of the paper and in much of its argument, that there is something special or distinctive about consciousness (or conscious processing) in its ability to correct prediction errors. And here, I think, there is evidence of a potentially useful "rubbing of edges" between the neuroscientific/cognitive scientific tradition and the psychotherapeutic one. The issue is whether one regards consciousness (or conscious processing) as somehow "superior" to the unconscious (or unconscious processing). There is a sense in Pally of an old psychotherapeutic perspective of the conscious as a mechanism for overcoming the deficiencies of the unconscious, of the conscious as the wise father/mother and the unconscious as the willful child. Actually, Pally does not quite go this far, but there is enough of a trend to illustrate the point and, without more elaboration, I do not think many neuroscientists/cognitive scientists will catch Pally's more insightful lesson. I think Pally is almost certainly correct that the interplay of the conscious and the unconscious can achieve results unachievable by the unconscious alone, but think also that neither psychotherapy nor neuroscience/cognitive science is yet in a position to say exactly why this is so. So let me take a crack here at a new story that could help with that common problem and perhaps both traditions as well.

A major and surprising lesson of comparative neuroscience, supported more recently by neuropsychology (Weiskrantz, 1986) and, more recently still, by artificial intelligence, is that an extraordinarily rich repertoire of adaptive behaviour can occur unconsciously, in the absence of awareness or intent (i.e., can be supported by unconscious neural processes). It is not only modelling of the world and prediction and error correction that can occur this way but virtually (and perhaps literally) the entire spectrum of externally observed behaviour, including fleeing from threat, approaching good things, generating novel outputs, learning from doing so, and so on.

This extraordinary terrain, discovered by neuroanatomists, electrophysiologists, neurologists, and behavioural biologists, and recently extended by others using more modern techniques, is the unconscious of which the neuroscientist/cognitive scientist speaks. It is a terrain so surprisingly rich that it creates, for some people, puzzlement about whether there is anything else at all. Moreover, it seems, at first glance, to be a totally different terrain from that of the psychotherapist, whose clinical experience reveals a territory occupied by drives, unfulfilled needs, and the detritus with which the conscious would prefer not to deal.

As indicated earlier, it is one of the great strengths of Pally's article to suggest that the two terrains may in fact turn out to be the same in many ways. But if they are the same, the question then becomes: in what ways are the "unconscious" and the "conscious" different? Where now are the "two stories"? Pally touches briefly on this point, suggesting that the two systems differ not so much (or at all?) in what they do, but rather in how they do it. This notion of two systems with different styles seems to me worth emphasizing and expanding. Unconscious processing is faster and handles many more variables simultaneously. Conscious processing is slower and handles only a few variables at a time. There are likely a host of other differences in style as well, in the handling of number, for example, and of time.

In the present context, however, perhaps the most important difference in style is one that Lacan called attention to from a clinical/philosophical perspective: the conscious (conscious processing) has as an objective a certain "coherence," that is, it attempts to create a story that makes sense simultaneously of all its parts. The unconscious, on the other hand, is much more comfortable with bits and pieces lying around with no global order. To a neurobiologist/cognitive scientist, this makes perfectly good sense. The circuitry underlying the unconscious (sub-cortical circuitry?) is an assembly of different parts organized for a large number of different specific purposes, and only secondarily linked together to try to assure some coordination. The circuitry underlying conscious processing (neo-cortical circuitry?), on the other hand, seems both to be more uniform and integrated and to have an objective for which coherence is central.

That central coherence is well illustrated by the phenomena of "positive illusions," exemplified by patients who receive a hypnotic suggestion that there is an object in a room and subsequently walk in ways that avoid the object while providing a variety of unrelated explanations for their behaviour. Similar "rationalization" is, of course, seen in schizophrenic patients and in a variety of less dramatic forms in psychotherapeutic settings. The "coherent" objective is to make a globally organized story out of the disorganized jumble, a story of (and constituting) the "self."

What all this suggests is that the mind/brain is actually organized to be constantly generating at least two different stories in two different styles. One, written by conscious processes in simpler terms, is a story of/about the "self," and is experienced as such; how such a story is constructed by neural circuitry is a question on which insights are only beginning to develop. The other is an unconscious "story" about interactions with the world, perhaps better thought of as a series of different "models" of how various actions relate to various consequences. In many ways, the latter is the grist for the former.

In this sense, we are safely back to the two-story idea that has been central to psychotherapy, but perhaps with some added sophistication deriving from neuroscience/cognitive science. In particular, there is no reason to believe that one story is "better" than the other in any definitive sense. They are different stories based on different styles of story telling, with one having advantages in certain sorts of situations (quick responses, large numbers of variables, more direct relation to immediate experiences of pain and pleasure) and the other in other sorts of situations (time for more deliberate responses, challenges amenable to handling with smaller numbers of variables, greater coherence, more ability to defer immediate gratification/judgment).

In the clinical/psychotherapeutic context, an important implication of the more neutral view of two story-tellers outlined above is that one ought not to over-value the conscious, nor to expect miracles of the process of making conscious what is unconscious. In the immediate context, the issue is this: if the unconscious is capable of "correcting prediction errors," then why appeal to the conscious to achieve this function? More generally, what is the function of that persistent aspect of psychotherapy that aspires to make the unconscious conscious? And why is it therapeutically effective when it is? Here, it is worth calling special attention to an aspect of Pally's argument that might otherwise get a bit lost in the details of her article: ". . . the therapist encourages the wife to stop and consciously consider her assumption that her husband does not properly care about her, and to effortfully consider an alternative view and inhibit her impulse to reject him back. This, in turn, creates a new type of experience, one in which he is indeed more loving, such that she can develop new predictions."

It is not, as Pally describes it, the simple act of making something conscious that is therapeutically effective. What is necessary is to consciously recompose the story (something that is made possible by its being a story with a small number of variables) and, even more important, to see if the story generates a new "type of experience" that in turn causes the development of "new predictions." The latter, I suggest, is an effect of the conscious on the unconscious, an alteration of the unconscious brought about by hearing, entertaining, and hence acting on a new story developed by the conscious. It is not "making things conscious" that is therapeutically effective; it is the exchange of stories that encourages the creation of a new story in the unconscious.

For quite different reasons, Gray (1995) earlier made a suggestion not dissimilar to Pally's, proposing that consciousness is activated when an internal model detects a prediction failure, but acknowledged he could see no reason "why the brain should generate conscious experience of any kind at all." It seems to me that, despite her title, it is not the detection of prediction errors that is important in Pally's story. Instead, it is the detection of mismatches between two stories, one unconscious and the other conscious, and the resulting opportunity for both to shape a less trouble-making new story. That, in brief, is why the brain "should generate conscious experience": to reap the benefits of having a second story teller with a different style. Paraphrasing Descartes, one might say, "I am, and I can think, therefore I can change who I am." It is not only the neurobiological "conscious" that can undergo change; it is the neurobiological "unconscious" as well.

More generally, the most effective psychotherapy requires the recognition, now rapidly emerging from neuroscience/cognitive science as well, that the brain/mind has evolved with two (or more) independent story tellers, and has done so precisely because there are advantages to having independent story tellers that generate and exchange different stories. The advantage is that each can learn from the other, and the mechanisms for conveying the stories back and forth, and for each story teller to learn from the stories of the other, are a part of our evolutionary endowment as well. The problems that bring patients into a therapist's office are problems in the breakdown of story exchange, for any of a variety of reasons, and the challenge for the therapist is to reinstate the confidence of each story teller in the value of the stories created by the other. Neither the conscious nor the unconscious is primary; they function best as an interdependent loop, with each developing its own story facilitated by the semi-independent story of the other. In such an organization, there is not only no single "real" story and no primacy for consciousness; there is only the ongoing development and, ideally, effective sharing of different stories.

There are, in the story I am outlining, implications for neuroscience/cognitive science as well. The obvious key questions are what one means (in terms of neurons and neuronal assemblies) by "stories," and in what ways their construction and representation differ in unconscious and conscious neural processing. But even more important, if the story I have outlined makes sense, what are the neural mechanisms by which unconscious and conscious stories are exchanged and by which each kind of story impacts on the other? And why (again in neural terms) does the exchange sometimes break down and fail in a way that requires a psychotherapist - an additional story teller - for its repair?

Just as the unconscious and the conscious are engaged in a process of evolving stories for separate reasons and using separate styles, so too have been and will continue to be neuroscience/cognitive science and psychotherapy. And it is valuable that both communities continue to do so. But there is every reason to believe that the different stories are indeed about the same thing, not only because of isomorphisms between the differing stories but equally because the stories of each can, if listened to, be demonstrably of value to the stories of the other. When breakdowns in story sharing occur, they require people in each community who are daring enough to listen and be affected by the stories of the other community. Pally has done us all a service as such a person. I hope my reactions to her article will help further to construct the bridge she has helped to lay, and that others will feel inclined to join in an act of collective story telling that has enormous intellectual potential and relates as well very directly to a serious social need in the mental health arena. Indeed, there are reasons to believe that an enhanced skill at hearing, respecting, and learning from differing stories about similar things would be useful in a wide array of contexts.

There is now a more satisfactory range of ideas available [in the field of consciousness studies] . . . They involve mostly quantum objects called Bose-Einstein condensates that may be capable of forming ephemeral but extended structures in the brain (Pessa). Marshall's original idea (based on the work of Fröhlich) was that the condensates that comprise the physical basis of mind form from the activity of vibrating molecules (dipoles) in nerve cell membranes. One of us (Clarke) has found theoretical evidence that the distribution of energy levels for such arrays of molecules prevents this happening in the way that Marshall first thought. However, the occurrence of similar condensates centring around the microtubules that are an important part of the structure of every cell, including nerve cells, remains a theoretical possibility (del Giudice et al.). Hameroff has pointed out that single-cell organisms such as 'paramecium' can perform quite complicated actions normally thought to need a brain. He suggests that their 'brain' is in their microtubules. Shape changes in the constituent proteins (tubulin) could subserve computational functions and would involve quantum phenomena of the sort envisaged by del Giudice. This raises the intriguing possibility that the most basic cognitive unit is provided, not by the nerve cell synapse as is usually supposed, but by the microtubular structure within cells. The underlying intuition is that the structures formed by Bose-Einstein condensates are the building blocks of mental life; in relation to perception they are models of the world, transforming a pleasant view, say, into a mental structure that represents some of the inherent qualities of that view.

We thought that, if there is anything to ideas of this sort, the quantum nature of awareness should be detectable experimentally. Holism and non-locality are features of the quantum world with no precise classical equivalents. The former presupposes that interacting systems have to be considered as wholes - you cannot deal with one part in isolation from the rest. Non-locality means, among other things, that spatial separation between its parts does not alter the requirement to deal with an interacting system holistically. If we could detect these features in relation to awareness, we would show that consciousness cannot be understood solely in terms of classical concepts.

We began our study of thought and word with an attempt to discover the relation between thought and speech at the earliest stages of phylogenetic and ontogenetic development. We found no specific interdependence between the genetic roots of thought and of word. It became plain that the inner relationship we were looking for was not a prerequisite for, but rather a product of, the historical development of human consciousness.

In animals, even in anthropoids whose speech is phonetically like human speech and whose intellect is akin to man’s, speech and thinking are not interrelated. A prelinguistic period in thought and a preintellectual period in speech undoubtedly exist also in the development of the child. Thought and word are not connected by a primary bond. A connection originates, changes, and grows in the course of the evolution of thinking and speech.

It would be wrong, however, to regard thought and speech as two unrelated processes either parallel or crossing at certain points and mechanically influencing each other. The absence of a primary bond does not mean that a connection between them can be formed only in a mechanical way. The futility of most of the earlier investigations was largely due to the assumption that thought and word were isolated, independent elements, and verbal thought the fruit of their external union.

The method of analysis based on this conception was bound to fail. It sought to explain the properties of verbal thought by breaking it up into its component elements, thought and word, neither of which, taken separately, possessed the properties of the whole. This method is not true analysis, helpful in solving concrete problems; it leads, rather, to generalisation. We compared it with the analysis of water into hydrogen and oxygen - which can result only in findings applicable to all water existing in nature, from the Pacific Ocean to a raindrop. Similarly, the statement that verbal thought is composed of intellectual processes and speech functions proper applies to all verbal thought and all its manifestations and explains none of the specific problems facing the student of verbal thought.

We tried a new approach to the subject and replaced analysis into elements by analysis into units, each of which retains in simple form all the properties of the whole. We found this unit of verbal thought in word meaning.

The meaning of a word represents such a close amalgam of thought and language that it is hard to tell whether it is a phenomenon of speech or a phenomenon of thought. A word without meaning is an empty sound; meaning, therefore, is a criterion of “word,” its indispensable component. It would seem, then, that it may be regarded as a phenomenon of speech. But from the point of view of psychology, the meaning of every word is a generalisation or a concept. And since generalisations and concepts are undeniably acts of thought, we may regard meaning as a phenomenon of thinking. It does not follow, however, that meaning formally belongs to two different spheres of psychic life. Word meaning is a phenomenon of thought only insofar as thought is embodied in speech, and of speech only insofar as speech is connected with thought and illumined by it. It is a phenomenon of verbal thought, or meaningful speech - a union of word and thought.

Our experimental investigations fully confirm this basic thesis. They not only proved that concrete study of the development of verbal thought is made possible by the use of word meaning as the analytical unit, but they also led to a further thesis, which we consider the major result of our study and which issues directly from the first: the thesis that word meanings develop. This insight must replace the postulate of the immutability of word meanings.

From the point of view of the old schools of psychology, the bond between word and meaning is an associative bond, established through the repeated simultaneous perception of a certain sound and a certain object. A word calls to mind its content as the overcoat of a friend reminds us of that friend, or a house of its inhabitants. The association between word and meaning may grow stronger or weaker, be enriched by linkage with other objects of a similar kind, spread over a wider field, or become more limited, i.e., it may undergo quantitative and external changes, but it cannot change its psychological nature. To do that, it would have to cease being an association. From that point of view, any development in word meanings is inexplicable and impossible - an implication that impeded linguistics as well as psychology. Once having committed itself to the association theory, semantics persisted in treating word meaning as an association between a word’s sound and its content. All words, from the most concrete to the most abstract, appeared to be formed in the same manner in regard to meaning, and to contain nothing peculiar to speech as such; a word made us think of its meaning just as any object might remind us of another. It is hardly surprising that semantics did not even pose the larger question of the development of word meanings. Development was reduced to changes in the associative connections between single words and single objects: A word might come to denote at first one object and then become associated with another, just as an overcoat, having changed owners, might remind us first of one person and later of another. Linguistics did not realize that in the historical evolution of language the very structure of meaning and its psychological nature also change. From primitive generalisations, verbal thought rises to the most abstract concepts. It is not merely the content of a word that changes, but the way in which reality is generalised and reflected in a word.

Equally inadequate is the association theory in explaining the development of word meanings in childhood. Here, too, it can account only for the purely external, quantitative changes in the bonds uniting word and meaning, for their enrichment and strengthening, but not for the fundamental structural and psychological changes that can and do occur in the development of language in children.

Oddly enough, the fact that associationism in general had been abandoned for some time did not seem to affect the interpretation of word and meaning. The Wuerzburg school, whose main object was to prove the impossibility of reducing thinking to a mere play of associations and to demonstrate the existence of specific laws governing the flow of thought, did not revise the association theory of word and meaning, or even recognise the need for such a revision. It freed thought from the fetters of sensation and imagery and from the laws of association, and turned it into a purely spiritual act. By so doing, it went back to the prescientific concepts of St. Augustine and Descartes and finally reached extreme subjective idealism. The psychology of thought was moving toward the ideas of Plato. Speech, at the same time, was left at the mercy of association. Even after the work of the Wuerzburg school, the connection between a word and its meaning was still considered a simple associative bond. The word was seen as the external concomitant of thought, its attire only, having no influence on its inner life. Thought and speech had never been as widely separated as during the Wuerzburg period. The overthrow of the association theory in the field of thought actually increased its sway in the field of speech.

The work of other psychologists further reinforced this trend. Selz continued to investigate thought without considering its relation to speech and came to the conclusion that man’s productive thinking and the mental operations of chimpanzees were identical in nature – so completely did he ignore the influence of words on thought.

Even Ach, who made a special study of word meaning and who tried to overcome associationism in his theory of concept formation, did not go beyond assuming the presence of “determining tendencies” operative, along with associations, in the process of concept formation. Hence, the conclusions he reached did not change the old understanding of word meaning. By identifying concept with meaning, he did not allow for development and changes in concepts. Once established, the meaning of a word was set forever; its development was completed. The same principles were taught by the very psychologists Ach attacked. To both sides, the starting point was also the end of the development of a concept; the disagreement concerned only the way in which the formation of word meanings began.

In Gestalt psychology, the situation was not very different. This school was more consistent than others in trying to surmount the general principle of associationism. Not satisfied with a partial solution of the problem, it tried to liberate thinking and speech from the rule of association and to put both under the laws of structure formation. Surprisingly, even this most progressive of modern psychological schools made no progress in the theory of thought and speech.

For one thing, it retained the complete separation of these two functions. In the light of Gestalt psychology, the relationship between thought and word appears as a simple analogy, a reduction of both to a common structural denominator. The formation of the first meaningful words of a child is seen as similar to the intellectual operations of chimpanzees in Koehler’s experiments. The word enters the structure of things and acquires a certain functional meaning, in much the same way as the stick, to the chimpanzee, becomes part of the structure of obtaining the fruit and acquires the functional meaning of tool. The connection between word and meaning is no longer regarded as a matter of simple association but as a matter of structure. That seems like a step forward. But if we look more closely at the new approach, it is easy to see that the step forward is an illusion and that we are still standing in the same place. The principle of structure is applied to all relations between things in the same sweeping, undifferentiated way as the principle of association was before it. It remains impossible to deal with the specific relations between word and meaning.

They are from the outset accepted as identical in principle with any and all other relations between things. All cats are as grey in the dusk of Gestalt psychology as in the earlier universal associationism.

While Ach sought to overcome associationism with “determining tendencies,” Gestalt psychology combatted it with the principle of structure - retaining, however, the two fundamental errors of the older theory: the assumption of the identical nature of all connections and the assumption that word meanings do not change. The old and the new psychology both assume that the development of a word’s meaning is finished as soon as it emerges. The new trends in psychology brought progress in all branches except in the study of thought and speech. Here the new principles resemble the old ones like twins.

If Gestalt psychology is at a standstill in the field of speech, it has made a big step backward in the field of thought. The Wuerzburg school at least recognised that thought had laws of its own. Gestalt psychology denies their existence. By reducing to a common structural denominator the perceptions of domestic fowl, the mental operations of chimpanzees, the first meaningful words of the child, and the conceptual thinking of the adult, it obliterates every distinction between the most elementary perception and the highest forms of thought.

This may be summed up as follows: All the psychological schools and trends overlook the cardinal point that every thought is a generalisation. They all study word and meaning without any reference to development. As long as these two conditions persist in the successive trends, there cannot be much difference in the treatment of the problem.

The discovery that word meanings evolve leads the study of thought and speech out of a blind alley. Word meanings are dynamic rather than static formations. They change as the child develops; they change also with the various ways in which thought functions.

If word meanings change in their inner nature, then the relation of thought to word also changes. To understand the dynamics of that relationship, we must supplement the genetic approach of our main study by functional analysis and examine the role of word meaning in the process of thought.

Let us consider the process of verbal thinking from the first dim stirring of a thought to its formulation. What we want to show now is not how meanings develop over long periods of time but the way they function in the live process of verbal thought. On the basis of such a functional analysis, we will be able to show also that each stage in the development of word meaning has its own particular relationship between thought and speech. Since functional problems are most readily solved by examining the highest form of a given activity, we will, for a while, put aside the problem of development and consider the relations between thought and word in the mature mind.

The leading idea in the following discussion can be reduced to this formula: The relation of thought to word is not a thing but a process, a continual movement back and forth from thought to word and from word to thought. In that process the relation of thought to word undergoes changes that may in themselves be regarded as development in the functional sense. Thought is not merely expressed in words; it comes into existence through them. Every thought tends to connect something with something else, to establish a relationship between things. Every thought moves, grows and develops, fulfils a function, solves a problem. This flow of thought occurs as an inner movement through a series of planes. An analysis of the interaction of thought and word must begin with an investigation of the different phases and planes a thought traverses before it is embodied in words.

The first thing such a study reveals is the need to distinguish between two planes of speech. Both the inner, meaningful, semantic aspect of speech and the external, phonetic aspects, though forming a true unity, have their own laws of movement. The unity of speech is a complex, not a homogeneous, unity. A number of facts in the linguistic development of the child indicate independent movement in the phonetic and the semantic spheres. We will point out two of the most important of these facts.

In mastering external speech, the child starts from one word, then connects two or three words; a little later, he advances from simple sentences to more complicated ones, and finally to coherent speech made up of series of such sentences; in other words, he proceeds from a part to the whole. In regard to meaning on the other hand, the first word of the child is a whole sentence. Semantically, the child starts from the whole, from a meaningful complex, and only later begins to master the separate semantic units, the meanings of words, and to divide his formerly undifferentiated thought into those units. The external and the semantic aspects of speech develop in opposite directions – one from the particular to the whole, from word to sentence, and the other from the whole to the particular, from sentence to word.

This in itself suffices to show how important it is to distinguish between the vocal and the semantic aspects of speech. Since they move in reverse directions, their development does not coincide, but that does not mean that they are independent of each other. On the contrary, their difference is the first stage of a close union. In fact, our example reveals their inner relatedness as clearly as it does their distinction. A child’s thought, precisely because it is born as a dim, amorphous whole, must find expression in a single word. As his thought becomes more differentiated, the child is less apt to express it in single words but constructs a composite whole. Conversely, progress in speech to the differentiated whole of a sentence helps the child’s thoughts to progress from a homogeneous whole to well-defined parts. Thought and word are not cut from one pattern. In a sense, there are more differences than likenesses between them. The structure of speech does not simply mirror the structure of thought; that is why words cannot be put on by thought like a ready-made garment. Thought undergoes many changes as it turns into speech. It does not merely find expression in speech; it finds its reality and form. The semantic and the phonetic developmental processes are essentially one, precisely because of their reverse directions.

The second, equally important fact emerges at a later period of development. Piaget demonstrated that the child uses subordinate clauses with because, although, etc., long before he grasps the structures of meaning corresponding to these syntactic forms. Grammar precedes logic. Here, too, as in our previous example, the discrepancy does not exclude union but is, in fact, necessary for union.

In adults the divergence between the semantic and the phonetic aspects of speech is even more striking. Modern, psychologically oriented linguistics is familiar with this phenomenon, especially in regard to grammatical and psychological subject and predicate. For example, in the sentence “The clock fell,” emphasis and meaning may change in different situations. Suppose I notice that the clock has stopped and ask how this happened. The answer is, “The clock fell.” Grammatical and psychological subject coincide: “The clock” is the first idea in my consciousness; “fell” is what is said about the clock. But if I hear a crash in the next room and inquire what happened, and get the same answer, subject and predicate are psychologically reversed. I know something has fallen – that is what we are talking about. “The clock” completes the idea. The sentence could be changed to: “What has fallen is the clock”; then the grammatical and the psychological subject would coincide. In the prologue to his play Duke Ernst von Schwaben, Uhland says: “Grim scenes will pass before you.” Psychologically, “will pass” is the subject. The spectator knows he will see events unfold; the additional idea, the predicate, is contained in “grim scenes.” Uhland meant, “What will pass before your eyes is a tragedy.” Any part of a sentence may become the psychological predicate, the carrier of topical emphasis; on the other hand, entirely different meanings may lie hidden behind one grammatical structure. Accord between syntactical and psychological organisation is not as prevalent as we tend to assume – rather, it is a requirement that is seldom met. Not only subject and predicate, but grammatical gender, number, case, tense, degree, etc. have their psychological doubles. A spontaneous utterance, wrong from the point of view of grammar, may have charm and aesthetic value. Absolute correctness is achieved only beyond natural language, in mathematics.
Our daily speech continually fluctuates between the ideals of mathematical and of imaginative harmony.

We will illustrate the interdependence of the semantic and the grammatical aspects of language by citing two examples that show that changes in formal structure can entail far-reaching changes in meaning.

In translating the fable “La Cigale et la Fourmi,” Krylov substituted a dragonfly for La Fontaine’s grasshopper. In French, grasshopper is feminine and therefore well suited to symbolise a light-hearted, carefree attitude. The nuance would be lost in a literal translation, since in Russian grasshopper is masculine. When he settled on the dragonfly, which is feminine in Russian, Krylov disregarded the literal meaning in favour of the grammatical form required to render La Fontaine’s thought.

Tjutchev did the same in his translation of Heine’s poem about a fir and a palm. In German fir is masculine and palm feminine, and the poem suggests the love of a man for a woman. In Russian, both trees are feminine. To retain the implication, Tjutchev replaced the fir by a masculine cedar. Lermontov, in his more literal translation of the same poem, deprived it of these poetic overtones and gave it an essentially different meaning, more abstract and generalised. One grammatical detail may, on occasion, change the whole purport of what is said.

Behind words, there is the independent grammar of thought, the syntax of word meanings. The simplest utterance, far from reflecting a constant, rigid correspondence between sound and meaning, is really a process. Verbal expressions cannot emerge fully formed but must develop gradually. This complex process of transition from meaning to sound must itself be developed and perfected. The child must learn to distinguish between semantics and phonetics and understand the nature of the difference. At first he uses verbal forms and meanings without being conscious of them as separate. The word, to the child, is an integral part of the object it denotes. Such a conception seems to be characteristic of primitive linguistic consciousness. We all know the old story about the rustic who said he wasn’t surprised that savants with all their instruments could figure out the size of stars and their course – what baffled him was how they found out their names. Simple experiments show that preschool children “explain” the names of objects by their attributes. According to them, an animal is called “cow” because it has horns, “calf” because its horns are still small, “dog” because it is small and has no horns; an object is called “car” because it is not an animal. When asked whether one could interchange the names of objects, for instance call a cow “ink,” and ink “cow,” children will answer no, “because ink is used for writing, and the cow gives milk.” An exchange of names would mean an exchange of characteristic features, so inseparable is the connection between them in the child’s mind. In one experiment, the children were told that in a game a dog would be called “cow.” Here is a typical sample of questions and answers: “Does a cow have horns?” “Yes.” “But do you not remember that the cow is really a dog? Come now, does a dog have horns?” “Sure, if it is a cow, if it is called cow, it has horns. That kind of dog has to have little horns.”

We can see how difficult it is for children to separate the name of an object from its attributes, which cling to the name when it is transferred, like possessions following their owner.

The fusion of the two planes of speech, semantic and vocal, begins to break down as the child grows older, and the distance between them gradually increases. Each stage in the development of word meanings has its own specific interrelation of the two planes. A child’s ability to communicate through language is directly related to the differentiation of word meanings in his speech and consciousness.

To understand this, we must remember a basic characteristic of the structure of word meanings. In the semantic structure of a word, we distinguish between referent and meaning; correspondingly, we distinguish a word’s nominative from its significative function. When we compare these structural and functional relations at the earliest, middle, and advanced stages of development, we find the following genetic regularity: In the beginning, only the nominative function exists, and semantically, only the objective reference; signification independent of naming, and meaning independent of reference, appear later and develop along the paths we have attempted to trace and describe.

Only when this development is completed does the child become fully able to formulate his own thought and to understand the speech of others. Until then, his usage of words coincides with that of adults in its objective reference but not in its meaning.

We must probe still deeper and explore the plane of inner speech lying beyond the semantic plane. We will discuss here some of the data of the special investigation we have made of it. The relationship of thought and word cannot be understood in all its complexity without a clear understanding of the psychological nature of inner speech. Yet, of all the problems connected with thought and language, this is perhaps the most complicated, beset as it is with terminological and other misunderstandings.

The term inner speech, or endophasy, has been applied to various phenomena, and authors argue about different things that they call by the same name. Originally, inner speech seems to have been understood as verbal memory. An example would be the silent recital of a poem known by heart. In that case, inner speech differs from vocal speech only as the idea or image of an object differs from the real object. It was in this sense that inner speech was understood by the French authors who tried to find out how words were reproduced in memory – whether as auditory, visual, motor, or synthetic images. We will see that word memory is indeed one of the constituent elements of inner speech but not all of it.

In a second interpretation, inner speech is seen as truncated external speech - as “speech minus sound” (Mueller) or “sub-vocal speech” (Watson). Bekhterev defined it as a speech reflex inhibited in its motor part. Such an explanation is by no means sufficient. Silent “pronouncing” of words is not equivalent to the total process of inner speech.

The third definition is, on the contrary, too broad. To Goldstein, the term covers everything that precedes the motor act of speaking, including Wundt’s “motives of speech” and the indefinable, non-sensory and non-motor specific speech experience, i.e., the whole interior aspect of any speech activity. It is hard to accept the equation of inner speech with an inarticulate inner experience in which the separate identifiable structural planes are dissolved without trace. This central experience is common to all linguistic activity, and for this reason alone Goldstein’s interpretation does not fit that specific, unique function that alone deserves the name of inner speech. Logically developed, Goldstein’s view must lead to the thesis that inner speech is not speech at all but rather an intellectual and affective-volitional activity, since it includes the motives of speech and the thought that is expressed in words.

To get a true picture of inner speech, one must start from the assumption that it is a specific formation, with its own laws and complex relations to the other forms of speech activity. Before we can study its relation to thought, on the one hand, and to speech, on the other, we must determine its special characteristics and function.

Inner speech is speech for oneself; external speech is for others. It would be surprising if such a basic difference in function did not affect the structure of the two kinds of speech. Absence of vocalisation per se is only a consequence of the specific nature of inner speech, which is neither an antecedent of external speech nor its reproduction in memory but is, in a sense, the opposite of external speech. The latter is the turning of thought into words, its materialisation and objectification. With inner speech, the process is reversed: Speech turns into inward thought. Consequently, their structures must differ.

The area of inner speech is one of the most difficult to investigate. It remained almost inaccessible to experiments until ways were found to apply the genetic method of experimentation. Piaget was the first to pay attention to the child’s egocentric speech and to see its theoretical significance, but he remained blind to the most important trait of egocentric speech - its genetic connection with inner speech – and this warped his interpretation of its function and structure. We made that relationship the central problem of our study and thus were able to investigate the nature of inner speech with unusual completeness. A number of considerations and observations led us to conclude that egocentric speech is a stage of development preceding inner speech: Both fulfil intellectual functions; their structures are similar; egocentric speech disappears at school age, when inner speech begins to develop. From all this we infer that one changes into the other.

If this transformation does take place, then egocentric speech provides the key to the study of inner speech. One advantage of approaching inner speech through egocentric speech is its accessibility to experimentation and observation. It is still vocalised, audible speech, i.e., external in its mode of expression, but at the same time inner speech in function and structure. To study an internal process, it is necessary to externalise it experimentally, by connecting it with some outer activity; only then is objective functional analysis possible. Egocentric speech is, in fact, a natural experiment of this type.

This method has another great advantage: Since egocentric speech can be studied at the time when some of its characteristics are waning and new ones forming, we are able to judge which traits are essential to inner speech and which are only temporary, and thus to determine the goal of this movement from egocentric to inner speech, i.e., the nature of inner speech.

Before we go on to the results obtained by this method, we will briefly discuss the nature of egocentric speech, stressing the differences between our theory and Piaget’s. Piaget contends that the child’s egocentric speech is a direct expression of the egocentrism of his thought, which in turn is a compromise between the primary autism of his thinking and its gradual socialisation. As the child grows older, autism recedes and socialisation progresses, leading to the waning of egocentrism in his thinking and speech.

In Piaget’s conception, the child in his egocentric speech does not adapt himself to the thinking of adults. His thought remains entirely egocentric; this makes his talk incomprehensible to others. Egocentric speech has no function in the child’s realistic thinking or activity; it merely accompanies them. And since it is an expression of egocentric thought, it disappears together with the child’s egocentrism. From its climax at the beginning of the child’s development, egocentric speech drops to zero on the threshold of school age. Its history is one of involution rather than evolution. It has no future.

In our conception, egocentric speech is a phenomenon of the transition from interpsychic to intrapsychic functioning, i.e., from the social, collective activity of the child to his more individualised activity - a pattern of development common to all the higher psychological functions. Speech for oneself originates through differentiation from speech for others. Since the main course of the child’s development is one of gradual individualisation, this tendency is reflected in the function and structure of his speech.

The function of egocentric speech is similar to that of inner speech: It does not merely accompany the child’s activity; it serves mental orientation, conscious understanding; it helps in overcoming difficulties; it is speech for oneself, intimately and usefully connected with the child’s thinking. Its fate is very different from that described by Piaget. Egocentric speech develops along a rising, not a declining, curve; it goes through an evolution, not an involution. In the end, it becomes inner speech.

Our hypothesis has several advantages over Piaget’s: It explains the function and development of egocentric speech and, in particular, its sudden increase when the child faces difficulties that demand consciousness and reflection – a fact uncovered by our experiments that Piaget’s theory cannot explain. But the greatest advantage of our theory is that it supplies a satisfying answer to a paradoxical situation described by Piaget himself. To Piaget, the quantitative drop in egocentric speech as the child grows older means the withering of that form of speech. If that were so, its structural peculiarities might also be expected to decline; it is hard to believe that the process would affect only its quantity, and not its inner structure. The child’s thought becomes infinitely less egocentric between the ages of three and seven. If the characteristics of egocentric speech that make it incomprehensible to others are indeed rooted in egocentrism, they should become less apparent as that form of speech becomes less frequent; egocentric speech should approach social speech and become ever more intelligible. Yet what are the facts? Is the talk of a three-year-old harder to follow than that of a seven-year-old? Our investigation established that the traits of egocentric speech that make for inscrutability are at their lowest point at three and at their peak at seven. They develop in a reverse direction to the frequency of egocentric speech. While the latter keeps declining and reaches the point of zero at school age, the structural characteristics become more pronounced.

This throws a new light on the quantitative decrease in egocentric speech, which is the cornerstone of Piaget’s thesis.

What does this decrease mean? The structural peculiarities of speech for oneself and its differentiation from external speech increase with age. What is it that diminishes? Only one of its aspects: vocalisation. Does this mean that egocentric speech as a whole is dying out? We believe that it does not, for how then could we explain the growth of the functional and structural traits of egocentric speech? On the other hand, their growth is perfectly compatible with the decrease of vocalisation - indeed, clarifies its meaning. Its rapid dwindling and the equally rapid growth of the other characteristics are contradictory in appearance only.

To explain this, let us start from an undeniable, experimentally established fact. The structural and functional qualities of egocentric speech become more marked as the child develops. At three, the difference between egocentric and social speech equals zero; at seven, we have speech that in structure and function is totally unlike social speech. A differentiation of the two speech functions has taken place. This is a fact - and facts are notoriously hard to refute.

Once we accept this, everything else falls into place. If the developing structural and functional peculiarities of egocentric speech progressively isolate it from external speech, then its vocal aspect must fade away. This is exactly what happens between three and seven years. With the progressive isolation of speech for oneself, its vocalisation becomes unnecessary and meaningless and, because of its growing structural peculiarities, also impossible. Speech for oneself cannot find expression in external speech. The more independent and autonomous egocentric speech becomes, the poorer it grows in its external manifestations. In the end it separates itself entirely from speech for others, ceases to be vocalised, and thus appears to die out.

But this is only an illusion. To interpret the sinking coefficient of egocentric speech as a sign that this kind of speech is dying out is like saying that the child stops counting when he ceases to use his fingers and starts adding in his head. In reality, behind the symptoms of dissolution lies a progressive development, the birth of a new speech form.

The decreasing vocalisation of egocentric speech denotes a developing abstraction from sound, the child’s new faculty to “think words” instead of pronouncing them. This is the positive meaning of the sinking coefficient of egocentric speech. The downward curve indicates development toward inner speech.

We can see that all the known facts about the functional, structural, and genetic characteristics of egocentric speech point to one thing: It develops in the direction of inner speech. Its developmental history can be understood only as a gradual unfolding of the traits of inner speech.

We believe that this corroborates our hypothesis about the origin and nature of egocentric speech. To turn our hypothesis into a certainty, we must devise an experiment capable of showing which of the two interpretations is correct. What are the data for this critical experiment?

Let us restate the theories between which we must decide. Piaget believes that egocentric speech stems from the insufficient socialisation of speech and that its only development is decrease and eventual death. Its culmination lies in the past. Inner speech is something new brought in from the outside along with socialisation. We believe that egocentric speech stems from the insufficient individualisation of primary social speech. Its culmination lies in the future. It develops into inner speech.

To obtain evidence for one or the other view, we must place the child alternately in experimental situations encouraging social speech and in situations discouraging it, and see how these changes affect egocentric speech. We consider this an experimentum crucis for the following reasons.

If the child’s egocentric talk results from the egocentrism of his thinking and its insufficient socialisation, then any weakening of the social elements in the experimental setup, any factor contributing to the child’s isolation from the group, must lead to a sudden increase in egocentric speech. But if the latter results from an insufficient differentiation of speech for oneself from speech for others, then the same changes must cause it to decrease.

We took as the starting point of our experiment three of Piaget’s own observations: (1) Egocentric speech occurs only in the presence of other children engaged in the same activity, and not when the child is alone; i.e., it is a collective monologue. (2) The child is under the illusion that his egocentric talk, directed to nobody, is understood by those who surround him. (3) Egocentric speech has the character of external speech: It is audible or whispered. These are certainly not chance peculiarities. From the child’s own point of view, egocentric speech is not yet separated from social speech. It occurs under the subjective and objective conditions of social speech and may be considered a correlate of the insufficient isolation of the child’s individual consciousness from the social whole.

In our first series of experiments, we tried to destroy the illusion of being understood. After measuring the child’s coefficient of egocentric speech in a situation similar to that of Piaget’s experiments, we put him into a new situation: either with deaf-mute children or with children speaking a foreign language. In all other respects the setup remained the same. The coefficient of egocentric speech dropped to zero in the majority of cases, and in the rest to one-eighth of the previous figure, on the average. This proves that the illusion of being understood is not a mere epiphenomenon of egocentric speech but is functionally connected with it. Our results must seem paradoxical from the point of view of Piaget’s theory: The weaker the child’s contact is with the group – the less the social situation forces him to adjust his thoughts to others and to use social speech – the more freely should the egocentrism of his thinking and speech manifest itself. But from the point of view of our hypothesis, the meaning of these findings is clear: Egocentric speech, springing from the lack of differentiation of speech for oneself from speech for others, disappears when the feeling of being understood, essential for social speech, is absent.

In the second series of experiments, the variable factor was the possibility of collective monologue. Having measured the child’s coefficient of egocentric speech in a situation permitting collective monologue, we put him into a situation excluding it - in a group of children who were strangers to him, or by himself at a separate table in a corner of the room; or he worked entirely alone, even the experimenter leaving the room. The results of this series agreed with the first results. The exclusion of the group monologue caused a drop in the coefficient of egocentric speech, though not such a striking one as in the first case - seldom to zero and, on the average, to one-sixth of the original figure. The different methods of precluding collective monologue were not equally effective in reducing the coefficient of egocentric speech. The trend, however, was obvious in all the variations of the experiment. The exclusion of the collective factor, instead of giving full freedom to egocentric speech, depressed it. Our hypothesis was once more confirmed.

In the third series of experiments, the variable factor was the vocal quality of egocentric speech. Just outside the laboratory where the experiment was in progress, an orchestra played so loudly, or so much noise was made, that it drowned out not only the voices of others but the child’s own; in a variant of the experiment, the child was expressly forbidden to talk loudly and allowed to talk only in whispers. Once again the coefficient of egocentric speech went down. The different methods were not equally effective, but the basic trend was invariably present.

The purpose of all three series of experiments was to eliminate those characteristics of egocentric speech that bring it close to social speech. We found that this always led to the dwindling of egocentric speech. It is logical, then, to assume that egocentric speech is a form developing out of social speech and not yet separated from it in its manifestation, though already distinct in function and structure.
