February 10, 2010

Descartes claimed that we could lay out the contours of physical reality in three-dimensional co-ordinates, and that we could derive a scientific understanding of these ideas with the aid of precise deduction. Following the publication of Isaac Newton's 'Principia Mathematica' in 1687, reductionism and mathematical modeling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.


The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile or eliminate Descartes's stark division between mind and matter became the central preoccupation of Western intellectual life.

The American pragmatists, however, dissolved the distinction between mind and matter with an appeal to ontological monism, denying that mind is some mystical awareness freed from the constraints of matter. It also seems a strong possibility that Plotinus and Whitehead connect on the issue of the creation of the sensible world, if we look at actual entities as aspects of nature's contemplation. The contemplation of nature is obviously an immensely intricate affair, involving a myriad of possibilities, and one can therefore look at actual entities as, in some sense, the basic elements of a vast and expansive process.

Sceptics have traditionally held that knowledge requires certainty, and they claim, of course, that certain knowledge is not possible. This rests in part on the principle that every effect is a consequence of an antecedent cause or causes; yet for causality to hold it is not necessary for an effect to be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, most philosophers have held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy further criteria beyond being true: whether a belief is reached by 'deduction' or 'induction', there will be criteria specifying when it is warranted. Apart from the alleged cases of self-evident truths, there must be some general principle specifying the sort of consideration that makes accepting a belief warranted to some degree.

Besides, there is another view: the absolute global view that we do not have any knowledge whatsoever. It is doubtful, however, that any philosopher has seriously entertained absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to 'the evident'; the non-evident is any belief that requires evidence in order to be warranted.

René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas. What he questioned was whether they 'corresponded' to anything beyond ideas.

All the same, Pyrrhonism and Cartesian scepticism are forms of virtual global scepticism that have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas a Cartesian sceptic will agree only that no empirical belief about anything other than one's own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. Thus the essential difference between the two views concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.

A Cartesian requires certainty, but a Pyrrhonist merely requires that the belief in question be more warranted than its negation.

Cartesian scepticism, motivated more by the arguments with which Descartes argues for scepticism than by his reply to them, holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is legitimate doubt about all such propositions, because there is no way to justifiably deny that our senses are being stimulated by some cause radically different from the objects we normally take to affect them. Hence, if the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.

Because the Pyrrhonist requires much less of a belief for it to count as knowledge than does the Cartesian, the arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that no non-evident belief is better warranted than its negation, whereas a Cartesian sceptic need only show that such beliefs fall short of certainty.

Among the many contributions that pragmatism has made to the theory of knowledge, it is possible to identify a set of shared doctrines, and also to discern two broad styles of pragmatism. Both styles hold that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.

The first style repudiates the requirement of absolute certainty for knowledge and connects knowledge with activity and practice, while still treating the traditional epistemological questions as legitimate. It allows us to express doubts about whether our practices are truth-conducive, since we pattern our lives on the significant ideas they convey to the mind, and it holds that there is enough in those questions to spark a dialectical awareness of what or who we are.

The second, more revolutionary style of pragmatism, by contrast, acknowledges no legitimate epistemological questions over and above those that arise naturally from our current cognitive practices.

It seems clear that certainty is a property that can be ascribed either to a person or to a belief. We can say that a person 'S' is certain, or we can say that a proposition 'p' is certain. The two uses can be connected by saying that 'S' has the right to be certain just in case 'p' is sufficiently warranted.

In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. Roughly, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is ever possible, either for any proposition at all or for any proposition from some suspect family (ethics, theory, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what was hitherto taken to be certain. Others include reminders of the divergence of human opinion, and of the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built. Others reject the metaphor, looking for mutual support and coherence, without foundations.

In moral theory, however, there is the view that we possess a moral sense, partially akin to the other senses, that disposes us to the approvals and disapprovals that we feel. The 'sense' in question is certainly to be distinguished from any exercise of rationality, but seems unlike sight or hearing in that its output is an attitude: it is more like a sense of humour or a sense of balance. Francis Hutcheson (1694-1746), the moral and political philosopher, was part of the 'sentimentalist' school, initiated by the 3rd Earl of Shaftesbury and opposed to such rationalists as Richard Price (1723-91) and Samuel Clarke (1675-1729). Although Hume refers approvingly to the moral sense, he develops his own version of morality as based on the passions without using the concept at all: here he anticipates Adam Smith (1723-90), the economist, theorist and moral philosopher, who finds it unnecessary to posit any extra sense. Instead, moralizing is explained by the normal operation of other sentiments, such as sympathy, regulated by various mechanisms.

In spite of the notorious difficulty of reading Kantian ethics, the distinction is clear enough: a hypothetical imperative embeds a command which is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, it lapses. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, though it can only be activated in the case of those with the stated desire.

In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: 'Act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'Act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; and (5) the formula of the 'Kingdom of Ends', which provides a model for the systematic union of different rational beings under common laws.

Even so, a categorical proposition is simply one that is not a conditional; it may be affirmative or negative. Modern opinion is wary of the distinction, since what appears categorical may vary with notation, and apparently categorical propositions may turn out to be disguised conditionals: 'X is intelligent' (categorical?) may amount to 'If X is given a range of tasks, she performs them better than many people' (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical and therefore solid come to seem, by contrast, conditional, or purely hypothetical or potential.

A field, in ordinary usage a limited area of knowledge or endeavour to which pursuits, activities and interests are confined, is also a central concept of physical theory. In this sense, a field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. That is, are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists such as Newton it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires an understanding of how forces of attraction and repulsion can be 'grounded' in the properties of the medium.
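To make the 'field value at a point' idea concrete, it can be written as a worked equation; the symbols are standard physics notation, introduced here only for illustration:

\[
\mathbf{g}(\mathbf{r}) = \frac{\mathbf{F}(\mathbf{r})}{m_{\text{test}}}, \qquad \mathbf{E}(\mathbf{r}) = \frac{\mathbf{F}(\mathbf{r})}{q_{\text{test}}},
\]

so the gravitational or electric field at a point r is simply the force per unit test mass or per unit test charge that a particle would experience if it were located there. On the 'purely potential' reading, such force-per-test-particle dispositions exhaust what the field is; on the realist reading they merely measure an underlying modification of the medium.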

The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism. Although his equal hostility to 'action at a distance' muddies the waters, the idea is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced Faraday, with whose work the physical notion became established. In his paper 'On the Physical Character of the Lines of Magnetic Force' (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether their motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.

Once again, recognition of pragmatic value brings us to the view, especially associated with the American psychologist and philosopher William James (1842-1910), that the truth of a statement can be defined in terms of the 'utility' of accepting it. Put so baldly, the view is open to the objection that there are things that are false which it may be useful to accept, and conversely there are things that are true which it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects of its possessor. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with adaptation, or with utility in the modest sense. The Wittgensteinian doctrine that meaning is use bears on the nature of belief and its relations with human attitude and emotion, and on the idea that belief connects truth on the one hand with action on the other. One way of cementing the connection is found in the idea that natural selection has adapted us as cognitive creatures because beliefs have effects: they influence how we act. Pragmatism of this sort can be found in Kant's doctrine, and it has continued to play an influential role in the theory of meaning and truth.

James (1842-1910), who with characteristic generosity exaggerated his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's personal consciousness.

From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His 'will to believe' doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.

Such an approach, nonetheless, sets James's theory of meaning apart from verificationism and its dismissal of metaphysics. Unlike the verificationists, who take cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and motor responses. Moreover, he used his method as a standard for evaluating metaphysical claims, not as a way of dismissing them as meaningless. It should also be noted that, in his more circumspect moments, James did not hold that even his broad set of consequences was exhaustive of a term's meaning. 'Theism', for example, he took to have an antecedent, definitional meaning, in addition to its important pragmatic meaning.

James's theory of truth reflects this teleological conception of cognition: a true belief is one which is compatible with our existing system of beliefs and which leads us to satisfactory interaction with the world.

Peirce's famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid: if we believe this, we expect, for instance, that litmus paper dipped into it would turn red; we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification by means of the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing, and it requires that such 'would-bes' are objective and, of course, real.
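Schematically, the clarification just described amounts to associating a concept with a set of subjunctive conditionals; the notation is illustrative only, with the litmus test the sole example taken from the text:

\[
\mathrm{Acid}(x) \;\longmapsto\; \{\, A_i(x) \;\Box\!\!\rightarrow\; R_i(x) \,\}_{i},
\]

where each \(A_i\) is an experimental action we might perform on \(x\) (e.g., dipping litmus paper into it) and \(R_i\) is the expected result (the paper turning red). Listing all such conditional expectations is, on Peirce's principle, a complete clarification of the concept, and the 'would-bes' the conditionals express are what his realism takes to be objective.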

Other opponents deny that the entities posited by the relevant discourse exist, or at least exist independently of our thinking about them. The standard example is 'idealism', the view that reality is somehow mind-correlative or mind-co-ordinated: the real objects comprising the 'external world' are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine centres on the conception that reality as we understand it is meaningful and reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution, not merely to our understanding of the 'real', but to the resulting character of what we attribute to it.

The term 'real' is most straightforwardly used when qualifying another term: a real 'x' may be contrasted with a fake, a failed 'x', a near 'x', and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to it by some doctrine or theory that we accept. The central error in thinking of reality as the totality of existence is to think of the 'unreal' as a separate domain of things, perhaps unfairly denied the benefits of existence.

The idea of the non-existence of all things is sometimes thought to be the product of a logical confusion: treating the term 'nothing' as itself a referring expression instead of a 'quantifier'. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feeling that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing is not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between 'existentialist' and 'analytic' philosophy on this point is sometimes put by saying that whereas the former is afraid of Nothing, the latter thinks that there is nothing to be afraid of.

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.

Realism names the standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), is borrowed from the 'intuitionistic' critique of classical mathematics; it proposes that the unrestricted use of the 'principle of bivalence' is the trademark of 'realism'. However, this has to overcome counter-examples in both directions: although Aquinas was a moral 'realist', he held that moral reality was not sufficiently structured to make every moral claim true or false, while Kant believed that he could use the law of bivalence happily in mathematics precisely because mathematics was only our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: the objects that surround us really exist, independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who are impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.

The modern treatment of existence in the theory of 'quantification' is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. In this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. 'This exists' is therefore unlike 'Tame tigers exist', where a property is said to have an instance, for the word 'this' does not pick out a property, but only an individual.
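Frege's dictum, as reported above, can be put in modern notation; the cardinality symbol is an addition made here for illustration:

\[
\exists x\, F(x) \;\Longleftrightarrow\; \#\{\,x : F(x)\,\} \neq 0,
\]

that is, to affirm that Fs exist is just to deny that the number of Fs is nought. The quantifier operates on the predicate \(F\), attributing instantiation to it, rather than describing any individual thing.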

Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.

There is a philosophical temptation to set the unreal alongside the real, as belonging to the domain of Being. Nonetheless, there is little that can be said about Being as such within the philosopher's study, and it is not apparent that there can be such a subject as being by itself. Nevertheless, the concept has had a central place in philosophy from Parmenides to Heidegger. The essential question, 'why is there something and not nothing?', prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.

In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains clouded. The celebrated ontological argument for the existence of God was first announced by Anselm in his Proslogion. The argument proceeds by defining God as 'something than which nothing greater can be conceived'. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we can conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but exists in reality.
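The reductio structure of Anselm's argument, as summarized above, can be set out step by step (the numbering and symbols are added here for clarity):

\[
\begin{aligned}
&1.\ g =_{df} \text{that than which nothing greater can be conceived.}\\
&2.\ g \text{ exists in the understanding, since we understand the definition.}\\
&3.\ \text{Suppose } g \text{ exists in the understanding only.}\\
&4.\ \text{What exists in reality is greater than what exists in the understanding alone.}\\
&5.\ \text{Then something greater than } g \text{ can be conceived, contradicting (1).}\\
&6.\ \text{Therefore } g \text{ exists in reality as well.}
\end{aligned}
\]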

The cosmological argument is an influential argument (or family of arguments) for the existence of God. Its premiss is that all natural things are dependent for their existence upon something else; the totality of dependent things must then itself depend upon a non-dependent, or necessarily existent, being, and that being is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.

Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So the 'God' that ends the regress must exist necessarily: it must not be an entity of which the same kinds of questions can be raised. The other problem with the argument is that of attributing concern and care to the deity, that is, of connecting the necessarily existent being it derives with human values and aspirations.

The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of 'id quo maius cogitari nequit', therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence. Its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.

In the 20th century, modal versions of the ontological argument have been propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version is to define something as unsurpassably great if it exists and is perfect in every 'possible world'. We then allow that it is at least possible that an unsurpassably great being exists. This means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in a world entails that it exists and is perfect in every world); so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we can derive 'necessarily p'. A symmetrical proof starting from the assumption that it is possible that such a being does not exist would derive that it is impossible that it exists.
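The modal step that does the work here can be displayed compactly. In the modal system S5, which such arguments standardly assume, possible necessity collapses into necessity:

\[
\Diamond\Box p \;\rightarrow\; \Box p, \qquad p = \text{'an unsurpassably great being exists'}.
\]

Conversely, since such a being would by definition exist necessarily (\(p \rightarrow \Box p\)), the symmetrical premise \(\Diamond\neg p\) yields \(\neg\Box p\), and in S5 this holds in every world, hence \(\Box\neg p\): its existence is impossible. This is why conceding the bare possibility is, as noted above, more dangerous than it looks.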

The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that, as a result of the omission, the same outcome occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine of acts and omissions, not a murderer. Critics reply that omissions can be as deliberate and immoral as commissions: if I am responsible for your food and fail to feed you, my omission is surely a killing. 'Doing nothing' can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which is permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears such general moral weight.

The principle of double effect attempts to define when an action that has both good and bad results is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one or two: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).

And therefore a form is, in some sense, available to reanimate a new body. It is not strictly I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulty at this point later led the logical positivists to abandon the notion of an epistemological foundation altogether and to flirt with the coherence theory of truth; it is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable 'myth of the given'.

The special way that we each have of knowing our own thoughts, intentions, and sensations has led many philosophers of behaviourist and functionalist tendencies to deny that there is any such special way, arguing that I know my own mind much as I know yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegelian usage, however, it came to mean universal or world history. The Enlightenment confidence that science, reason, and understanding gave history a progressive moral thread was, under the influence of the German philosopher and spreader of Romanticism Johann Gottfried Herder (1744-1803), and of Immanuel Kant, taken further, so that the philosophy of history became the detecting of a grand system: the unfolding of the evolution of human nature as witnessed in successive ages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is most readily intelligible if the world of nature and the process of thinking are in some way identified. The work of Herder, Kant, Fichte and Schelling is synthesized by Hegel: history has a plot; it is the moral development of man, equated with freedom within the state, and this in turn is the development of thought, a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, where the evolution of thinking may march in step with logical oppositions and their resolution as encountered by various systems of thought.

With revolutionary communism, in Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but placing the achievement of the goal of history in a future in which the political conditions for freedom come to exist, so that economic and political forces rather than 'reason' are in the engine room. Although such speculations upon history continued to be written (there are notable late nineteenth-century examples), large-scale speculation of this kind gradually gave way to concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, but nonetheless in some way different from the enquiry of the natural scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is an ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the Verstehen approach: understanding others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation and thereby understanding what they experienced and thought. The immediate questions concern the form of historical explanation, and the fact that general laws have no place, or at best a minor place, in the human sciences.

The theory-theory is the view that everyday attributions of intention, belief and meaning to other persons proceed via tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meanings of terms in its native language.

On the rival view, our understanding of others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation 'in their moccasins', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the 'Verstehen' tradition associated with Dilthey, Weber and Collingwood.

As noted above, on Aquinas's account a form is in some sense available to reanimate a new body: it is not strictly I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form, and a person has no privileged position of self-understanding. We understand ourselves, just as we do everything else, through sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is to be known: a human being's corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply to beings higher in the hierarchy, such as the angels.

In the domain of theology Aquinas deploys the distinction emphasized by Eriugena, and argues for the existence of God with five arguments: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, in other words something that has a necessary existence; (4) the gradations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end to which all things are directed, and the existence of this end demands a being that ordained it. These arguments lie on the side of reason: between reason and faith, Aquinas holds that reason alone can lay out proofs of the existence of God.

He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God's essence is identified with his existence, as pure actuality. God is simple, containing no potentiality. Nevertheless, we cannot obtain knowledge of what God is (his quiddity); we must remain content with descriptions that apply to him partly by way of analogy, a procedure that perhaps does the same work as the principle of charity in interpretation, by which we regulate our procedures of interpretation by maximizing the extent to which we see a subject as humanly reasonable rather than the extent to which we see the subject as right about things. What God reveals of himself is not himself as he is in himself.

An immediate problem for ethics is posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). A runaway train, or trolley, comes to a place where the track branches. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and that you as a bystander can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of the one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of others in which utilitarian reasoning seems to lead to one course of action, while a person's integrity or principles may oppose it.

Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we apply when we conceive of them as actions. We think of ourselves not only passively, as creatures to whom things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing 'by' doing another thing. Even the simple case of a shooting raises such questions: someone shoots someone on one day and in one place, and the victim then dies on another day and in another place. Where and when did the murderous act take place?

With causation, moreover, it is not clear that only events can be causally related. Kant cites the example of a cannonball at rest on a cushion, causing the cushion to be the shape that it is, to suggest that states of affairs, or objects, or facts may also be causally related. The central problem, in any case, is to understand the element of necessitation or determinacy of the future. Events, Hume thought, are in themselves 'loose and separate': how then are we to conceive of the connection between them? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining the patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider, deeper and more dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Particular puzzles about causation arise quite apart from the general problem of forming any conception of what it is: how are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or is it dispensable?

The problem of free will, nonetheless, is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be some antecedent state of nature 'N', and a law of nature 'L', such that given 'L', 'N' will be followed by 'C'. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state 'N' and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible, events before my birth, for example. So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
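The definition just given can be put schematically, using the paragraph's own letters:

\[
\forall C \;\exists N, L :\; (N \wedge L) \Rightarrow C,
\]

that is, for any event C there is an antecedent state of nature N and a law of nature L such that, given L, N is followed by C. Applying the schema to my own choosings is what generates the problem for freedom.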

Reactions to this problem are commonly classified as follows. (1) Hard determinism: this accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism, or compatibilism: reactions in this family assert that everything you need from a notion of freedom is quite compatible with determinism. In particular, even if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option). (3) Libertarianism: this is the view that, while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or suggesting that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, and that it is only through confusing them that the problem seems urgent. Whether or not these avenues succeed, it is in any case an error to confuse determinism with fatalism.

A dilemma is often posed here: if an action is the end of a causal chain that stretches back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma adds that if, on the other hand, an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its ever occurring. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.

A volition is a mental act of willing or trying whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. Theories that posit such acts are problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now needs explanation. In Kantian terms, to act in accordance with the law of autonomy, or freedom, is to act in accordance with universal moral law and regardless of selfish advantage.

A categorical imperative, as contrasted in Kantian ethics with a hypothetical imperative, embeds a command which is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire or inclination: if one has no desire to look wise, the recommendation lapses. A categorical imperative cannot be so avoided; it is a requirement that binds anybody, regardless of their inclination. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In the Grundlegung zur Metaphysik der Sitten (1785), Kant gives several forms of the categorical imperative: (1) the formula of universal law: 'Act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'Act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

A central object in the study of Kant's ethics is to understand the expressions of these inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant's own applications of the notion are not always convincing. One cause of confusion in relating Kant's ethics to theories such as 'expressivism' is that an imperative which is inescapable cannot be the expression of a mere sentiment, but must derive from something 'unconditional' or 'necessary', such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands; its needs are as basic as the need to communicate information, and sentences in animal signalling systems may often be interpreted either way. A further task is to understand the relationship between commands and other action-guiding uses of language, such as ethical discourse; the ethical theory of 'prescriptivism' in fact equates the two functions. A further question is whether there is an imperative logic. 'Hump that bale' seems to follow from 'Tote that barge and hump that bale', as 'It's raining' follows from 'It's windy and it's raining'. But it is harder to say how to handle other forms: does 'Shut the door or shut the window' follow from 'Shut the window', for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying the one command without satisfying the other, thereby turning it into a variation of ordinary deductive logic.
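The satisfaction-based treatment mentioned at the end of the paragraph can be made explicit; the '!' notation for commands is an illustrative convention, not taken from the text:

\[
!A \;\vdash\; !B \quad\text{iff}\quad \text{every way of satisfying } !A \text{ also satisfies } !B.
\]

On this criterion, 'Hump that bale' does follow from 'Tote that barge and hump that bale', since the conjunction cannot be carried out without humping the bale; and, more controversially, 'Shut the door or shut the window' follows from 'Shut the window', since shutting the window is one way of satisfying the disjunctive command. Whether that second result is acceptable is part of what makes imperative logic harder than ordinary deductive logic.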

Although the morality of people and their ethics amount to the same thing, there is a usage that restricts 'morality' to systems such as that of Kant, based on notions like duty, obligation, and principles of conduct, reserving 'ethics' for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations. The scholarly issues are complicated, however, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.

The Cartesian doubt is the method of investigating how much knowledge has its basis in reason or experience, as used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process is brought to an end in the celebrated 'Cogito ergo sum': I think, therefore I am. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries, in spite of various later hesitations in favour of social and public starting-points. The metaphysics associated with this priority is Cartesian dualism, the separation of mind and matter into two distinct but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a 'clear and distinct perception' of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, 'to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit.'

Descartes's notorious denial that non-human animals are conscious is a stark illustration of the consequences of this dualism. In his conception of matter, Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature.

Although the structure of Descartes’s epistemology, theory of mind and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, flexible to change in circumstance but outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of such behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense it may be instinctive in human beings to be social, and, given what we now know about the evolution of human language abilities, it seems clear that our real or actualized self is not imprisoned in our minds.

The self is implicitly a part of the larger whole of biological life; it derives its existence from its embedded relations to this whole, and constructs its reality on evolved mechanisms that exist in all human brains. This suggests that any sense of the ‘otherness’ of self and world is an illusion, one that disguises the fact that the self finds all its relations within the larger whole of which it is a part, and that the self, in its relation to the temporality of being whole, is a biological reality. A proper definition of this whole must include the evolution of the larger indivisible whole: the cosmos, and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating properties emerge, properties of the whole that sustain the existence of the parts.

The discussion that follows is conditioned by developments in the history of mathematics and by the exchanges between the mega-narratives and frame tales of religion and science, which were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. That revolution allowed scientists to better understand how the classical paradigm in physics resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings, but an account drawing on the idea of undivided wholeness in its implications for physical reality and for the epistemological foundations of physical theory.

The subjectivity of our mind affects our perceptions of the world that is held to be objective by natural science. One can regard both aspects, mind and matter, as individualized forms that belong to the same underlying reality.

Our everyday experience confirms the apparent fact that there is a dual-valued world of subject and object. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which is opposed to us as subjects. Physical objects are only part of the object-world: there are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Languages objectify our experience. Experience per se is pure sensation that does not make a distinction between object and subject; only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.

Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject and, in the act of self-reflection, as object. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind; our experience is already conceptualized at the time it comes into our consciousness. Our experience is negative insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects; objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject, for the subject is causally and apodictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies it; only the subject can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of dualism, as if object and subject were really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.

Cartesian dualism posits the subject and the object as separate, independent and real substances, both of which have their ground and origin in the highest substance, God. Cartesian dualism, however, contradicts itself: the very fact that Descartes posits the ‘I’, that is, the subject, as the only certainty defies materialism, and thus the concept of some ‘res extensa’. The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object; the object is derived, but the subject is original. This makes the object not only inferior in its substantive quality and essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a ‘res extensa’, and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Quite apart from the problem of interaction between these two different substances, Cartesian dualism is therefore not suited to explaining and understanding the subject-object relation.

Denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism or positivism does not resolve the problem either. What the positivists did was merely to verbalize the subject-object relation in linguistic forms: it was no longer a metaphysical problem, but only a linguistic one, our language having formed this object-subject dualism. Such thinkers are superficial, because they do not see that in the very act of their analysis they inevitably think within the mind-set of subject and object. By relativizing object and subject in terms of language and analytical philosophy, they avoid the elusive and problematical aporia of subject and object, which has been a fundamental question of philosophy ever since. Shunning these metaphysical questions is no solution; excluding something by reducing it to a more material and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of mankind.

Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives; every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining our empirical facts. Mysticism, on the other hand, holds that there is an original unity of subject and object; to attain this unity is the goal of religion and mysticism. Man has fallen from this unity by disgrace and by sinful behaviour, and his task now is to get back on track and strive toward this highest fulfilment. Yet are we not, on the conclusion reached above, forced to admit that the mystic way of thinking is also only a pattern of the mind, and that the mystics, like the scientists, have their own frame of reference and methodology with which to explain the supra-sensible facts most successfully?

If we assume mind to be the originator of the subject-object dualism, then we cannot confer more reality on the physical than on the mental aspect, nor can we deny the one in terms of the other.

The crude language of the earliest users of symbols must have consisted largely of gestures and non-symbolic vocalizations. Their spoken language probably became a relatively independent, closed cooperative system only after hominids began to use symbolic communication, with spoken symbolic forms progressively taking over functions served by non-vocal symbolic forms. This is reflected in modern languages: the structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.

The general idea is very powerful; however, the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Bound up with this is the idea of a perceivable, objective spatial world that causes ideas in the subject, whose perceptions vary with his changing position within the world and with the more or less stable way the world is. The idea that there is an objective world goes together with the idea that the subject is somewhere, where he is being given by what he can perceive.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. And it is now clear that language processing is not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules that were eventually wired together on some neural circuit board.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be explained in these terms alone. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. And Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication resulted in increasingly complex and condensed social behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

This communication was based on symbolic vocalization, which required the evolution of neural mechanisms and processes that did not evolve in any other species, and it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete description of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. And no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actual experience of that thought or feeling as an emergent aspect of global brain function.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. And while neither mode of understanding the situation displaces the other, both are required to achieve a complete understanding of the situation.

Even if we include both aspects of biological reality, movement toward a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a profound new complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. But it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be ‘real’ only when it is an ‘observed’ phenomenon, we are led to more interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case: science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront an ‘event horizon’ of knowledge where science can say nothing about the actual character of this reality. If, as it appears, this is a property of the entire universe, then we must also conclude that undivided wholeness exists at the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or ‘actualized’ in making acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the ‘indivisible’ whole. Physical theory allows us to understand why the correlations occur, but it cannot in principle disclose or describe the actual character of the indivisible whole.

The scientific implications of this extraordinary relationship between parts (qualia) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear fairly self-evident in logical and philosophical terms. And it is also not necessary to attribute any extra-scientific properties to the whole to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. In this we are careful to distinguish between what can be ‘proven’ in scientific terms and what can reasonably be ‘inferred’ in philosophical terms on the basis of the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally had expertise on only one side of a two-culture divide. Perhaps more important, many of the potential threats to the human future ~ such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation ~ can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent here is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with the small quantity of background material should feel free to ignore it. But the hope is that this material will not prove too challenging, and that it will provide a common ground for understanding on which we can meet again to close the circle on ‘wholeness’.

Human motivation and emotion have been a major topic of philosophical inquiry since Aristotle, and especially since the 17th and 18th centuries, when the ‘science of man’ began to probe them systematically. For thinkers such as the French moralistes, or Hutcheson, Hume, Smith and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and among other tendencies such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.

In some moral systems, notably that of Immanuel Kant, ‘real’ moral worth comes only with acting rightly because it is right. If you do what is right, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet that in turn seems to discount other admirable motivations, such as acting from sheer benevolence or ‘sympathy’. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without the sense of duty or rightness becoming a kind of fetish. One response stands opposed to ethics that relies on highly general and abstract principles, particularly those associated with the Kantian categorical imperative. On this view, no consideration counts in favour of any particular way of life when taken on its own; practical reasoning can only proceed by identifying the salient features of a situation that weigh on one side or another.

Moral dilemmas arouse intense concern, since they are philosophical matters that exert a profound influence on the defence of common sense. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, and the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what he or she did was as right as any alternative. It is important to the phenomenology of these cases that the action leaves a residue of guilt and remorse, even though it was not the subject’s fault that he or she faced the dilemma, so that the rationality of such emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas also exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other. If we accept that dilemmas arising from principles are real and important, this fact can be used to argue against theories, such as ‘utilitarianism’, that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that creates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.

Nevertheless, some theories of ethics see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Situational ethics and virtue ethics, by contrast, regard such laws as at best rules of thumb, frequently disguising the great complexity of practical reasoning, in contrast with the Kantian notion of the moral law.

In this context, the natural law view of the relation between law and morality is especially associated with St. Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic Church. More broadly, any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings ~ in which sense it is found in some Protestant writings ~ arguably derives from a Platonic view of ethics and the implicit influence of Stoicism. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen to hold in and for themselves by means of ‘natural usages’ or by reason itself and that, in religious versions of the theory, express God’s will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God’s will. Grotius, for instance, sides with the view that the content of natural law is independent of any will, including that of God.

The German natural law theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view. His great work was De Jure Naturae et Gentium (1672); its English translation, Of the Law of Nature and Nations, appeared in 1710. Pufendorf was influenced by Descartes, Hobbes and the scientific revolution of the 17th century; his ambition was to introduce a newly scientific, ‘mathematical’ treatment of ethics and law, free from the tainted Aristotelian underpinning of ‘scholasticism’. Like that of his contemporary Locke, his conception of natural law includes rational and religious principles, making it only a partial forerunner of the more resolutely empiricist and political treatments of the Enlightenment.

A related dilemma is explored in Plato’s dialogue Euthyphro: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option the choice of the gods creates goodness and value; even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option we have to understand a source of value lying behind or beyond the will even of the gods, and by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God’s nature, and is therefore distinct from his will, but not distinct from him.

The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call ‘good’ those things that we care about? It also generalizes to affect our understanding of the authority of other things: are the truths of mathematics, or other inescapable truths, necessary because we deem them to be so, or do we deem them to be so because they are necessary?

The natural law tradition may assume a stronger form, in which it is claimed that various facts entail values, or a weaker form, in which reason by itself is held to be capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed to be binding on all human beings, regardless of their desires.

The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed ‘synderesis’ (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St. Jerome, whose scintilla conscientiae (spark of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas as an infallible, natural, simple and immediate apprehension of the first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.

This is, nevertheless, the view of law and morality especially associated with Aquinas and the subsequent scholastic tradition. A related conservative theme holds that enthusiasm for reform for its own sake, or for ‘rational’ schemes thought up by managers and theorists, is entirely misplaced. Major exponents of this theme include the British absolute idealist Francis Herbert Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. In the idealism of Bradley there is also the doctrine that change is contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that an explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step toward this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz and was a subject of the debate between him and Newton’s absolutist pupil, Clarke.

Generally, nature is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense or of dogs to be friendly), and also to the natural world as a whole. The sense in which it applies to species quickly links up with ethical and aesthetic ideals: a thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle’s philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children and other species, not quite making it.

Nature in general can, however, function as a foil to any idea as much as a source of ideals: in this sense fallen nature is contrasted with a supposed celestial realization of the ‘forms’. The theory of forms is probably the most characteristic, and most contested, of the doctrines of Plato. In the background lie the Pythagorean conception of form as the key to physical nature, but also the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. Cratylus is famous for capping the doctrine of Heraclitus of Ephesus, the guiding idea of whose philosophy was that of the logos: it is capable of being heard or hearkened to by people, it unifies opposites, and it is somehow associated with fire, which is pre-eminent among the four elements that Heraclitus distinguishes: fire, air (breath, the stuff of which souls are composed), earth, and water. Heraclitus is principally remembered for the doctrine of the ‘flux’ of all things, and the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g. the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning, and were left to his follower Cratylus, whose conclusion was that the flux cannot be captured in words. According to Aristotle, Cratylus eventually held that since regarding that which everywhere in every respect is changing nothing can truly be affirmed, the only alternative is to stay silent and wag one’s finger. Plato’s theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.

The Galilean world view might have been expected to drain nature of its ethical content, but the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but is also associated with the progress of human history, a definition broad enough to take in many things, including ordinary human self-consciousness. That which is contrasted with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar, (2) the supernatural, or the world of gods and invisible agencies, (3) the world of rationality and intelligence, conceived as distinct from the biological and physical order, (4) that which is manufactured and artefactual, or the product of human intervention, and (5), related to that, the world of convention and artifice.

Different conceptions of nature continue to have ethical overtones: for example, the conception of ‘nature red in tooth and claw’ often provides a justification for aggressive personal and political relations, and the idea that it is women’s nature to be one thing or another is taken to be a justification for differential social expectations. The term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the ‘masculine’ self-image, itself a social variable and potentially a distorting picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In the latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, or the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. To more radical feminists, however, such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such symmetrical powers and rights.

Biological determinism holds that our genetic inheritance not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.

The philosophy of social science is more heavily intertwined with actual social science than in the case of other subjects such as physics or mathematics, since its question is centrally whether there can be such a thing as sociology. The idea of a ‘science of man’, devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by peoples’ own ideas of what should happen, and, like fashions, those ideas change in unpredictable ways as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.

The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for assessing the various genetic stories that might provide such explanations.

Among the features proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved unduly controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people’s characteristics, e.g., at the limit of silliness, by postulating a ‘gene for poverty’. However, there is no need for the approach to commit such errors, since the features explained sociobiologically may be indexed to environment: what is explained may be a propensity to develop some feature in some given environment (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative ‘just so’ stories which may or may not identify real selective mechanisms.

Subsequently, in the 19th century, attempts were made to base ethical reasoning on presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which advocated an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating the natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T.H. Huxley said that Spencer’s idea of a tragedy was a deduction killed by a fact; the writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum; and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the ‘hurdy-gurdy’ monotony of him, his whole wooden system, as if knocked together out of cracked hemlock.

The premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggles, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, our moral reactions, including the disposition to detect and punish those who cheat on agreements or free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.

For all that, an essential part of the ethics of the British absolute idealist Francis Herbert Bradley (1846-1924) rests on the ground that the self is realized only through community, and that individual self-sufficiency must give way to social and other ideals. However, truth as formulated in language is always partial and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley’s general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831).

Underlying Bradley’s case is a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. He was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. Friedrich Schelling (1775-1854) was the principal philosopher of German Romanticism; his work, particularly the System des transzendentalen Idealismus (1800), stresses force, self-consciousness, the unfolding dynamic spirit inherent in all things, and the moral striving after unattainable ideals. It is in the emphasis on art and aesthetics that Schelling is most impassioned: it is in art alone that abstraction is put aside, nature and history reconciled, and full self-consciousness attained. His Vorlesungen über die Methode des akademischen Studiums (1803) holds the absolute identity of nature and intelligence, knower and known, and is an important bridge between Kant and Fichte on the one hand, and Hegel on the other. In the final phase of his life he voiced a mystical, personal, and sombre philosophy recognized as anticipating similar notes in ‘existentialism’. Romanticism, a more general movement, drew on the same intellectual and emotional resources as German idealism, culminating in the philosophy of Hegel (1770-1831) and of absolute idealism.


Most ethics deals with problems of human desires and needs: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: the value lies in the things themselves. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but one of due humility and reverence, a moral disability. The problem is one of expressing this value, and mobilizing it against utilitarian arguments for developing natural areas and terminating species more or less at will.

Many concerns and disputes cluster around the ideas associated with the term ‘substance’. A substance may be considered: (1) in its essence, or that which makes it what it is; this will ensure that the substance of a thing is that which remains through change in its properties, and in Aristotle this essence becomes more than just the matter, but a unity of matter and form; (2) as that which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) as that which bears properties, a substance then being the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relations, and so on. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tends to disappear in empiricist thought, in favour of the sensible qualities of things, with the notion of that in which they inhere giving way to an empirical notion of their regular concurrence. This in turn is problematic, however, since it only makes sense to talk of the occurrence of instances of qualities, not of qualities themselves; so the problem of what it is for a quality to be instanced remains.

Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.

The sublime is a concept deeply embedded in 18th-century aesthetics, but its origins lie in the 1st-century rhetorical treatise On the Sublime, by Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, ‘When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder and admiration’: it finds such a difficulty in spreading itself to the dimensions of its object as enlivens and invigorates it; having overcome the opposition which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates, and from the sense of this immensity feels a noble pride, and entertains a lofty conception of its own capacity.

In Kant’s aesthetic theory the sublime ‘raises the soul above the height of vulgar complacency’. We experience the vast spectacles of nature as ‘absolutely great’ and of irresistible might. This perception is fearful, but by conquering this fear, and by regarding as small ‘those things of which we are wont to be solicitous’, we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.

Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers G. E. Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of ‘essentialism’, stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would otherwise describe as my not wearing the hat now, we are strictly not imagining me and the hat, but only some different individuals.

This doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716) that if a person had any other attributes than the ones he has, he would not have been the same person. Leibniz thought that when asked what would have happened if Peter had not denied Christ, one is really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name ‘Peter’ might be understood ‘what is involved in those attributes [of Peter] from which the denial does not follow’, so making room for external relations, that is, relations which individuals could have or lack depending upon contingent circumstances. The term ‘relations of ideas’ is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: all the objects of human reason or enquiry may naturally be divided into two kinds, ‘relations of ideas’ and ‘matters of fact’ (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known a priori must be internal to the mind, and hence transparent to us.

In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called ‘Hume’s Fork’, is a version of the distinction between the demonstrative and the probable, but it reflects the 17th- and early 18th-century view that demonstration is established by chains of intuitive comparisons of ideas. It is important that in the period between Descartes and J.S. Mill a demonstration is not a purely formal derivation, but a chain of ‘intuitive’ comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results.

A mathematical proof is a formal argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.
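
As a minimal illustration of this premise-and-conclusion structure (an example added here, not drawn from the original), consider a direct proof that the sum of two even integers is even. Premises: $m$ and $n$ are even integers, so $m = 2a$ and $n = 2b$ for some integers $a$ and $b$. Argument: $m + n = 2a + 2b = 2(a + b)$, and $a + b$ is an integer. Conclusion: $m + n$ is twice an integer, and hence even.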

The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 5th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers. But an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of one is the irrational number √2.
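
A compact reconstruction of that classical argument (added here for illustration) runs as follows: suppose $\sqrt{2} = p/q$ for whole numbers $p$ and $q$ with no common factor. Then $p^2 = 2q^2$, so $p^2$ is even and hence $p$ is even, say $p = 2r$. Substituting gives $4r^2 = 2q^2$, so $q^2 = 2r^2$ and $q$ is even as well, contradicting the assumption that $p$ and $q$ share no common factor. Hence $\sqrt{2}$ cannot be expressed as a ratio of two whole numbers.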

The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.

In the 20th century, proofs have been written that are so complex that no one person understands every argument used in them. In 1976, a computer was used to complete the proof of the four-colour theorem. This theorem states that four colours are sufficient to colour any map in such a way that regions with a common boundary line have different colours. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.
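
By way of illustration only (this toy sketch is added here and has nothing to do with the scale of the 1976 computer-assisted proof; the map and region names are invented), a backtracking search can check that a small map, given as an adjacency list of regions sharing a boundary, admits a colouring with at most four colours:

ADJACENT = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}
COLOURS = ["red", "green", "blue", "yellow"]

def four_colour(colouring=None):
    # backtracking search: give each uncoloured region a colour no neighbour
    # already has, undoing the choice if it leads to a dead end
    colouring = colouring or {}
    if len(colouring) == len(ADJACENT):
        return colouring
    region = next(r for r in ADJACENT if r not in colouring)
    for colour in COLOURS:
        if all(colouring.get(n) != colour for n in ADJACENT[region]):
            result = four_colour({**colouring, region: colour})
            if result is not None:
                return result
    return None  # no admissible colour for this region under these choices

print(four_colour())

Checking one small map this way is trivial; the force of the 1976 proof lay in the machine-checked analysis of the enormous number of configurations to which every possible planar map can be reduced.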

Proof theory is the study of the relations of deducibility among sentences in a logical calculus. Deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel’s second incompleteness theorem.

What is more, the use of a model to test for consistency in an axiomatized system is older than modern logic. Descartes’ algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar representations had been used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: proof theory studies relations of deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us only from sentences that are true under an interpretation to other sentences that are true under it? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula ‘B’ is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). Then the central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only ‘tautologies’. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
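
To make the semantic notions concrete, here is a minimal brute-force sketch (an added example, not from the text) that checks whether a formula B is a semantic consequence of premises {A1 . . . An} by testing every interpretation of the propositional variables:

from itertools import product

def interpretations(variables):
    # every assignment of True/False to the propositional variables
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def semantic_consequence(premises, conclusion, variables):
    # B is a semantic consequence of {A1 ... An} if B is true in every
    # interpretation in which all of the premises are true
    return all(conclusion(i) for i in interpretations(variables)
               if all(p(i) for p in premises))

p = lambda i: i["p"]
q = lambda i: i["q"]
p_implies_q = lambda i: (not i["p"]) or i["q"]

print(semantic_consequence([p, p_implies_q], q, ["p", "q"]))            # True: modus ponens
print(semantic_consequence([], lambda i: i["p"] or not i["p"], ["p"]))  # True: a tautology

Soundness and completeness then ask whether exactly the formulae this semantic test validates are the ones the proof theory can derive.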

Euclidean geometry is the greatest example of the pure ‘axiomatic method’, and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth axiom of the system (that parallel lines never meet) could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, providing the concepts later used by Albert Einstein in developing his general theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid’s Elements is attributed to the mathematician Eudoxus, and contains a precise development of the theory of proportion, in effect a theory of the real numbers, work which remained unappreciated until it was rediscovered in the 19th century.

An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: ‘No sentence can be true and false at the same time’ (the principle of contradiction); ‘If equals are added to equals, the sums are equal’; ‘The whole is greater than any of its parts’. Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.

The terms ‘axiom’ and ‘postulate’ are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.

The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behaviour. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.

In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analyzed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.

Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through ‘battles’ where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complex factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given ‘game’.
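
For the simplest two-player zero-sum case, the analysis can be made concrete by computing each player's security level. The Python sketch below uses an invented 2×2 payoff matrix purely for illustration; when the maximin and minimax values differ, as here, an optimal strategy would have to be mixed.

    # Sketch: maximin/minimax analysis of a two-player zero-sum game.
    # Rows are strategies for player 1, columns for player 2; entries are
    # payoffs to player 1 (player 2 receives the negative). The numbers are
    # invented and not drawn from the text.

    payoffs = [
        [3, -1],
        [0,  2],
    ]

    # Player 1 maximizes the worst case over player 2's replies.
    maximin = max(min(row) for row in payoffs)

    # Player 2 minimizes the best payoff player 1 can obtain in each column.
    minimax = min(max(payoffs[i][j] for i in range(len(payoffs)))
                  for j in range(len(payoffs[0])))

    print(maximin, minimax)  # 0 2: no saddle point, so optimal play is mixed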

In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting a term denoting only some of the things denoted by the original. For example, in ‘all dogs bark’ the term ‘dogs’ is distributed, since the proposition entails ‘all terriers bark’, which is obtained from it by such a substitution. In ‘not all dogs bark’, the same term is not distributed, since that proposition may be true while ‘not all terriers bark’ is false.

A model is a representation of one system by another, usually more familiar system, whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful ‘heuristic’ role in science, there has been intense debate over whether a good model itself explains, or whether only an organized structure of laws from which the phenomena can be deduced suffices for scientific explanation. The debate was inaugurated by the French physicist Pierre Maurice Marie Duhem (1861-1916) in ‘The Aim and Structure of Physical Theory’ (1906; trans. 1954). Duhem’s conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but does not represent the deep underlying nature of reality. He also held that no single hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, provided we are prepared to make modifications elsewhere in our system, although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.

Primary and secondary qualities are the division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material; a minimal listing includes size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought in terms of identifying these powers with the texture of objects which, according to the corpuscularian science of the time, was the basis of an object’s causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96). Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.

Modal realism is the doctrine, advocated by the American philosopher David Lewis (1941-2002), that different possible worlds are to be thought of as existing exactly as this one does. Thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.

The ‘modality’ of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things happen to be: necessary as opposed to contingent propositions. Other qualifiers sometimes called ‘modal’ include the tense indicators ‘it will be the case that p’ and ‘it was the case that p’, and there are affinities between the ‘deontic’ indicators, ‘it ought to be the case that p’ and ‘it is permissible that p’, and those of necessity and possibility.

The aim of a logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of an answer is that if we do not we contradict ourselves, or, strictly speaking, we stand ready to contradict ourselves. Someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something. However, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs. There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find forms of reasoning such that anyone failing to conform to them will have improbable beliefs. Syllogistic logic dominated the subject until the 19th century, and although the 20th century came increasingly to recognize the fine work done within that tradition, syllogistic reasoning is now generally regarded as a limited special case of the forms of reasoning that can be represented within the propositional and predicate calculus. These form the heart of modern logic: their central notions of quantifiers, variables, and functions were the creation of the German mathematician Gottlob Frege, who is recognized as the father of modern logic, although the treatment of logic as an abstract mathematical or algebraic structure had been heralded by the English mathematician and logician George Boole (1815-64), whose pamphlet The Mathematical Analysis of Logic (1847) pioneered the algebra of classes. The work was carried further in An Investigation of the Laws of Thought (1854). Boole also published many works in mathematics and on the theory of probability. His name is remembered in the title of Boolean algebra, and the algebraic operations he investigated are denoted by Boolean operations.

The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: ‘all horses have tails; all things with tails are four-legged; so all horses are four-legged’. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So in the example the first premise is the minor premise, the second the major premise, and ‘having a tail’ is the middle term. Syllogisms are classified according to the form (mood) of the premises and the conclusion, and by figure, that is, by the way in which the middle term is placed in the premises.
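
Using the traditional schematic letters (S for the minor term, P for the major term, M for the middle term), the example above instantiates the first figure in the mood AAA, traditionally called ‘Barbara’:

\[
\frac{\text{All } M \text{ are } P \qquad \text{All } S \text{ are } M}{\text{All } S \text{ are } P}
\]

with S = horses, M = things with tails, and P = four-legged things.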

Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a restricted range of valid forms of argument. There have subsequently been rearguard actions on its behalf, but in general it has been eclipsed by the modern theory of quantification: the predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In first-order predicate calculus the variables range over objects; in higher-order calculuses they may also range over predicates and functions. The first-order predicate calculus with identity includes ‘=’ as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that χ = y iff (∀F)(Fχ ↔ Fy), which gives greater expressive power, though at a cost in logical tractability.

Modal logic was of great importance historically, particularly in connection with arguments concerning the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher C.I. Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His ‘independent proofs’ showing that from a contradiction anything follows are rejected by relevance logic, which uses a notion of entailment stronger than that of strict implication.

Modal logic studies the notions of necessity and possibility, usually by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written ‘N’ and ‘M’), meaning necessarily and possibly, respectively. Theses such as p ➞ ◊p and □p ➞ p will be wanted. More controversial theses include □p ➞ □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p ➞ □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). The classical semantic theory for modal logic, due to the American logician and philosopher Saul Kripke (1940-) and the Swedish logician Stig Kanger, involves treating a proposition not as true or false simpliciter, but as true or false at possible worlds, with necessity corresponding to truth in all worlds, and possibility to truth in some world. Various different systems of modal logic result from adjusting the accessibility relation between worlds.
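
The possible-worlds truth conditions can be illustrated with a toy Kripke model. In the Python sketch below, the worlds, the accessibility relation, and the valuation of the single atom ‘p’ are all invented for illustration; nothing here is specific to any particular system such as S4 or S5.

    # Sketch: evaluating necessity and possibility in a small Kripke model.

    worlds = {"w1", "w2", "w3"}
    access = {               # accessibility relation: world -> accessible worlds
        "w1": {"w1", "w2"},
        "w2": {"w2"},
        "w3": {"w1", "w3"},
    }
    valuation = {            # at which worlds the atom 'p' is true
        "p": {"w1", "w2"},
    }

    def true_at(world, formula):
        """Formulae: 'p', ('not', f), ('box', f), ('dia', f)."""
        if isinstance(formula, str):
            return world in valuation[formula]
        op, sub = formula
        if op == "not":
            return not true_at(world, sub)
        if op == "box":   # necessarily: true at every accessible world
            return all(true_at(v, sub) for v in access[world])
        if op == "dia":   # possibly: true at some accessible world
            return any(true_at(v, sub) for v in access[world])
        raise ValueError(op)

    print(true_at("w1", ("box", "p")))  # True: w1 sees w1 and w2, where p holds
    print(true_at("w3", ("box", "p")))  # False: w3 sees w3, where p fails
    print(true_at("w3", ("dia", "p")))  # True: w3 sees w1, where p holds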

Saul Kripke gives the classical modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.

Semantics is one of the three branches into which ‘semiotic’ is usually divided: the study of the meaning of words, and of the relation of signs to the things to which they apply. In formal studies, a semantics is provided for a formal language when an interpretation or ‘model’ is specified. However, a natural language comes already interpreted, and the semantic problem is not one of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to attempt to provide a truth definition for the language, which will involve giving a full account of the bearing that expressions of different kinds have on the truth conditions of sentences containing them.

The basic case of reference is the relation between a name and the person or object which it names. The philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between me and the word ‘I’, are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke’s Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that determines the term’s contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth-conditions to sentences. Other approaches look for something more substantive, locating the relation between words and things in causal, psychological, or social facts.

However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, Berry, Richard, etc., from the purely logical paradoxes in which no such notions are involved, such as Russell’s paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although the self-reference itself is often benign (for instance, the sentence ‘All English sentences should have a verb’ includes itself happily in the domain of sentences it is talking about), so the difficulty lies in framing a condition that excludes only pathological self-reference. Paradoxes of the second kind then need a different treatment. While the distinction is convenient in allowing set theory to proceed by circumventing the latter paradoxes by technical means, even when there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that while there is no agreed solution to the semantic paradoxes, our understanding of Russell’s paradox may be imperfect as well.

Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take, and it is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. So may presupposition: a presupposition is a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge, the English philosopher and historian R.G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of ‘absolute presuppositions’ which are not themselves capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either a further truth value is found, ‘intermediate’ between truth and falsity, or classical logic is preserved but it becomes impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, the examples are equally well handled by regarding the overall sentence as false when the existence claim fails, and explaining the data that the English philosopher P.F. Strawson (1919-2006) relied upon as the effects of ‘implicature’.

Views about the meaning of terms will often depend on classifying the implications of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may as a matter of convention carry an implicature; thus one of the differences between ‘he is poor and honest’ and ‘he is poor but honest’ is that they have the same content (are true in just the same conditions), but the second carries an implicature (that the combination is surprising or significant) that the first lacks.

In classical logic, a proposition may be true or false. If the former, it is said to take the truth-value true; if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called ‘many-valued logics’.

Nevertheless, a definition of the predicate ‘. . . is true’ for a language must satisfy convention ‘T’, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). His method of ‘recursive’ definition enables us to say for each sentence what its truth consists in, but gives no single verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a ‘metalanguage’; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth-predicate. While this enables the approach to avoid the contradictions of the semantic paradoxes, it conflicts with the idea that a language should be able to say everything that there is to be said, and other approaches have become increasingly important.
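
The recursive character of such a definition can be seen in miniature. The sketch below states truth clauses for a tiny invented object language (atomic sentences, negation, conjunction) in a metalanguage that happens to be Python; it illustrates only the recursive strategy and is not Tarski’s own construction.

    # Sketch: a recursive truth definition for a toy object language, stated
    # in a metalanguage (here, Python). The atomic facts and the mini-language
    # are invented for illustration.

    facts = {"snow is white": True, "grass is red": False}

    def is_true(sentence):
        """Truth clause by clause: atoms, negation, conjunction."""
        kind = sentence[0]
        if kind == "atom":
            return facts[sentence[1]]
        if kind == "not":
            return not is_true(sentence[1])
        if kind == "and":
            return is_true(sentence[1]) and is_true(sentence[2])
        raise ValueError(kind)

    s = ("and", ("atom", "snow is white"), ("not", ("atom", "grass is red")))
    print(is_true(s))  # True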

The truth condition of a statement, then, is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white’ is that snow is white; the truth condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

Inferential semantics, by contrast, takes the role of a sentence in inference to give a more important key to its meaning than its ‘external’ relations to things in the world. The meaning of a sentence becomes its placement in a network of inferences that it legitimates. Also known as functional role semantics or procedural semantics, the view is akin to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.

Moreover, the semantic theory of truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or about truth as shared across different languages. The view is similar to the disquotational theory.

The redundancy theory, also known as the ‘deflationary view of truth’, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who also showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell’s paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. A Ramsey sentence is formed by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, and replacing the term by a variable: instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be interpretable in any domain of sufficient size, and the content of the theory may reasonably be felt to have been lost.
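
Schematically, and using an invented one-term theory purely for illustration: if the theory’s claims involving the term ‘quark’ are gathered into a single sentence T(quark), the corresponding Ramsey sentence existentially quantifies the term away,

\[
T(\text{quark}) \;\leadsto\; \exists X\, T(X),
\]

asserting only that something plays the quark role, without saying what that something is.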

All in all, both Frege and Ramsey agree that the essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, redundancy); (2) that in less direct contexts, such as ‘everything he said was true’ or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from a true proposition. For example, the second claim may be translated as ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Postmodern writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. Perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’. Discourse is to be regulated by the principle that it is wrong to assert ‘p’ when ‘not-p’.

The disquotational theory, in its simplest formulation, is the claim that expressions of the form ‘S is true’ mean the same as expressions of the form ‘S’. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ‘‘Dogs bark’ is true’, or whether they say that dogs bark. In the former representation of what they say, the sentence ‘Dogs bark’ is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that ‘Dogs bark’ is true without knowing what it means (for instance, if he found it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the ‘redundancy theory of truth’.

Validity is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for such a notion is the field of relevance logic.

From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.

But this point of view by no means embraces the whole of the actual process, for it passes over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a ‘theory’. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the ‘truth’ of the theory lies.

Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which they differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development based on the hypothesis of the hereditary transmission of acquired characters. The ‘Origin of Species’ was principally successful in marshalling the evidence for evolution, rather than in providing a convincing mechanism for genetic change; and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as ‘neo-Darwinism’ became the orthodox theory of evolution in the life sciences.

In the 19th century there arose the attempt to base ethical reasoning on the presumed facts about evolution; the movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). The premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasises the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggles, usually by enhancing competition and aggressive relations between people in society or between societies. More recently, the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

Once again, attempts have been made to found psychology upon evolutionary principles, on which a variety of higher mental functions may be adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who ‘free-ride’ on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E.O. Wilson. Its terms are applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.

Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin’s view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, cooperation appears to exist in complementary relation to competition. The results of such complementary relationships are emergent self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.

According to E.O. Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative’ to have a sense of higher purpose. Yet it is also clear that the ‘gods’ in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-view of science and religion. ‘Science for its part’, said Wilson, ‘will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments. The eventual result of the competition between the two world-views, I believe, will be the secularization of the human epic and of religion itself.’

Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide ‘comprehensible’ guides to living. In this way, Man’s imagination and intellect play vital roles in his survival and evolution.

Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of ‘logical positivist’ approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the ‘explanans’ (that which does the explaining) and the explanandum (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler’s (or Keppler, 1571-1630) laws of planetary motion were explained when shown to be deducible from Newton’s laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we make of explanations. These may include, for instance, that we have a ‘feel’ for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
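
In the standard textbook schema (not a quotation from the text above), the covering law or deductive-nomological pattern presents an explanation as a deduction of the explanandum from laws together with initial conditions:

\[
L_{1},\ldots,L_{n};\; C_{1},\ldots,C_{m} \;\vdash\; E,
\]

where the L's are laws of nature, the C's statements of initial conditions, and E the explanandum, as when an eclipse is deduced from the laws of celestial mechanics together with the positions and velocities of the relevant bodies.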

The argument to the best explanation is the view that once we can select the best of the competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
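
The arithmetic behind the coin example can be made explicit. The Python sketch below compares the likelihood of exactly 530 heads in 1,000 tosses under the ‘fair’ hypothesis and under the ‘biased at 0.53’ hypothesis; the likelihood ratio comes out at only about six to one in favour of the bias, which is why a strong prior presumption of fairness can still make suspension of judgement the more sensible attitude.

    from math import comb

    # Sketch: likelihoods of observing 530 heads in 1,000 tosses under two
    # hypotheses about the coin. Prior probabilities (not computed here) are
    # what the 'antecedent improbability' of bias refers to.

    n, k = 1000, 530

    def binomial_likelihood(p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    fair = binomial_likelihood(0.5)
    biased = binomial_likelihood(0.53)

    print(fair, biased, biased / fair)  # ratio is roughly six to one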

The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotic into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the basis of the division between syntax and semantics, and the problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.

On this conception, to understand a sentence is to know its truth-conditions, and, in a distinctive way, the conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.

The meaning of a complex expression is a function of the meanings of its constituents. This is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms ~ proper names, indexicals, and certain pronouns ~ this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.

The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom ‘‘London’ refers to the city in which there was a huge fire in 1666’ is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for the simpler axiom of our truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state this in a way which does not presuppose any previous, non-truth-conditional conception of meaning.

Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.

Since the content of the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions, but this gives us no substantive account of understanding whatsoever. Something other than the grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its conceptual claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the content of the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A.J. Ayer, the later Wittgenstein, Quine, Strawson, and Horwich, and ~ confusingly and inconsistently if this article is correct ~ Frege himself. But is the minimal theory correct?

The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as ‘‘London is beautiful’ is true if and only if London is beautiful’ can be explained are precisely that ‘London’ refers to London and that ‘is beautiful’ is true of just the beautiful things. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’.

The counterfactual conditional is sometimes known as the subjunctive conditional, insofar as a counterfactual conditional is a conditional of the form ‘if p were to take place or come about, q would’, or ‘if p were to have happened, q would have happened’, where the supposition of ‘p’ is contrary to the known fact that ‘not-p’. Such assertions are nevertheless useful: ‘if you had broken the bone, the X-ray would have looked different’, or ‘if the reactor were to fail, this mechanism would click in’, are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals (‘if the metal were to be heated, it would expand’), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever ‘p’ is false, so there would be no division between true and false counterfactuals.

Although the subjunctive form indicates counterfactuals, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: ‘If you run out of water, you will be in trouble’ seems equivalent to ‘if you were to run out of water, you would be in trouble’. In other contexts there is a big difference: ‘If Oswald did not kill Kennedy, someone else did’ is clearly true, whereas ‘if Oswald had not killed Kennedy, someone else would have’ is most probably false.

The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether ‘q’ is true in the ‘most similar’ possible worlds to ours in which ‘p’ is true. The similarity-ranking this approach needs has proved controversial, particularly since it may have to presuppose some prior notion of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is also growing awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not is of limited use.
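
As a toy rendering of the Lewis-style truth condition (with an invented set of worlds, an invented similarity ranking, and invented facts; none of this is Lewis’s own machinery), a counterfactual ‘if p were the case, q would be’ is evaluated by looking at the closest p-worlds:

    # Sketch: 'if p were the case, q would be' counts as true iff q holds in
    # the most similar world(s) to the actual world in which p holds.

    worlds = {
        "actual": {"p": False, "q": False},
        "w1":     {"p": True,  "q": True},
        "w2":     {"p": True,  "q": False},
    }

    # Lower numbers mean more similar to the actual world (hypothetical ranking).
    similarity = {"actual": 0, "w1": 1, "w2": 2}

    def counterfactual(p, q):
        """True iff q holds at every closest p-world."""
        p_worlds = [w for w, facts in worlds.items() if facts[p]]
        if not p_worlds:
            return True  # vacuously true if p holds at no world
        closest = min(similarity[w] for w in p_worlds)
        return all(worlds[w][q] for w in p_worlds if similarity[w] == closest)

    print(counterfactual("p", "q"))  # True: the closest p-world is w1, where q holds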

A conditional is any proposition of the form ‘if p then q’. The condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is material implication, which merely tells us that either ‘not-p’ or ‘q’. Stronger conditionals include elements of modality, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether this flexibility is best treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.

There are many forms of reliabilism, just as there are many forms of ‘foundationalism’ and ‘coherentism’. How is reliabilism related to these other two theories of justification? We usually regard it as a rival, and this is aptly so in so far as foundationalism and coherentism traditionally focused on purely evidential relations rather than psychological processes, but we might also offer reliabilism as a deeper-level theory, subsuming some precepts of either foundationalism or coherentism. Foundationalism says that there are ‘basic’ beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that reliable non-inferential processes have formed the basic beliefs. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Consequently, reliabilism could complement foundationalism and coherentism rather than compete with them.

These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman’s claim about local reliability and knowledge, it will not be simple. The interesting thesis that counts as a causal theory of justification, in the sense of ‘causal theory’ intended, is that a belief is justified in case it was produced by a type of process that is ‘globally’ reliable, that is, whose propensity to produce true beliefs ~ which can be defined, to an acceptable approximation, as the proportion of the beliefs it produces, or would produce were it used as much as opportunity allows, that are true ~ is sufficiently great. Variations of this view have been advanced for both knowledge and justified belief; the first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1903-30). In the theory of probability, Ramsey was the first to show how a ‘personalist’ theory could be developed, based on a precise behavioural notion of preference and expectation. In the philosophy of mathematics, much of his work was directed at saving classical mathematics from ‘intuitionism’, or what he called the ‘Bolshevik menace’ of Brouwer and Weyl. In the philosophy of language, Ramsey was one of the first to espouse a redundancy theory of truth, which he combined with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was one of the earliest commentators on the early work of Wittgenstein, and it was his continuing friendship that led to Wittgenstein’s return to Cambridge and to philosophy in 1929.

A Ramsey sentence is the sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of a nomic, counterfactual or similar ‘external’ relation between belief and truth. Closely allied is the nomic sufficiency account of knowledge, primarily due to Dretske (1971, 1981). The core of this approach is that X’s belief that ‘p’ qualifies as knowledge just in case ‘X’ believes ‘p’ because of reasons that would not obtain unless ‘p’ were true, or because of a process or method that would not yield belief in ‘p’ if ‘p’ were not true. For example, ‘X’ would not have its current reasons for believing there is a telephone before it, or would not come to believe this in the way it does, unless there was a telephone before it; thus, there is a counterfactually reliable guarantor of the belief’s being true. A related counterfactual approach says that ‘X’ knows that ‘p’ only if there is no ‘relevant alternative’ situation in which ‘p’ is false but ‘X’ would still believe that ‘p’. On this view, one’s justification or evidence for ‘p’ must be sufficient to eliminate all the alternatives to ‘p’, where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’; that is, one’s justification or evidence for ‘p’ must be sufficient for one to know that every alternative to ‘p’ is false. Sceptical arguments have exploited this element of our thinking about knowledge. These arguments call our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this kind that we cannot eliminate, and others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement of eliminating every alternative is seldom, if ever, satisfied.

All the same, the distinction between the ‘in itself’ and the ‘for itself’ originated in the Kantian logical and epistemological distinction between a thing as it is in itself and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. ‘Now a thing in itself cannot be known through mere relations: and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself’. Kant applies this same distinction to the subject’s cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus as it is related to its own self, it represents itself ‘as it appears to itself, not as it is’. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject’s own knowledge of itself.

Hegel (1770-1831) begins the transformation of this epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what something is in itself necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And, just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, so the being-for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant, which involves actual relations among the plant’s various organs, is the plant ‘for itself’. In Hegel, then, the in-itself/for-itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being-in-itself of the plant, or the plant as potential adult, is ontologically distinct from the being-for-itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction. To know a thing, it is necessary to know both the actual, explicit self-relations which mark the thing (the being-for-itself of the thing) and the inherent, simpler principle of these relations, or the being-in-itself of the thing. Real knowledge, for Hegel, thus consists in a knowledge of the thing as it is in and for itself.

Sartre’s distinction between ‘being in itself’ and ‘being for itself’, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent being which is intended by consciousness, i.e., being in itself. What it is for consciousness to be, being for itself, is marked by self-relation. Sartre posits a ‘pre-reflective Cogito’, such that every consciousness of ‘χ’ necessarily involves a ‘non-positional’ consciousness of the consciousness of ‘χ’. While in Kant every subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered both in itself and for itself, in Sartre, to be self-related or for itself is the distinctive ontological mark of consciousness, while to lack relations or to be in itself is the distinctive ontological mark of non-conscious entities.

This conclusion conflicts with another strand in our thinking about knowledge: that we know many things. Thus, there is a tension in our ordinary thinking about knowledge: we believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.

If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. We can view the theory of relevant alternatives as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

Turning to the theory of knowledge, its central questions include the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so, the relationship between knowledge and certainty and between knowledge and the impossibility of error, the possibility of universal scepticism, and the changing forms of knowledge that arise from new conceptualizations of the world. All these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning. Epistemology can be seen as dominated by two rival metaphors. One is that of a building or pyramid, built on foundations. In this conception it is the job of the philosopher to describe especially secure foundations, and to identify secure modes of construction, so that the resulting edifice can be shown to be sound. On this metaphor, knowledge must be regarded as a structure raised upon secure, certain foundations, with a rationally defensible theory of confirmation and inference as the method of construction. These foundations are found in some combination of experience and reason, with different schools (empiricism, rationalism) emphasizing the role of one over that of the other. Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who found his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation, and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.

The other metaphor is that of a boat or fuselage, which has no foundation but owes its strength to the stability given by its interlocking parts. This rejects the idea of a basis in the ‘given’ in favour of the ideas of coherence and holism, but finds it harder to ward off scepticism. In spite of these concerns, there remains the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts, a problem that began with Plato’s view in the ‘Theaetetus’ that knowledge is true belief together with some logos. Naturalized epistemology is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, or proof against ‘scepticism’, or even apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for ‘external’ or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Although the terms are modern, exponents of the approach include Aristotle, Hume, and J. S. Mill.

The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers at present subscribe to it. It places too much confidence in the possibility of a purely a priori ‘first philosophy’, or standpoint beyond that of the working practitioners, from which they can measure their best efforts as good or bad. This point of view now seems to many philosophers to be a fantasy. The more modest task actually adopted is to investigate the presuppositions of a particular field at a particular time, at various historical stages of investigation into different areas, with the aim not so much of criticizing as of systematizing them. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often seem more like political bids for ascendancy within a discipline.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin’s theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. Once upon a time, for example, a mutation occurred in a human population in tropical Africa that changed the hemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.

Chance can influence the outcome at each stage: first, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual’s actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analyzed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean ‘Does natural selection always take the best path for the long-term welfare of a species?’, the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean ‘Does natural selection create every adaptation that would be valuable?’, the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not mean that it will evolve.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin’s theory of biological natural selection. The three major components of the model of natural selection are variation, selection and retention. According to Darwin’s theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected; it is the selection, not any antecedent design, that is responsible for the appearance that variations were produced with a purpose in view. In the modern theory of evolution, genetic mutations provide the blind variations (‘blind’ in the sense that variations are not influenced by the effects they would have ~ the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism), the environment provides the filter of selection, and reproduction provides the retention. Fitness is achieved because organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features better adapted. Evolutionary epistemology applies this blind-variation and selective-retention model to the growth of scientific knowledge and to human thought processes in general.
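Purely as an illustration of the abstract variation-selection-retention model just described ~ not of anything argued in the text ~ the following minimal sketch shows the loop in code. The numeric ‘trait’, the fitness function, the population size and the mutation rate are arbitrary assumptions chosen only to make the example runnable.

```python
import random

# Illustrative sketch of blind variation and selective retention.
# The "trait" is a number; "fitness" is an arbitrary stand-in for adaptedness.

def fitness(trait: float) -> float:
    # Hypothetical environment: traits closer to 10.0 count as better adapted.
    return -abs(trait - 10.0)

def evolve(generations: int = 50, pop_size: int = 30) -> list[float]:
    population = [random.uniform(0.0, 20.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Blind variation: mutations are not correlated with their benefits.
        varied = [t + random.gauss(0.0, 1.0) for t in population]
        # Selection: the environment filters out the less adapted half.
        varied.sort(key=fitness, reverse=True)
        survivors = varied[: pop_size // 2]
        # Retention: reproduction passes the retained traits on.
        population = survivors + [random.choice(survivors)
                                  for _ in range(pop_size - len(survivors))]
    return population

if __name__ == "__main__":
    final = evolve()
    print(f"mean trait after selection: {sum(final) / len(final):.2f}")
```

Nothing in this loop ‘aims’ at the value 10.0; the appearance of design is produced entirely by the selective filter, which is the point of the blindness claim above.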

The parallel between biological evolution and conceptual or ‘epistemic’ evolution can be taken as either literal or analogical. The literal version of evolutionary epistemology holds that biological evolution is the main cause of the growth of knowledge. On this view, called the ‘evolution of cognitive mechanisms program’ by Bradie (1986) and the ‘Darwinian approach to epistemology’ by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms which guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).

The value placed upon innate ideas can be approached by considering how these have been variously defined by philosophers: either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at a particular time, e.g., as babies (the dispositional sense). Understood in either way, they were invoked to account for our recognition of certain truths, such as those of mathematics, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate, and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our idea of God, for example, is taken as a source for the meaning of the word ‘God’. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky’s influential account of the mind’s linguistic capacities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. In Plato, the recollection of knowledge, possibly obtained in a previous state of existence, is most famously broached in the dialogue ‘Meno’, and the doctrine is one attempt to account for the ‘innate’, unlearned character of knowledge of first principles. Since there was no plausible post-natal source, the recollection must allude to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were truths innate in human beings, and that it was the senses which hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages and in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of a God who must necessarily exist is, Descartes held, logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, added considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account were powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing necessary truths as analytic. Kant’s refinement of the classification of propositions with the fourfold distinction of analytic/synthetic and a priori/a posteriori did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and providing a basis for regarding some propositions as necessarily true.

Chomsky’s revival of the term in connection with his account of language acquisition has once more made the issue topical. He claims that the principles of language and ‘natural logic’ are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strongly dispositional sense ~ so strongly that it is far from clear that Chomsky’s claims are in conflict with empiricist accounts, as some (including Chomsky) have supposed. Quine, for example, sees no clash with his own version of empiricist behaviourism, in which old talk of ideas is eschewed in favour of dispositions to observable behaviour.

Locke’s account of analytic propositions was everything that a succinct account of analyticity should be (Locke, 1924). He distinguishes two kinds of analytic propositions: identity propositions, in which ‘we affirm the said term of itself’, e.g., ‘Roses are roses’, and predicative propositions, in which ‘a part of the complex idea is predicated of the name of the whole’, e.g., ‘Roses are flowers’. Locke calls such sentences ‘trifling’ because a speaker who uses them is ‘trifling with words’. A synthetic sentence, in contrast, such as a mathematical theorem, states ‘a real truth’ and conveys instructive knowledge. Correspondingly, Locke distinguishes two kinds of ‘necessary consequences’: analytic entailments, where validity depends on the literal containment of the conclusion in the premiss, and synthetic entailments, where it does not. Locke did not originate this concept-containment notion of analyticity. It is discussed by Arnauld and Nicole, and it is safe to say that it has been around for a very long time.

All the same, there is the analogical version of evolutionary epistemology, called the ‘evolution of theories program’ by Bradie (1986) and the ‘Spencerian approach’ (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986). On this view a process analogous to biological natural selection has governed the development of human knowledge, rather than an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) and Karl Popper, sees the partial fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

We have usually taken both versions of evolutionary epistemology to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that ‘if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom’, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one’s knowledge beyond what one knows, one must proceed to something that is not already known; but, more interestingly, it also makes the synthetic claim that when expanding one’s knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic. Campbell is right that evolutionary epistemology does have the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).

Two further issues dominate the literature. The first involves questions about ‘realism’, i.e., what metaphysical commitment does an evolutionary epistemologist have to make? The second involves progress, i.e., according to evolutionary epistemology, does knowledge develop toward a goal? With respect to realism, many evolutionary epistemologists endorse what is called ‘hypothetical realism’, a view that combines a version of epistemological ‘scepticism’ with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here, but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Some have argued that evolutionary epistemologists must give up the idea of progress toward truth, because a natural selection model is non-teleological in essence; alternatively, a non-teleological sense of progress, following Kuhn (1970), can be embraced along with evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978, and Ruse, 1986). Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves the products of earlier variation and selective retention. Further, Stein and Lipton argue that such heuristics are analogous to biological pre-adaptations ~ evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. That heuristics guide epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986, and Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those which are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. An appeal to biological evolution is thus not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).

Although it is a new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is to be used for understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological program.

What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades many epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, Armstrong (1973) proposed that a belief of the form ‘This [perceived] object is F’ is [non-inferential] knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’; that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘χ’ and perceived object ‘y’, if ‘χ’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.
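Schematically ~ this is my own rendering, not Armstrong’s notation, and it leaves out the causal-contribution clause for brevity ~ the law-like condition just described can be written with \(B_x\) for x’s belief, \(H\) ranging over the relevant properties of the believer, and \(\Box_L\) marking what the laws of nature dictate:

\[ K_x(Fy) \;\text{ iff }\; B_x(Fy) \;\wedge\; \exists H\,\big[\, H(x) \;\wedge\; \Box_L\,\forall x'\,\forall y'\,\big( (H(x') \wedge B_{x'}(Fy')) \rightarrow Fy' \big) \,\big]. \]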

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, for it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanism for forming beliefs about perceived colour is working well, but you have been given good reason to think otherwise ~ to think, say, that things that look magenta to you are really chartreuse. If you fail to heed these reasons you have for thinking that your colour perception is awry, and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being magenta in such a way as to be a completely reliable sign (or to carry the information) that the thing is magenta.

Reliabilism is the view that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing is usually credited to F. P. Ramsey (1903-30), much of whose work was directed at saving classical mathematics from ‘intuitionism’, or what he called the ‘Bolshevik menace of Brouwer and Weyl’. In the theory of probability he was the first to develop an account based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a ‘redundancy theory of truth’, which he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different, specific function in our intellectual economy. For present purposes, what matters is that Ramsey said that a belief was knowledge if it was true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D. M. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. Armstrong said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.

Closely allied to the nomic sufficiency account of knowledge is the counterfactual approach, due primarily to F. I. Dretske (1971, 1981), A. I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that ‘S’s’ belief that ‘p’ qualifies as knowledge just in case ‘S’ believes ‘p’ because of reasons that would not obtain unless ‘p’ were true, or because of a process or method that would not yield belief in ‘p’ if ‘p’ were not true. For example, ‘S’ would not have his current reasons for believing there is a telephone before him, or would not have come to believe this in the way he does, unless there was a telephone before him. Thus, there is a counterfactual, reliable guarantee of the belief’s being true. A variant of the counterfactual approach says that ‘S’ knows that ‘p’ only if there is no ‘relevant alternative’ situation in which ‘p’ is false but ‘S’ would still believe that ‘p’; one’s justification or evidence for ‘p’ must be sufficient to eliminate all the relevant alternatives to ‘p’, where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’. That is, one’s justification or evidence for ‘p’ must be sufficient for one to know that every relevant alternative to ‘p’ is false.
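The counterfactual core just described can be put in a rough schematic form, in the style of Nozick’s (1981) tracking conditions; the numbering and notation are mine, with ‘\(\Box\!\rightarrow\)’ the subjunctive conditional and \(B_S(p)\) standing for S’s believing that p:

\[ S \text{ knows that } p \;\text{ only if }\; (1)\; p, \quad (2)\; B_S(p), \quad (3)\; \neg p \;\Box\!\rightarrow\; \neg B_S(p), \quad (4)\; p \;\Box\!\rightarrow\; B_S(p). \]

Condition (3) captures the requirement that the method would not yield belief in ‘p’ if ‘p’ were not true.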

Reliabilism is standardly classified as an ‘externalist’ theory because it invokes some truth-linked factor, and truth is ‘external’ to the believer. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, and so forth, that motivate the views which have come to be known as ‘direct reference’ theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment ~ e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. ~ and not just on what is going on internally in his mind or brain (Putnam, 1975, and Burge, 1979). Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by means of nomic, counterfactual or other such ‘external’ relations between ‘belief’ and ‘truth’.

The most influential counter-examples to reliabilism are the demon-world and the clairvoyance examples. The demon-world example challenges the necessity of the reliability requirement: in a possible world in which an evil demon creates deceptive visual experience, the process of vision is not reliable. Still, the visually formed beliefs in this world are intuitively justified. The clairvoyance example challenges the sufficiency of reliability. Suppose a cognitive agent possesses a reliable clairvoyance power, but has no evidence for or against his possessing such a power. Intuitively, his clairvoyantly formed beliefs are unjustified, but reliabilism declares them justified.

Another form of reliabilism, ‘normal worlds’ reliabilism (Goldman, 1986), answers the range problem differently, and treats the demon-world problem in the same stroke. Let a ‘normal world’ be one that is consistent with our general beliefs about the actual world. Normal-worlds reliabilism says that a belief, in any possible world, is justified just in case its generating processes have high truth ratios in normal worlds. This resolves the demon-world problem because the relevant truth ratio of the visual process is not its truth ratio in the demon world itself, but its ratio in normal worlds. Since this ratio is presumably high, visually formed beliefs in the demon world turn out to be justified.
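Put schematically ~ again my own shorthand ~ with \(\pi(b)\) the process that generates belief \(b\), \(TR_N\) its truth ratio taken across normal worlds \(N\), and \(\theta\) some suitably high threshold that the text does not specify:

\[ J(b) \;\text{ iff }\; TR_N(\pi(b)) \ge \theta . \]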

Yet a different version of reliabilism attempts to meet the demon-world and clairvoyance problems without recourse to the questionable notion of ‘normal worlds’. Consider Sosa’s (1992) suggestion that justified belief is belief acquired through ‘intellectual virtues’, and not through intellectual ‘vices’, where virtues are reliable cognitive faculties or processes. The task is to explain how epistemic evaluators use the notions of intellectual virtues and vices to arrive at their judgements, especially in the problematic cases. Goldman (1992) proposes a two-stage reconstruction of an evaluator’s activity. The first stage is a reliability-based acquisition of a ‘list’ of virtues and vices. The second stage is the application of this list to queried cases: the evaluator determines whether the processes in the queried cases resemble virtues or vices. Visual beliefs in the demon world are classified as justified because visual belief formation is one of the virtues. Clairvoyantly formed beliefs are classified as unjustified because clairvoyance resembles scientifically suspect processes that the evaluator represents as vices, e.g., mental telepathy, ESP, and so forth.

We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language C. S. Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including for example belief in God, are true if they work satisfactorily in the widest sense of the word. On James’s view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparent subjectivist consequences of this were wildly assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the twentieth century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F. C. S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in ‘The Meaning of Truth’ (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an ‘automatic sweetheart’ or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what it is to make it true that other persons have minds is the disturbing part.

Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some of his writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant’s doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.

Functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by the triplet of relations they stand in: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion; it could be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or ‘realization’ of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states, both of ourselves and of others, which is via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal structure differs from our own. It may then seem as though beliefs and desires can be ‘variably realized’ in causal architectures, just as much as they can be in different neurophysiological states.

The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

Among American psychologists and philosophers we find William James, who helped to popularize the philosophy of pragmatism with his book ‘Pragmatism: A New Name for Some Old Ways of Thinking’ (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by the American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believed that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but, rather, that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of the British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning ~ in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle’, for example, is given by the observed consequences or properties that objects called ‘brittle’ exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many philosophers known as logical positivists, a group of philosophers who were influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life ~ morality and religious belief, for example ~ are leaps of faith. As such, they depend upon what he called ‘the will to believe’ and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist ~ someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by the American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists ~ Peirce, James, and Dewey ~ has been renewed as an alternative to Rorty’s interpretation of the tradition.

One of the earliest versions of a correspondence theory was put forward in the 4th century BC by the Greek philosopher Plato, who sought to understand the meaning of knowledge and how it is acquired. Plato wished to distinguish between true belief and false belief. He proposed a theory based on intuitive recognition that true statements correspond to the facts ~ that is, agree with reality ~ while false statements do not. In Plato’s example, the sentence ‘Theaetetus flies’ can be true only if the world contains the fact that Theaetetus flies. However, Plato ~ and, much later, the 20th-century British philosopher Bertrand Russell ~ recognized this theory as unsatisfactory because it did not allow for false belief. Both Plato and Russell reasoned that if a belief is false because there is no fact to which it corresponds, it would then be a belief about nothing and so not a belief at all. Each then speculated that the grammar of a sentence could offer a way around this problem. A sentence can be about something (the person Theaetetus), yet false (flying is not true of Theaetetus). But how, they asked, are the parts of a sentence related to reality? One suggestion, proposed by the 20th-century philosopher Ludwig Wittgenstein, is that the parts of a sentence relate to the objects they describe in much the same way that the parts of a picture relate to the objects pictured. Once again, however, false sentences pose a problem: if a false sentence pictures nothing, there can be no meaning in the sentence.

In the late 19th century the American philosopher Charles S. Peirce offered another answer to the question ‘What is truth?’ He asserted that truth is that which experts will agree upon when their investigations are final. Many pragmatists such as Peirce claim that the truth of our ideas must be tested through practice. Some pragmatists have gone so far as to question the usefulness of the idea of truth, arguing that in evaluating our beliefs we should rather pay attention to the consequences that our beliefs may have. However, critics of the pragmatic theory are concerned that we would have no knowledge because we do not know which set of beliefs will ultimately be agreed upon; nor are there sets of beliefs that are useful in every context.

A third theory of truth, the coherence theory, also concerns the meaning of knowledge. Coherence theorists have claimed that a set of beliefs is true if the beliefs are comprehensive ~ that is, if they take account of everything relevant ~ and do not contradict each other.

Other philosophers dismiss the question ‘What is truth?’ with the observation that attaching the claim ‘it is true that’ to a sentence adds no meaning. However, these theorists, who have proposed what are known as deflationary theories of truth, do not dismiss all talk about truth as useless. They agree that there are contexts in which a sentence such as ‘it is true that the book is blue’ can have a different impact than the shorter statement ‘the book is blue’. What is more important, use of the word ‘true’ is essential when making a general claim about everything, nothing, or something, as in the statement ‘most of what he says is true’.

Nevertheless, the study of neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules; it evolved through the addition of separate modules that were eventually integrated via neural communication channels.

Similarly, individual linguistic symbols are distributed across clusters of brain areas and are not located in any single area. The specific sound patterns of words may be produced in dedicated regions. All the same, the symbolic and referential relationships between words are generated through a convergence of neural codes from different and independent brain regions. The processes of word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions that draw on stimulation from other regions. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, we cannot simply explain the most critical precondition for the evolution of this brain in those terms alone. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressure in this new ecological niche favoured pre-adaptive changes required for symbolic communication. Nevertheless, as this communication resulted in increasingly more complex behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Although male and female hominids favoured pair bonding and created more complex social organizations in the interests of survival, the interplay between social evolution and biological evolution changed the terms of survival radically. The enhanced ability to use symbolic communication to construct social interaction eventually made this communication the largest determinant of survival. Since this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

Nonetheless, if we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the actual experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While each mode of understanding the situation necessarily displaces the other, we require both to achieve a complete understanding of the situation.

Most experts agree that our ancestors became capable of articulate speech based on complex grammar and syntax between two hundred thousand and one hundred thousand years ago. The mechanisms in the human brain that allowed for this great achievement, however, clearly evolved over great spans of time. In biology textbooks, the lists of prior adaptations that enhanced the ability of our ancestors to use language normally include increased intelligence, significant alterations of oral and auditory abilities, the separation or lateralization of functions between the two sides of the brain, and the evolution of some innate or hard-wired grammar. When we look at how our ability to use language could actually have evolved over the entire course of hominid evolution, however, the process seems more basic and more counterintuitive than we had previously imagined.

Although we share some aspects of vocalization with our primate cousins, the mechanisms of human vocalization are quite different and have evolved over great spans of time. Incremental increases in hominid brain size over the last 2.5 million years enhanced cortical control over the larynx, which originally evolved to prevent food and other particles from entering the windpipe or trachea; this eventually contributed to the use of vocal symbolization. Humans have more voluntary motor control over sound produced in the larynx than any other vocal species, and this control is associated with higher brain systems involved in skeletal muscle control as opposed to merely visceral control. As a result, humans have direct cortical motor control over phonation and oral movement while chimps do not.

The larynx in modern humans sits in a comparatively low position in the throat, which significantly increases the range and flexibility of sound production. The low position of the larynx allows greater changes in the volume of the resonant chamber formed by the mouth and pharynx and makes it easier to shift sounds to the mouth and away from the nasal cavity. As a result, the sounds that comprise the vowel components of speech become much more variable, including extremes in resonance combinations such as the 'ee' sound in 'tree' and the 'aw' sound in 'flaw.' Equally important, the repositioning of the larynx dramatically increases the ability of the mouth and tongue to modify vocal sounds. This shift in the larynx also makes it more likely that food and water passing over the larynx will enter the trachea, and this explains why humans are more inclined to experience choking. Yet this disadvantage, which could have caused the shift to be selected against, was clearly outweighed by the advantage of being able to produce all the sounds used in modern language systems.

Some have argued that this removal of constraints on vocalization suggests that spoken language based on complex symbol systems emerged quite suddenly in modern humans only about one hundred thousand years ago. It is, however, far more likely that language use began with very primitive symbolic systems and evolved over time into increasingly complex systems. The first symbolic systems were not full-blown language systems, and they were probably not as flexible and complex as the vocal calls and gestural displays of modern primates. The first users of primitive symbolic systems probably coordinated most of their social communication with call and display behaviours like those of modern apes and monkeys.

Critically important to the evolution of enhanced language skills is the fact that behavioural adaptations preceded and situated the biological changes. This represents a reversal of the usual course of evolution, where biological change precedes behavioural adaptation. When the first hominids began to use stone tools, they probably did so in a very haphazard fashion, drawing on their flexible ape-like learning abilities. Still, the use of this technology over time opened a new ecological niche where selective pressures occasioned new adaptations. As tool use became more indispensable for obtaining food and organizing social behaviour, mutations that enhanced the use of tools probably functioned as a principal source of selection for both bodies and brains.

The first stone choppers appear in the fossil record about 2.5 million years ago, and they appear to have been fabricated with a few sharp blows of stone on stone. It is reasonable to assume that these primitive tools, which were hand-held and probably used to cut flesh and to chip bone to expose the marrow, were created by Homo habilis ~ the first large-brained hominid. Stone tool making is obviously a skill passed on from one generation to the next by learning, as opposed to a physical trait passed on genetically. After these tools became critical to survival, this introduced selection for learning abilities that did not exist for other species. Although the early tool makers may have had brains roughly comparable to those of modern apes, they were already beginning to be adapted for symbol learning.

The first symbolic representations were probably associated with social adaptations that were quite fragile, and any support that could reinforce these adaptations in the interest of survival would have been favoured by evolution. The expansion of the forebrain in Homo habilis, particularly the prefrontal cortex, was one of the core adaptations. Increased connectivity to brain regions involved in language processing enhanced this adaptation over time.

It is easy to imagine why incremental improvements in symbolic representation provided a selective advantage. Symbolic communication probably enhanced cooperation in the relationship of mothers to infants, allowed foraging techniques to be more easily learned, served as the basis for better coordinating scavenging and hunting activities, and generally improved the prospect of attracting a mate. As the list of domains in which symbolic communication was used became longer over time, this probably resulted in new selective pressures that served to make this communication more elaborate. After more functions became dependent on this communication, those who failed at symbol learning, or who could only use symbols awkwardly, were less likely to pass on their genes to subsequent generations.

The crude language of the earliest users of symbols must have been supplemented considerably by gestures and non-symbolic vocalizations. Only gradually, as the hominid use of symbolic communication evolved, did spoken language become a relatively independent and closed system, with symbolic forms progressively taking over functions served by non-symbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.

The relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The idea of a perceivable, objective spatial world thus brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position within the world and to the essentially stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere within it ~ and that where he is determines what he can perceive ~ go together.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules incorporated on some neural circuit board.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, it is worth recalling what Darwin actually claimed: that the different chances of survival of differently endowed offspring could account for the natural evolution of species. Nature 'selects' those members of a species best adapted to the environment in which they find themselves, just as human breeders may select for desirable traits in their livestock, and thereby control the evolution of the kind of animal they wish. In the phrase of Spencer, nature guarantees the 'survival of the fittest.' The Origin of Species was principally successful in marshalling the evidence for evolution rather than in providing a convincing mechanism for genetic change; Darwin remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the 'gene' as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution.

The solution to the mystery of how evolution by natural selection can shape sophisticated mechanisms ~ mechanisms in which each part of the body appears to exist for the sake of some complex action ~ is to be found in the workings of natural selection itself. Natural selection occurs whenever genetically influenced variation among individuals affects their survival and reproduction. If a gene codes for characteristics that result in fewer viable offspring in future generations, that gene is gradually eliminated. For instance, genetic mutations that increase vulnerability to infection, or cause foolish risk-taking or lack of interest in sex, will never become common. On the other hand, genes that confer resistance to infection, appropriate risk-taking and success in choosing fertile mates are likely to spread in the gene pool even if they have substantial costs.

A classical example is the spread of a gene for dark wing colour in a British moth population living downwind from a major source of air pollution. Pale moths were conspicuous on smoke-darkened trees and easily caught by birds, while a rare mutant form of the moth, whose colour closely matched that of the bark, escaped the predators' beaks. As the tree trunks became darkened, the mutant gene spread rapidly and largely displaced the gene for pale wing colour. The point is that natural selection involves no plan, no goal, and no direction ~ just genes increasing and decreasing in frequency depending on whether individuals with those genes have, compared with other individuals, greater or lesser reproductive success.
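
The logic of the example can be made concrete with a toy calculation. The sketch below (in Python) is purely illustrative: the survival rates and the starting frequency of the dark-wing gene are invented numbers, not measurements from the British moth studies, and serve only to show how differential survival by itself drives a gene frequency up or down across generations.

# Toy model of natural selection acting on wing colour.
# All numbers are invented for illustration only.

def next_generation(dark_freq, dark_survival=0.6, pale_survival=0.3):
    """Frequency of the dark-wing gene after one round of differential
    survival (bird predation) followed by proportional reproduction."""
    dark = dark_freq * dark_survival            # dark moths that escape predation
    pale = (1.0 - dark_freq) * pale_survival    # pale moths that escape predation
    return dark / (dark + pale)                 # survivors breed in proportion

freq = 0.01  # the dark form begins as a rare mutant
for generation in range(1, 21):
    freq = next_generation(freq)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: dark-wing frequency = {freq:.3f}")

With these made-up figures the dark-wing frequency climbs from one percent toward fixation within about twenty generations; swapping the two survival rates makes it collapse again, which is the 'no plan, no direction' point in numerical form.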

Many misconceptions have obscured the simplicity of natural selection. For instance, Herbert Spencer's nineteenth-century catch phrase 'survival of the fittest' is widely thought to summarize the process, but it actually gives rise to several misunderstandings. First, survival is of no consequence by itself. This is why natural selection has created some organisms, such as salmon and annual plants, that reproduce only once, then die. Survival increases fitness only insofar as it increases later reproduction. Genes that increase lifetime reproduction will be selected for even if they result in reduced longevity. Conversely, a gene that decreases total lifetime reproduction will obviously be eliminated by selection even if it increases an individual's survival.

Considerable confusion arises from the ambiguous meaning of 'fittest.' The fittest individuals in the biological sense are not necessarily the healthiest, strongest, or fastest. In today's world, and in many past environments, individuals of outstanding athletic accomplishment need not be the ones who produce the most grandchildren, a measure that should be roughly correlated with fitness. To someone who understands natural selection, it is no surprise that parents are so concerned about their children's reproduction.

We cannot call a gene or an individual 'fit' in isolation but only with reference to a particular species in a particular environment. Even in a single environment, every gene involves compromise. Consider a gene that makes rabbits more fearful and thereby helps to keep them from the jaws of foxes. Imagine that half the rabbits in a field have this gene. Because they do more hiding and less eating, these timid rabbits might be, on average, somewhat less well fed than their bolder companions. If, of a hundred rabbits hunkered down in the March swamps awaiting spring, two-thirds of the fearful ones starve to death while this is the fate of only one-third of the rabbits who lack the gene for fearfulness, the gene will be selected against. It might be nearly eliminated by a few harsh winters. Milder winters or an increased number of foxes could have the opposite effect; it all depends on the current environment.

The version of an evolutionary ethic called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.

We cannot simply explain the most critical precondition for the evolution of this brain in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication resulted in increasingly complex social behaviour, social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Because this communication was based on symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality of this mental realm cannot be reduced to, or entirely explained in terms of, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete description of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of that colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. No scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actual experience of that thought or feeling as an emergent aspect of global brain function.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, we require both to achieve a complete understanding of the situation.

If we include these two aspects of biological reality, the movement toward greater complexity in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is itself a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system can be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Even so, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

If we also concede that an indivisible whole contains, by definition, no separate parts, and reject the belief that a phenomenon is 'real' only when it is an 'observed' phenomenon, we can advance to some more interesting conclusions. The indivisible whole whose existence is inferred in the results of these experiments cannot in principle be itself the subject of scientific investigation, and there is a simple reason why this is the case. Science can claim knowledge of physical reality only when experiment has validated the predictions of a physical theory. Since we cannot measure or observe the indivisible whole, we encounter an 'event horizon' of knowledge where science can say nothing about the actual character of this reality. If undivided wholeness is a property of the entire universe, then we must also conclude that it exists on the most primary and basic level in all aspects of physical reality. What we deal with in science per se, however, are manifestations of this reality, which are invoked or 'actualized' in acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the 'indivisible' whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actual character of the indivisible whole.

The scientific implications of this extraordinary relationship between parts (quanta) and the indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear self-evident in logical and philosophical terms. It is also not necessary to attribute any extra-scientific properties to the whole in order to embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. We need only distinguish between what can be 'proven' in scientific terms and what can reasonably be 'inferred' in philosophical terms on the basis of the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally have expertise on only one side of a two-culture divide. Perhaps more important, many potential threats to the human future ~ such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation ~ can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. There is a simple reason for including this background ~ the implications of the amazing new fact of nature known as non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. This is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with it should feel free to ignore it. Yet the material is not so very challenging, and the hope is that those who engage it will find in it a common ground for understanding.

An 'idea' is what exists in the mind as a representation (as of something comprehended) or as a formulation (as of a plan) ~ in one classical sense, 'ideas' are the eternal, mind-independent forms or archetypes of the things in the material world. More broadly, an idea is something, such as a thought or conception, that potentially or actually exists in an individual mind as a product of mental activity ~ of intelligence, intellect, consciousness, or mental faculty, function or power. Human history is, in essence, a history of ideas, since thoughts are distinctly intellectual and stress contemplation and reasoning, just as language is the dress of thought.

Although ideas give rise to many problems of interpretation, between them they define a space of philosophical problems. Ideas are that with which we think, or, in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. Yet ideas also provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood.

Ideas began with Plato as eternal, mind-independent forms or archetypes of the things in the material world; Neoplatonism made them thoughts in the mind of God who created the world. The much criticized 'new way of ideas', so much a part of seventeenth- and eighteenth-century philosophy, began with Descartes' (1596-1650) extension of ideas to cover whatever is in human minds too, an extension of which Locke (1632-1704) made much use. But are ideas like mental images of things outside the mind, or non-representational, like sensations? If representational, are they mental objects, standing between the mind and what they represent, or are they mental acts and modifications of a mind perceiving the world directly? Finally, are they neither objects nor mental acts, but dispositions? Malebranche (1638-1715) and Arnauld (1612-94), and then Leibniz (1646-1716), famously disagreed about how 'ideas' should be understood, and recent scholars disagree about how Arnauld, Descartes, Locke and Malebranche in fact understood them.

Plato's theory of 'forms' is the classic celebration of the objective and timeless existence of ideas as concepts, reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the 'Timaeus', opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this otherworldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

With this came a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of such images. On Hume's account, it is not reason but 'the imagination' that is responsible for our making the empirical inferences that we do. There are certain general 'principles of the imagination' according to which ideas naturally come and go in the mind under certain conditions. It is the task of the 'science of human nature' to discover such principles, but without itself going beyond experience. For example, an observed correlation between things of two kinds can be seen to produce in everyone a propensity to expect a thing of the second sort given an experience of a thing of the first sort. We get a feeling, or an 'impression', when the mind makes such a transition, and that is what leads us to attribute a necessary relation between things of the two kinds. There is no necessity in the relations between things that happen in the world, but, given our experience and the way our minds naturally work, we cannot help thinking that there is.

A similar appeal to certain 'principles of the imagination' is what explains our belief in a world of enduring objects. Experience alone cannot produce that belief: everything we directly perceive is 'momentary' and 'fleeting'. And whatever our experience is like, no reasoning could assure us of the existence of something independent of our impressions which continues to exist when they cease. The series of constantly changing sense impressions presents us with observable features which Hume calls 'constancy' and 'coherence', and these naturally operate on the mind in such a way as eventually to produce 'the opinion of a continued and distinct existence'. The explanation is complicated, but it is meant to appeal only to psychological mechanisms which can be discovered by 'careful and exact experiments, and the observation of those particular effects, which result from [the mind's] different circumstances and situations'.

We believe not only in bodies, but also in persons, or selves, which continue to exist through time, and this belief too can be explained only by the operation of certain 'principles of the imagination'. We never directly perceive anything we can call ourselves: the most we can be aware of in ourselves are our constantly changing momentary perceptions, not the mind or self which has them. For Hume (1711-76), there is nothing that really binds the different perceptions together; we are led into the 'fiction' that they form a unity only because of the way in which the thought of such series of perceptions works upon the mind. 'The mind is a kind of theatre, where several perceptions successively make their appearance . . . There is properly no simplicity in it at one time, nor identity in different; whatever natural propensity we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us: they are the successive perceptions only, that constitute the mind.'

Leibniz held, in opposition to Descartes, that adult humans can have experiences of which they are unaware: experiences which affect what they do, but which are not brought to self-consciousness. There are also creatures, such as animals and infants, which completely lack the ability to reflect on their experiences and to become aware of them as experiences of theirs. The unity of a subject's experience, which stems from his capacity to recognize all his experience as his, was dubbed by Kant the transcendental unity of apperception ~ 'apperception' being Leibniz's term for inner awareness or self-consciousness, in contrast with 'perception' or outer awareness. This unity is transcendental rather than empirical: it is presupposed in experience and cannot be derived from it. Kant used the need for this unity as the basis of his attempted refutation of scepticism about the external world. He argued that my experiences could only be united in one self-consciousness if at least some of them were experiences of a law-governed world of objects in space. Outer experience is thus a necessary condition of inner awareness.

Here we seem to have a clear case of 'introspection'. Derived from the Latin 'intro' (within) + 'specere' (to look), introspection is the attention the mind gives to itself or to its own operations and occurrences. I can know there is a fat hairy spider in my bath by looking there and seeing it. But how do I know that I am seeing it rather than smelling it, or that my attitude to it is one of disgust rather than delight? One answer is: by a subsequent introspective act of 'looking within' and attending to the psychological state ~ my seeing the spider. Introspection, therefore, is a mental occurrence which has as its object some other psychological state, such as perceiving, desiring, willing, or feeling. In being a distinct awareness-episode it is different from the more general 'self-consciousness' which characterizes all or some of our mental history.

The awareness generated by an introspective act can have varying degrees of complexity. It might be a simple awareness of (mental) things ~ such as a particular perception-episode ~ or it might be the more complex knowledge of truths about one's own mind. In this latter, full-blown judgement form, introspection is usually the self-ascription of psychological properties and, when linguistically expressed, results in statements like 'I am watching the spider' or 'I am repulsed'.

In psychology this deliberate inward look becomes a scientific method when it is 'directed toward answering questions of theoretical importance for the advancement of our systematic knowledge of the laws and conditions of mental processes'. In philosophy, introspection (sometimes also called 'reflection') remains simply the notice which the mind takes of its own operations, and it has been used to serve the following important functions:

(1) Methodological: Thought experiments are a powerful tool in philosophical investigation. The Ontological Argument, for example, asks us to try to think of the most perfect being as lacking existence, and Berkeley's Master Argument challenges us to conceive of an unseen tree; conceptual results are then drawn from our failure or success. For such experiments to work, we must not only have (or fail to have) the relevant conceptions but also know that we have (or fail to have) them ~ presumably by introspection.

(2) Metaphysical: A philosophy of mind needs to take cognizance of introspection. One can argue for 'ghostly' mental entities, for 'qualia', or for 'sense-data' by claiming introspective awareness of them. First-person psychological reports can have special consequences for the nature of persons and personal identity: Hume, for example, was content to reject the notion of a soul-substance because he failed to find such a thing by 'looking within'. Moreover, some philosophers argue for the existence of additional, irreducibly subjective facts ~ the fact of 'what it is like' to be the person I am or to have an experience of such-and-such a kind. Introspection as our access to such facts becomes important when we consider whether any description of the world could be complete without them.

(3) Epistemological: Surprisingly, the most important use made of introspection has been in accounting for our knowledge of the outside world. According to a foundationalist theory of justification, an empirical belief is either basic and 'self-justifying' or justified in relation to basic beliefs. Basic beliefs therefore constitute the rock-bottom of all justification and knowledge. Now introspective awareness is said to have a unique epistemological status: in it, we are said to achieve the best possible epistemological position, and consequently introspective beliefs constitute the foundation of all justification.

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than some other belief about something else entirely? The same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs ~ the role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I do from other beliefs, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is these systematic relations that give the belief the content it has. They are the fundamental source of the content of beliefs. That is where coherence comes in: a belief has the content that it does because of the way in which it coheres within a system of beliefs. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.

Nonetheless, the concept of the 'given' refers to the immediate apprehension of the contents of sense experience, expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the 'given' maintain that its apprehension is absolutely certain: infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given ~ if a property appears, then the subject knows this.

Without some independent indication that some of the beliefs within a coherent system are true, coherence in itself is no indication of truth: fairy stories can cohere. Our criteria for justification must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class with which others must cohere in order to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in our perceptions of the world. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances. The latter seem more suitable as foundations, since there is no class of more certain perceptual beliefs to which we appeal for their justification.

The argument that foundations must be certain was offered by Lewis (1946). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable or justified in themselves without being certain or infallible.

Arguments against the idea of the given originate with Kant (1724-1804), who argues that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances and formulate beliefs utilizing those concepts before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed by Wilfrid Sellars (1963), according to whom the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself a primitive kind of knowing. Its being non-epistemic renders it immune from error, but also unsuitable as an epistemological foundation. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.

Contemporary foundationalists deny the non-epistemic claim while eschewing the claim that foundations, in the form of reports about appearances, are infallible; they seek alternatives to the given as foundations. Although arguments against infallibility are sound, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about appearances are less certain than claims about objective properties, nor that the latter are prior in chains of justification. That there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only consistent applications of concepts, and this may be so for some claims about appearances. Coherentists would add, however, that such beliefs stand in need of justification themselves and so cannot be foundations.

Coherentists will claim that a subject requires evidence that he applies concepts consistently ~ that he is able, for example, consistently to distinguish red from other colours that appear. Beliefs about red appearances could not then be justified independently of other beliefs expressing that evidence. To save the part of the doctrine of the given that holds beliefs about appearances to be self-justified, we require an account of how such justification is possible ~ of how some beliefs about appearances can be justified without appeal to evidence. Some foundationalists simply assert such warrant as derived from experience, but without the appeals to certainty made by proponents of the given.

Introspection, nonetheless, developed as an epistemological corollary to metaphysical dualism. The world of 'matter' is known through external or outer sense-perception, so cognitive access to 'mind' must be based on a parallel process of introspection, which, though it is not 'sense', as having nothing to do with external objects, yet is a great deal like it, and might properly enough be called 'internal sense'. However, having mind as object is not sufficient to make a way of knowing 'inner' in the relevant sense, because mental facts can be grasped through sources other than introspection. The point is rather that 'inner perception' provides a kind of access to the mental not obtained otherwise ~ it is a 'look within from within'. Stripped of metaphor, this indicates the following epistemological features:

1. Only I can introspect my mind.

2. I can introspect only my mind.

3. Introspective awareness is superior to any other knowledge of contingent facts that I or others might have.

Tenets (1) and (2) are grounded in the Cartesian idea of the 'privacy' of the mental. Normally, a single object can be perceptually or inferentially grasped by many subjects, just as the same subject can perceive and infer different things. The epistemic peculiarity of introspection is that it is exclusive ~ it gives knowledge only of the mental history of the subject introspecting.

Tenet (3) of the traditional theory is grounded in the Cartesian idea of 'privileged access'. The epistemic superiority of introspection lies in its being an infallible source of knowledge: first-person psychological statements, which are its typical results, cannot be mistaken. This claim is sometimes supported by an imaginability test, e.g., the impossibility of imagining that I believe that I am in pain while at the same time imagining evidence that I am not in pain. An apparent counter-example to this infallibility claim would be the introspective judgement 'I am perceiving a dead friend' when I am really hallucinating. This is met by reformulating such introspective reports as 'I seem to be perceiving a dead friend'. The importance of such privileged access is that introspection becomes a way of knowing immune from the pitfalls of other sources of cognition. The basic asymmetry between first- and third-person psychological statements is thus explained by the difference between introspective and non-introspective methods of knowing. Even dualists, however, can account for introspective awareness in different ways:

(1) Non-perceptual models ~ Self-scrutiny need not be perceptual. My awareness of an object 'O' changes the status of 'O': it now acquires the property of 'being an object of awareness'. On the basis of this property I can infer the fact that I am aware of 'O'. Such an 'inferential model' of self-awareness is suggested by the Bhatta Mimamsa school of Indian epistemology. This view does not construe introspection as a direct awareness of mental operations but, interestingly, we will have occasion to refer to theories where the emphasis on directness itself leads to a non-perceptual, or at least non-observational, account of introspection.

(2) Reflexive models ~ Epistemic access to our minds need not involve a separate attentive act. Part of what it means for a state to be conscious is that I know that I am in that state when I am in it. Consciousness is here conceived as a 'phosphorescence' attached to some mental occurrence, in no need of a subsequent illumination to reveal itself. Of course, if introspection is defined as a distinct act, then reflexive models are really accounts of first-person access that make no appeal to introspection.

(3) Public-mind theories and fallibility/infallibility models ~ The physicalist's denial of metaphysically private mental facts naturally suggests that 'looking within' is not merely like perception but is perception. For Ryle (1900-76), mental states are 'iffy' behavioural facts which, in principle, are equally accessible to everyone in the same way. One's own self-awareness, therefore, is in effect no different in type from anyone else's observations about one's mind.

A more interesting move is for the physicalist to retain the truism that I grasp that I am sad in a very different way from that in which I know you to be sad. This directness or non-inferential character of self-knowledge can be preserved in some physicalist theories of introspection. For instance, Armstrong's identification of mental states with causes of bodily behaviour, and of the latter with brain states, makes introspection the process of acquiring information about such inner physical causes. But since introspection is itself a mental state, it is a process in the brain as well: and since its grasp of the relevant causal information is direct, it becomes a process in which the brain scans itself.

Alternatively, a broadly 'functionalist' account of mental states suggests a machine analogue of the introspective situation: a machine table with the instruction 'Print: I am in state A when in state A' results in the output 'I am in state A' whenever state A occurs. Similarly, if we define mental states and events functionally, we can say that introspection occurs when an occurrence of a mental state M directly results in awareness of M. Observe that this way of emphasizing directness yields a non-perceptual and non-observational model of introspection. The machine, in printing 'I am in state A', does so (when it is not making a 'verbal mistake') just because it is in state A. There is no computation of information or process of ascertaining involved; at most, the machine simply passes through a sequence of states.
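
A minimal sketch may make the machine analogy concrete. The following Python fragment is not offered as a theory of mind, only as an illustration of a machine table in which the 'report' is produced directly by being in a state; the state names, outputs and transitions are invented for this example.

# A minimal machine-table sketch of the functionalist analogy.
# State names, outputs and transitions are invented for illustration.

MACHINE_TABLE = {
    # current state: (output printed while in that state, next state)
    "A": ("I am in state A", "B"),
    "B": ("I am in state B", "A"),
}

def run(start_state, steps):
    state = start_state
    for _ in range(steps):
        report, state = MACHINE_TABLE[state]
        # The report is issued simply because the machine is in the state;
        # no further process of observing or ascertaining is involved.
        print(report)

run("A", 4)

The point carried by the sketch is that nothing intervenes between being in state A and issuing 'I am in state A': the self-report just is part of that state's functional role.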

Return to the legitimate question: How do I know that I am seeing a spider? Traditionally this was interpreted as a demand for the faculty or information-processing mechanism whereby I come to acquire this knowledge, and peculiarities of first-person psychological awareness and reports were carried over as peculiarities of this mechanism. However, the question need not demand a search for a method of knowing, but rather an explanation of the special epistemic features of first-person psychological statements. On this reading, the problem of introspection (as a way of knowing) dissolves, but the problem of explaining 'introspective' or first-person authority remains.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition toward which an agent, 'S', exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mrs. Dudek, or in a free-market economy, or in God. It is sometimes supposed that all beliefs are 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought of as a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God, a matter of your believing that a free-market economy is desirable or that God exists.

It is doubtful, however, that non-propositional beliefs can, in every case, be reduced in this way. Debate on this point has tended to focus on an apparent distinction between 'belief-that' and 'belief-in', and the application of this distinction to belief in God. St. Thomas Aquinas (1225-74), for example, supposed that to believe in God is simply to believe that certain truths hold: that God exists, that he is benevolent, and so forth. Others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

H.H. Price (1969) defends the claim that there are different sorts of belief-in, some, but not all, reducible to belief-that. If you believe in God, you believe that God exists, that God is good, and so on. But, according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. One might attempt to analyse this further attitude in terms of additional beliefs-that: (1) 'S' believes that 'χ' exists (and perhaps holds further factual beliefs about 'χ'); (2) 'S' believes that 'χ' is good or valuable in some respect; and (3) 'S' believes that 'χ's being good or valuable in this respect is itself a good thing. An analysis of this sort, however, fails adequately to capture the further affective component of belief-in. Thus, according to Price, if you believe in God, your belief is not merely that certain truths hold: you possess, in addition, an attitude of commitment and trust toward God.

Notoriously, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require further layers of justification not required for cases of belief-that.

Some philosophers have argued that, at least for cases in which belief-in is synonymous with faith (or faith-in), evidential thresholds for the constituent propositional beliefs are diminished. You may reasonably have faith in God or Mrs. Dudek, even though beliefs about their respective attributes, were you to harbour them, would be evidentially substandard.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this pro-attitude is united with his belief that God exists, the belief may survive epistemic buffeting ~ and reasonably so ~ in a way that an ordinary propositional belief would not.

What is at stake here is the appropriateness of distinct types of explanation. Ever since the time of Aristotle (384-322 BC), philosophers have emphasized the importance of explanatory knowledge. In simplest terms, we want to know not only what is the case but also why it is. This consideration suggests that we define explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation (Why did my son have to die?) or moral justification (Why should women not be paid the same as men for the same work?). It would also be too narrow because some explanations are responses to how-questions (How does radar work?) or how-possibly-questions (How is it possible for cats always to land on four feet?).

In its overall sense, 'to explain' means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in them are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, a more complex account is required. The term 'explanandum' is used to refer to that which is to be explained; the term 'explanans' refers to that which does the explaining. The explanans and the explanandum taken together constitute the explanation.

One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. 'Why did you go to the pharmacy yesterday?' 'Because I had a headache and needed to get some aspirin.' It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. What does the explaining, however, is not the realization of a future goal ~ if the pharmacy happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it. In any case, it should not be automatically assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons.

The distinction between reasons and causes is motivated in good part by a desire to separate the rational from the natural order. Many who have insisted on distinguishing reasons from causes have failed to distinguish two kinds of reason. Consider my reason for sending a letter by express mail. Asked why I did so, I might say I wanted to get it there in a day, or simply: to get it there in a day. Strictly, the reason is expressed by 'to get it there in a day'. But this expresses my reason only because I am suitably motivated ~ I am in a reason state, wanting the letter to arrive there in a day. It is these reason states ~ wants, beliefs and intentions ~ and not reasons strictly so called, that are candidates for causes. The latter are abstract contents of propositional attitudes; the former are psychological elements that play motivational roles.

It has also seemed to those who deny that reasons are causes that the former justify, as well as explain, the actions for which they are reasons, whereas the role of causes is at most to explain. Another claim is that the relation between reasons (and here reason states are often cited explicitly) and the actions they explain is non-contingent, whereas the relation of causes to their effects is contingent. The 'logical connection argument' proceeds from this claim to the conclusion that reasons are not causes.

All in all, whether explanations of action are framed in terms of reasons or causes, there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained, in addition, that much human behaviour can be explained in terms of unconscious wishes. These Freudian explanations should probably be construed as basically causal.

Problems arise when teleological explanations are offered in other contexts. The behaviour of non-human animals is often explained in terms of purpose, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a super-empirical purpose is invoked, e.g., the explanation of living species in terms of God's purpose, or the vitalistic explanation of biological phenomena in terms of an entelechy or vital principle. In recent years an 'anthropic principle' has received attention in cosmology. All such explanations have been condemned by many philosophers as anthropomorphic.

Notwithstanding the preceding objection, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences such as evolutionary biology, anthropology and sociology. In the case of the peppered moth in Liverpool, for example, the change in colour to the dark phase and back again to the light phase provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies, anthropologists have maintained that various rituals, e.g., a rain dance, which may be inefficacious in bringing about their manifest goals (producing rain), actually fulfil the latent function of increasing social cohesion at a time of stress. Philosophers who analyse teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism, yet not all philosophers agree.

Mainly to avoid the incursion of unwanted theology, metaphysics, or anthropomorphism into science, many philosophers and scientists ~ especially during the first half of the twentieth century ~ held that science provides only descriptions and predictions of natural phenomena, but not explanations. Beginning in the 1930s, a series of influential philosophers of science ~ including Karl Popper (1935), Carl Hempel and Paul Oppenheim (1948) and Hempel (1965) ~ maintained that empirical science can explain natural phenomena without appealing to metaphysics and theology. It appears that this view is now accepted by a vast majority of philosophers of science, though there is sharp disagreement on the nature of scientific explanation.

This approach, developed by Hempel, Popper and others, became virtually a ‘received view’ in the 1960s and 1970s. According to this view, to give a scientific explanation of a natural phenomenon is to show how this phenomenon can be subsumed under a law of nature. A particular rupture in a water pipe can be explained by citing the universal law that water expands when it freezes and the fact that the temperature of the water in the pipe dropped below the freezing point, so that the expanding ice burst the pipe. General laws, as well as particular facts, can be explained by subsumption. The law of conservation of linear momentum can be explained by derivation from Newton’s second and third laws of motion. Each of these explanations is a deductive argument: the premisses constitute the explanans and the conclusion is the explanandum. The explanans contain one or more statements of universal laws and, in many cases, statements describing initial conditions. This pattern of explanation is known as the ‘deductive-nomological model’; any such argument shows that the explanandum had to occur given the explanans.
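Schematically, the deductive-nomological pattern just described can be set out as follows (a minimal sketch in standard notation, not the author’s own; the letters L, C and E simply label laws, initial conditions and the explanandum):

\[
L_1, \ldots, L_k,\ C_1, \ldots, C_m \;\vdash\; E
\]

The statements on the left constitute the explanans (one or more universal laws together with particular initial conditions); E is the explanandum, which follows from them deductively. In the water-pipe example, the law that water expands on freezing and the condition that the temperature fell below freezing point jointly entail the rupture.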

Moreover, in contrast to the foregoing views ~ which stress such factors as logical relations, laws of nature and causality ~ a number of philosophers have argued that explanation, and not just scientific explanation, can be analysed entirely in pragmatic terms.

During the past half-century much philosophical attention has been focussed on explanation in science and in history. Considerable controversy has surrounded the question of whether historical explanation must be scientific, or whether history requires explanations of different types. Many diverse views have been articulated: the foregoing brief survey does not exhaust the variety.

In everyday life we encounter many types of explanation which appear not to raise philosophical difficulties, in addition to those already mentioned. Prior to take-off, a flight attendant explains how to use the safety equipment on the aeroplane. In a museum the guide explains the significance of a famous painting. A mathematics teacher explains a geometrical proof to a bewildered student. A newspaper story explains how a prisoner escaped. Additional examples come easily to mind. The main point is to remember the great variety of contexts in which explanations are sought and given.

Another item of importance to epistemology is the widely held notion that non-demonstrative inference can be characterized as inference to the best explanation. Given the variety of views on the nature of explanation, this popular slogan can hardly provide a useful philosophical analysis.

Inference to the best explanation is claimed by many to be a legitimate form of non-deductive reasoning, which provides an important alternative to both deduction and enumerative induction. Some would claim it is only through reasoning to the best explanation that one can justify beliefs about the external world, the past, theoretical entities in science, and even the future. Consider beliefs about the external world, and assume that all we ultimately have to rely on is knowledge of our subjective and fleeting sensations. It seems obvious that we cannot deduce any truths about the existence of physical objects from truths describing the character of our sensations. But neither can we observe a correlation between sensations and something other than sensations, since by hypothesis all we have to rely on ultimately is knowledge of our sensations. Nonetheless, we may be able to posit physical objects as the best explanation for the character and order of our sensations. In the same way, various hypotheses about the past might best explain present memory; theoretical postulates in physics might best explain phenomena in the macro-world; and it is arguable that our only access to the future is through explanatory inference from past observations. But what exactly is the form of an inference to the best explanation?

When one tries to make such an inference explicit in ordinary discourse, it often seems to have the following form:

1. ‘O’ is the case

2. If ‘E’ had been the case, ‘O’ is what we would expect,

Therefore, there is a high probability that:

3. ‘E’ was the case.

This is the form of inference that Peirce (1839-1914) called ‘hypothesis’ or ‘abduction’. To consider a very simple example, we might, upon coming across some footprints on the beach, reason to the conclusion that a person walked along the beach recently, by noting that if a person had walked along the beach one would expect to find just such footprints.
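Since the assessment that follows turns on how the conditional in premiss (2) is read, one piece of elementary logic is worth setting down explicitly (a minimal sketch in standard notation, not the author’s own; ‘E’ and ‘O’ are the schematic letters of the schema above):

\[
O \vDash (E \supset O) \quad \text{for any } E \text{ whatsoever,}
\]

since a material conditional with a true consequent is automatically true. So, on the material reading, premiss (2) is satisfied by every candidate ‘E’ once the observation ‘O’ is granted ~ which is the difficulty raised below.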

But is abduction a legitimate form of reasoning? Obviously, if the conditional in (2) above is read as a material conditional, such arguments would be hopelessly bad. Since, as just noted, the proposition that ‘E’ materially implies ‘O’ is entailed by ‘O’, there would always be an infinite number of competing inferences to the best explanation, and none of them would seem to lend support to its conclusion. The conditionals we employ in ordinary discourse, however, are seldom, if ever, material conditionals. The vast majority of ‘if . . . then . . .’ statements do not seem to be truth-functionally complex. Rather, they seem to assert a connection of some sort between the states of affairs referred to in the antecedent (after the ‘if’) and in the consequent (after the ‘then’). Perhaps the argument form has more plausibility if the conditional is read in this more natural way. But consider an alternative explanation of the footprints:

1. There are footprints on the beach

2. If cows wearing boots had walked along the beach recently, one would expect to find such footprints,

Therefore, there is a high probability that:

3. Cows wearing boots walked along the beach recently.

This inference has precisely the same form as the earlier inference to the conclusion that a person walked along the beach recently, and its premisses are just as true, but we would no doubt regard both the conclusion and the inference as simply silly. If we are to distinguish between legitimate and illegitimate reasoning to the best explanation, it would seem that we need a more sophisticated model of the argument form. It would seem that in reasoning to an explanation we need criteria for choosing between alternative explanations. Moreover, if reasoning to the best explanation is to constitute a genuine alternative to inductive reasoning, it is important that these criteria not be implicit premisses which would convert our argument into an inductive argument. Thus, for example, if the reason we conclude that people rather than cows walked along the beach is only that we are implicitly relying on the premiss that footprints of this sort are usually produced by people, then it is certainly tempting to suppose that our inference to the best explanation was really a disguised inductive inference of the form:

1. Most footprints are produced by people.

2. Here are footprints

Therefore, in all probability,

3. These footprints were produced by people.

If we follow the suggestion made above, we might construe the form of reasoning to the best explanation as follows:

1. ‘O’ (a description of some phenomenon).

2. Of the set of available and competing explanations E1, E2, . . . , En capable of explaining ‘O’, E1 is the best according to the correct criteria for choosing among potential explanations.

Therefore, in all probability,

3. E1.

There is, however, a crucial ambiguity in the concept of the best explanation. It might be true of an explanation E1 that it has the best chance of being correct without its being probable that E1 is correct. If I have two tickets in a lottery and one hundred other people each have one ticket, I am the person who has the best chance of winning, but it would be completely irrational to conclude on that basis that I am likely to win; it is much more likely that one of the other people will win than that I will. To conclude that a given explanation is actually likely to be correct, one must hold that it is more likely to be true than that the disjunction of all other possible explanations is correct. And since, on many models of explanation, the number of potential explanations satisfying the formal requirements of adequate explanation is unlimited, this will be no mean feat.
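The lottery point can be made vivid with the arithmetic implicit in the example (a worked illustration using only the numbers given above):

\[
P(\text{I win}) = \frac{2}{102} \approx 0.02, \qquad P(\text{someone else wins}) = \frac{100}{102} \approx 0.98 .
\]

No single rival ticket-holder has a better chance than I do, yet it is overwhelmingly probable that one of them, rather than I, will win. In just this way an explanation can be the best available and still be unlikely to be correct.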

Explanations are also sometimes taken to be more plausible the more explanatory ‘power’ they have. This power is usually defined in terms of the number of things, or more likely the number of kinds of things, the theory can explain. Thus, Newtonian mechanics was so attractive, the argument goes, partly because of the range of phenomena the theory could explain.

The familiarity of a kind of explanation is also sometimes cited as a reason for preferring that explanation to less familiar kinds of explanation. So if one has provided an evolutionary explanation for the disappearance of one organ in a creature, one should look more favourably on a similar sort of explanation for the disappearance of another organ.

In evaluating the claim that inference to the best explanation constitutes a legitimate and independent argument form, one must explore the question of whether it is a contingent fact that at least most phenomena have explanations, and that explanations satisfying a given criterion ~ simplicity, for example ~ are more likely to be correct. While it might be nice if the universe were structured in such a way that simple, powerful, familiar explanations were usually the correct ones, it is difficult to avoid the conclusion that, if this is true, it is an empirical fact about our universe discoverable only a posteriori. If reasoning to the best explanation relies on such criteria, it seems that one cannot without circularity use reasoning to the best explanation to discover that reliance on such criteria is safe. But if one has some independent way of discovering that simple, powerful, familiar explanations are more often correct, then why should we think that reasoning to the best explanation is an independent source of information about the world? Why not conclude that it would be more perspicuous to represent the reasoning this way:

1. Most phenomena have the simplest, most powerful, familiar explanations available.

2. Here is an observed phenomenon, and E1 is the simplest, most powerful, familiar explanation available.

Therefore, in all probability,

3. This is to be explained by E1.

But the above is simply an instance of familiar inductive reasoning.

There are various ways of classifying mental activities and states. One useful distinction is that between the propositional attitudes and everything else. A propositional attitude is one whose description takes a sentence as the complement of the verb. Belief is a propositional attitude: one believes (truly or falsely, as the case may be) that there are cookies in the jar. That there are cookies in the jar is the proposition expressed by the sentence following the verb. Knowing, judging, inferring, concluding and doubting are also propositional attitudes: one knows, judges, infers, concludes, or doubts that a certain proposition (the one expressed by the sentential complement) is true.

Though the propositions are not always explicit, hope, fear, expectation, intention, and a great many other terms are also (usually) taken to describe propositional attitudes: one hopes that (is afraid that, etc.) there are cookies in the jar. Wanting a cookie is, or can be construed as, a propositional attitude: wanting that one have (or eat, or whatever) a cookie; intending to eat a cookie is intending that one will eat a cookie.

Propositional attitudes involve the possession and use of concepts and are, in this sense, representational. One must have some knowledge or understanding of what χ’s are in order to think, believe or hope that something is ‘χ’. In order to want a cookie, or intend to eat one, one must, in some way, know or understand what a cookie is. One must have this concept. There is a sense in which one can want to eat a cookie without knowing what a cookie is ~ if, for example, one mistakenly thinks there are muffins in the jar and, as a result, wants to eat what is in the jar (= cookies). But this sense is hardly relevant, for in this sense one can want to eat the cookies in the jar without wanting to eat any cookies. For this reason (and in this sense) the propositional attitudes are cognitive: they require or presuppose a level of understanding and knowledge ~ the kind of understanding and knowledge required to possess the concepts involved in occupying the propositional state.

Though there is sometimes disagreement about their proper analysis, non-propositional mental states do not, at least on the surface, take propositions as their objects. Being in pain, being thirsty, smelling the flowers and feeling sad are introspectively prominent mental states that do not, like the propositional attitudes, require the application or use of concepts. One does not have to understand what pain or thirst is in order to experience pain or thirst. Assuming that pain and thirst are conscious phenomena, one must, of course, be conscious or aware of the pain or thirst to experience them, but awareness of must be carefully distinguished from awareness that. One can be aware of ‘χ’ ~ a thirst or a toothache ~ without being aware that, e.g., one is feeling thirsty or having a painful sensation. Awareness that, like belief that and knowledge that, is a propositional attitude; awareness of is not.

As the examples ~ pain, thirst, tickles, itches, hunger ~ are meant to suggest, the non-propositional states have a felt or experiential (phenomenal) quality to them that is absent in the case of the propositional attitudes. Aside from whom it is we believe to be playing the tuba, believing that John is playing the tuba is much the same as believing that Joan is playing the tuba. These are different propositional states, different beliefs, yet they are distinguished entirely in terms of their propositional content ~ in terms of what they are beliefs about. Contrast this with the difference between hearing John play the tuba and seeing him play the tuba. These experiences differ, not just (as beliefs do) in what they are of or about (for they are, in fact, of the same thing: John playing the tuba), but in their qualitative character: the one involves a visual, the other an auditory, experience. The difference between seeing John play the tuba and hearing John play the tuba is, then, a difference in the sensations involved ~ a sensory, not a merely conceptual, difference.

Some mental states are a combination of sensory and cognitive elements: fear and terror, sadness and anger, joy and depression are ordinarily thought of in this way. Sensations are identified not in terms of what propositions (if any) they represent, but (like visual and auditory experiences) in terms of their intrinsic character, as they are felt by the one experiencing them. But when we describe a person as being afraid that, sad that, or upset that (as opposed to merely thinking or knowing that) so-and-so happened, we typically mean to be describing the kind of sensory feeling or emotional quality accompanying the cognitive state. Being afraid that the dog is going to bite me, for example, involves both a cognitive element (the thought that the dog is about to bite me) and a felt, emotional element (the fear itself).

To speak of sensations of red objects, the tuba, and so forth, is to say that these sensations carry information about an object’s colour, its shape, orientation and position, and (in the case of audition) information about acoustic qualities such as pitch, timbre and volume. It is not to say that the sensations share the properties of the objects they are sensations of, or that they have the properties they carry information about. Auditory sensations are not loud and visual sensations are not coloured. Sensations are bearers of non-conceptualized information, and the bearer of the information that something is red need not itself be red. It need not even be the sort of thing that could be red: it might be a certain pattern of neuronal events in the brain. Nonetheless, the sensation, though not itself red, will (being the normal bearer of that information) typically produce in the subject who undergoes it a belief, or a tendency to believe, that something red is being experienced. Hence the existence of hallucinations.

Just as there are theories of mind which would deny the existence of any state of mind whose essence was purely qualitative (i.e., did not consist of the state’s extrinsic, causal properties), there are theories of perception and knowledge ~ cognitive theories ~ that deny a sensory component to ordinary sense perception. The sensory dimension (the look, feel, smell and taste of things) is (if it is not altogether denied) identified with some cognitive condition (knowledge or belief) of the experiencer. All seeing (not to mention hearing, smelling and feeling) becomes a form of believing or knowing. As a result, organisms that cannot know cannot have experiences. To avoid such strikingly counterintuitive results, these theories often appeal to implicit or otherwise unobtrusive (and, typically, undetectable) forms of believing or knowing.

Aside, though, from introspective evidence (closing and opening one’s eyes, if it changes beliefs at all, does not just change beliefs; it eliminates and restores a distinctive kind of conscious experience), there is a variety of empirical evidence for the existence of a stage in perceptual processing that is conscious without being cognitive (in any recognizable sense). For example, experiments with brief visual displays reveal that when subjects are exposed for very brief (50 msec.) intervals to information-rich stimuli, there is persistence (at the conscious level) of what is called an image or visual icon that embodies more information about the stimulus than the subject can cognitively process or report on. Subjects can and do exploit the information in this persisting icon by reporting on any part of the no-longer-present array of numbers (they can, for instance, report the top three numbers, the middle three or the bottom three). They cannot, however, identify all nine numbers. They report seeing all nine, and they can identify any one of the nine, but they cannot identify all nine. Knowledge and belief, recognition and identification ~ these cognitive states, though present for any two or three numbers in the array, are absent for all nine numbers in the array. Yet the image carries information about all nine numbers (how else account for subjects’ ability to identify any number in the no-longer-present array?). Obviously, then, the information is there, in the experience itself, whether or not it is, or even can be, cognitively processed. As psychologists conclude, there are limits on the information-processing capacities of the later (cognitive) mechanisms that are not shared by the sensory stages themselves.

Perceptual knowledge is knowledge acquired by or through the senses. This includes most of what we know. Some would say it includes everything we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something ~ that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up ~ by some sensory means. Seeing that the light has turned green is coming to know that it has turned green by use of the eyes. Feeling that the melon is overripe is coming to know a fact ~ that the melon is overripe ~ by one’s sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Seeing a rotten kumquat is not at all like the experience of smelling, tasting or feeling a rotten kumquat. Yet all these experiences can result in the same knowledge ~ knowledge that the kumquat is rotten. Although the experiences are much different, they must, if they are to yield knowledge, embody information about the kumquat: the information that it is rotten. Seeing that the fruit is rotten differs from smelling that it is rotten, not in what is known, but in how it is known. In each case the information comes from the same source ~ the rotten kumquat ~ but it is, so to speak, delivered via different channels and coded in different kinds of experience.

It is important to avoid confusing perceptual knowledge of facts, e.g., that the kumquat is rotten, with the perception of objects, e.g., rotten kumquats. It is one thing to see (taste, smell, feel) a rotten kumquat, and quite another to know (by seeing or tasting) that it is a rotten kumquat. Some people, after all, do not know what kumquats look like. They see a kumquat but do not realize (do not see that) it is a kumquat. Again, some people do not know what a kumquat smells like. They smell a rotten kumquat and ~ thinking, perhaps, that this is the way this strange fruit is supposed to smell ~ do not realize from the smell, i.e., do not smell that, it is a rotten kumquat. In such cases people see and smell rotten kumquats ~ and in this sense perceive rotten kumquats ~ without ever knowing that they are kumquats, let alone rotten kumquats. They cannot know this, at least not by seeing and smelling, until they have learned something about (rotten) kumquats. Since it is facts that are the objects of perceptual knowledge ~ knowing, by sensory means, that something is ‘F’ ~ we will be primarily concerned with the question of what more, beyond the perception of F’s, is needed to see that (and thereby know that) they are ‘F’. The question is, however, not how we see kumquats (for even the ignorant can do this), but how we know (if indeed we do) that that is what we see.

Much of our perceptual knowledge is indirect, dependent or derived. By this is meant that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspaper, that our team has lost again; or see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bells) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees ~ hence, comes to know ~ something about the gauge (that it reads ‘empty’), the newspaper (what it says) or the person’s expression, one would not see (hence, know) what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot ~ not, at least, in this way ~ hear that one’s visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a’ is ‘F’, coming to know thereby that ‘a’ is ‘F’, by seeing (hearing, and so forth) that some other condition, b’s being ‘G’, obtains. When this occurs, the knowledge (that ‘a’ is ‘F’) is derived from, or dependent on, the more basic perceptual knowledge that ‘b’ is ‘G’.

Though perceptual knowledge about objects is often, in this way, dependent on knowledge of facts about different objects, the derived knowledge is sometimes about the same object. That is, we see that ‘a’ is ‘F’ by seeing, not that some other object is ‘G’, but that ‘a’ itself is ‘G’. We see, by her expression, that she is nervous. She can tell that the fabric is silk (not polyester) by the characteristic ‘greasy’ feel of the fabric itself (not, as I do, by what is printed on the label). We tell whether it is an oak tree, a Porsche, a geranium, an igneous rock or a misprint by its shape, colour, texture, size, behaviour and distinctive markings. Perceptual knowledge of this sort is also derived ~ derived from the more basic facts (about ‘a’) that we use in making the identification. In this case the perceptual knowledge is still indirect because, although the same object is involved, the facts we come to know about it are different from the facts that enable us to know it.

Derived knowledge is sometimes described as inferential, but this is misleading. At the conscious level there is no passage of the mind from premiss to conclusion, no reasoning, no problem-solving. The observer, the one who sees that ‘a’ is ‘F’ by seeing that ‘b’ (or ‘a’ itself) is ‘G’, need not be (and typically is not) aware of any process of inference, any passage of the mind from one belief to another. The resulting knowledge, though logically derivative, is psychologically immediate. I could see that she was getting angry, so I moved my hand. I did not ~ at least not at any conscious level ~ infer (from her expression and behaviour) that she was getting angry. I could simply see (or so it seemed to me) that she was getting angry. It is this psychological immediacy that makes indirect perceptual knowledge a species of perceptual knowledge.

The psychological immediacy that characterises so much of our perceptual knowledge ~ even (sometimes) the most indirect and derived forms of it ~ does not mean that no learning is required to know in this way. One is not born with (and may, in fact, never develop) the ability to recognize daffodils, muskrats and angry companions. It is only after long experience that one is able visually to identify such things. Beginners may do something corresponding to inference: they recognize relevant features of trees, birds and flowers ~ features they already know how to identify perceptually ~ and then infer (conclude), on the basis of what they see, and under the guidance of more expert observers, that it is an oak, a finch or a geranium. But the experts (and we are all experts on many aspects of our familiar surroundings) do not typically go through such a process. The expert just sees that it is an oak, a finch or a geranium. The perceptual knowledge of the expert is still dependent, of course, since even an expert cannot see what kind of flower it is if she cannot first see its colour and shape; but the expert has developed identificatory skills that no longer require the sort of conscious inferential process that characterizes a beginner’s efforts.

Coming to know that ‘a’ is ‘F’ by seeing that ‘b’ is ‘G’ obviously requires some background assumption on the part of the observer, an assumption to the effect that ‘a’ is ‘F’ (or is probably ‘F’) when ‘b’ is ‘G’. If one does not assume (take it for granted) that the gauge is properly connected, and does not thereby assume that it would not register ‘empty’ unless the tank was nearly empty, then even if one could see that it registered ‘empty’, one would not learn (hence, would not see) that one needed gas. At least, one would not see it by consulting the gauge. Likewise, in trying to identify birds, it is no use being able to see their markings if one does not know something about which birds have which markings ~ something of the form: a bird with these markings is (probably) a finch.

It would seem, moreover, that these background assumptions, if they are to yield knowledge that ‘a’ is ‘F’, as they must if the observer is to see (by b’s being ‘G’) that ‘a’ is ‘F’, must themselves qualify as knowledge. For if this background fact is not known ~ if it is not known whether ‘a’ is ‘F’ when ‘b’ is ‘G’ ~ then knowledge of b’s being ‘G’ is, taken by itself, powerless to generate the knowledge that ‘a’ is ‘F’. If the conclusion is to be known to be true, both of the premisses used to reach that conclusion must be known to be true. Or so it would seem.

The externalism/internalism distinction is most commonly drawn as follows: a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person ~ internal to his cognitive perspective; and externalist if it allows that at least part of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his understanding. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification. It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought content.

The internalist requirement of cognitive accessibility can be interpreted in at least two ways. A strong version of internalism requires that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible’ suggests the weak version, it is the stronger reading that best fits the idea that most clearly motivates internalism: the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true.

It should be carefully noticed that, when internalism is construed in terms of cognitive accessibility, it is neither necessary nor sufficient for internalism that the justifying factors literally be internal mental states of the person. Coherentist views, for instance, can also be internalist, if both the other beliefs with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible. Being an internal mental state is not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; and it is not sufficient, because there are views according to which at least some mental states need not be actual (on strong versions) or even possible (on weak versions) objects of cognitive awareness.

Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all the justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is roughly that the belief be produced by a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is a result of a reliable process, and perhaps further conditions as well. This makes it possible for such a view to retain an internalist account of epistemic justification, though the significance of justification is thereby seriously diminished. Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children and unsophisticated adults possess knowledge, though not the weaker conviction that such individuals are epistemically justified in their beliefs. It is also less vulnerable to internalist counter-examples, since the intuitions involved there pertain more clearly to justification than to knowledge. As with justification and knowledge, the traditional view of content has been strongly internalist in character. An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts ‘from the inside’, simply by reflection. Moreover, if the adoption of an externalist account of mental content means that part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification.

Externalists, however, argue that the indirect knowledge that ‘a’ is ‘F’, though it may depend on the knowledge that ‘b’ is ‘G’, does not require knowledge of the connecting fact, the fact that ‘a’ is ‘F’ when ‘b’ is ‘G’. Simple belief (or, perhaps, justified belief; there are stronger and weaker versions of externalism) in the connecting fact is sufficient to confer knowledge of the connected fact. Even if, strictly speaking, I do not know that she is nervous whenever she fidgets like that, I can nonetheless see, and hence know, that she is nervous by the way she fidgets, if I (correctly) assume that this behaviour is a reliable expression of nervousness. One need not know the gauge is working well to make observations (acquire observational knowledge) with it. All that is required, besides the observer believing that the gauge is reliable, is that the gauge, in fact, be reliable, i.e., that the observer’s background beliefs be true. Critics of externalism have been quick to point out that this theory has the unpalatable consequence that knowledge can be made possible by ~ and, in this sense, be made to rest on ~ lucky hunches (that turn out true) and unsupported (even irrational) beliefs. Surely, internalists argue, if one is to know that ‘a’ is ‘F’ on the basis of b’s being ‘G’, one should have (as a bare minimum) some justification for thinking that ‘a’ is ‘F’, or is probably ‘F’, when ‘b’ is ‘G’.

However these matters are resolved (with the possible exception of extreme externalism), indirect perception obviously requires some understanding (knowledge? justification? belief?) of the general relationship between the fact one comes to know (that ‘a’ is ‘F’) and the facts (that ‘b’ is ‘G’) that enable one to know it. And it is this requirement on background knowledge or understanding that leads to questions about the possibility of indirect perceptual knowledge. Is it really knowledge? This question is inspired by sceptical doubts about whether we can ever know the connecting facts in question. How is it possible to learn, to acquire knowledge of, the connecting facts, knowledge of which is necessary to see, by b’s being ‘G’, that ‘a’ is ‘F’? These connecting facts do not appear to be perceptually knowable. Quite the contrary, they appear to be general truths knowable (if knowable at all) by inductive inference from past observations. And if one is sceptical about obtaining knowledge in this indirect, inductive way, one is, perforce, sceptical about the existence of the kind of indirect knowledge, including the indirect perceptual knowledge of the sort described, that depends on it.

Even if one puts such sceptical questions aside, there remains a concern about the perceptual character of this kind of knowledge. If one sees that ‘a’ is ‘F’ by seeing that ‘b’ is ‘G’, is one really seeing that ‘a’ is ‘F’? Isn’t perception merely a part ~ and, from an epistemological standpoint, the less significant part ~ of the process whereby one comes to know that ‘a’ is ‘F’? One must, it is true, see that ‘b’ is ‘G’, but this is only one of the premisses needed to reach the conclusion (knowledge) that ‘a’ is ‘F’. There is also the background knowledge that is essential to the process. If we think of a theory as any factual proposition, or set of factual propositions, that cannot itself be known in some direct observational way, we can express this worry by saying that indirect perception is always theory-loaded: seeing (indirectly) that ‘a’ is ‘F’ is only possible if the observer already has knowledge of (justification for, belief in) some theory, the theory ‘connecting’ the fact one comes to know (that ‘a’ is ‘F’) with the fact (that ‘b’ is ‘G’) that enables one to know it.

This, of course, reverses the standard foundationalist picture of human knowledge. Instead of theoretical knowledge depending on, and being derived from, perception, perception (of the indirect sort) presupposes knowledge of the relevant theory.

Foundationalists are quick to point out that this apparent reversal in the structure of human knowledge is only apparent. Our indirect perception of facts depends on theory, yes, but this merely shows that indirect perceptual knowledge is not part of the foundation. To reach the kind of perceptual knowledge that lies at the foundation, we need to look at a form of perception that is purified of all theoretical elements. This, then, will be perceptual knowledge pure and direct. No background knowledge or assumptions about connecting regularities are needed in direct perception because the known facts are presented directly and immediately and not (as in indirect perception) on the basis of other facts. In direct perception all the justification (needed for knowledge) is right there in the experience itself.

What, then, about the possibility of perceptual knowledge pure and direct, the possibility of coming to know, on the basis of sensory experience, that ‘a’ is ‘F’ where this does not require assumptions or knowledge that has a source outside the experience itself? Where is this epistemological ‘pure gold’ to be found?

There are, basically, two views about the nature of direct perceptual knowledge (coherentists would deny that any of our knowledge is basic in this sense). These views (following traditional nomenclature) can be called ‘direct realism’ and ‘representationalism’ or ‘representative realism’. A representationalist restricts direct perceptual knowledge to objects of a very special sort: ideas, impressions, or sensations, sometimes called sense-data ~ entities in the mind of the observer. One directly perceives a fact, e.g., that ‘b’ is ‘G’, only when ‘b’ is a mental entity of some sort ~ a subjective appearance or sense-datum ~ and ‘G’ is a property of this datum. Knowledge of these sensory states is supposed to be certain and infallible. These sensory facts are, so to speak, right up against the mind’s eye. One cannot be mistaken about these facts, for these facts are, in reality, facts about the way things appear to be, and one cannot be mistaken about the way things appear to be. Normal perception of external conditions, then, turns out to be (always) a type of indirect perception. One ‘sees’ that there is a tomato in front of one by seeing that the appearance (of the tomato) has a certain quality (reddish and bulgy) and inferring (this is typically said to be automatic and unconscious), on the basis of certain background assumptions, e.g., that there typically is a tomato in front of one when one has experiences of this sort, that there is a tomato in front of one. All knowledge of objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on the still more direct knowledge of appearances.

For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is ‘loaded’ with the theory that there is some regular, uniform correlation between the way things appear (known in a perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).

The second view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief. The justification needed for the knowledge is right there in the experience itself.

To understand the way this is supposed to work, consider an ordinary example. ‘S’ identifies a banana (learns that it is a banana) by noting its shape and colour ~ perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S’s more basic perceptual knowledge of its shape, colour, smell and taste. ‘S’ learns that it is a banana by seeing that it is yellow, banana-shaped, etc. Nonetheless, S’s perception of the banana’s colour and shape is direct. ‘S’ does not see that the object is yellow, for example, by seeing, knowing or believing anything more basic ~ either about the banana or about anything else, e.g., his own sensations of the banana. ‘S’ has learned to identify such features, of course, but what ‘S’ learned to do is not to perform an inference, even an unconscious inference, from other things he believes. What ‘S’ acquired was a cognitive skill, a disposition to believe of the yellow objects he sees that they are yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S’s identificatory successes will depend on his operating in certain special conditions, of course. ‘S’ will not, perhaps, be able visually to identify yellow objects in drastically reduced lighting, at odd viewing angles, or when afflicted with certain nervous disorders. But these facts about when ‘S’ can see that something is yellow do not show that his perceptual knowledge (that ‘a’ is yellow) in any way depends on a belief, let alone knowledge, that he is in such special conditions. They merely show that direct perceptual knowledge is the result of exercising a skill ~ an identificatory skill ~ that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do. So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do. They need normal conditions to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions in order to do what being in these conditions enables them to do.

This means, of course, that for a direct realist direct perceptual knowledge is fallible and corrigible. Whether ‘S’ sees that ‘a’ is ‘F’ depends on his being caused to believe that ‘a’ is ‘F’ in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then ‘S’ sees (hence, knows) that ‘a’ is ‘F’. If they are not, he does not. Whether or not ‘S’ knows depends, then, not on what else, if anything, ‘S’ believes, but on the circumstances in which ‘S’ comes to believe. This being so, this type of direct realism is a form of externalism. Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed by way of justification for such knowledge has been reduced. Background knowledge ~ and, in particular, knowledge that the experience is of a kind that suffices for knowing ~ is not needed.

This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived; that is what makes them foundational. Even if they are brittle, as foundations sometimes are, everything else rests upon them.

The theory of representative realism holds that (1) there is a world whose existence and nature are independent of us and of our perceptual experience of it; (2) perceiving an object located in that external world necessarily involves causally interacting with that object; and (3) the information acquired in perceiving an object is indirect: it is information most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself.

Clause (1) makes representative realism a species of realism.

Clause (2) makes it a species of causal theory of perception.

Clause (3) makes it a species of representative, as opposed to direct, realism.

Traditionally, representative realism has been allied with an act/object analysis of sensory experience, and the act/object analysis is a major plank in traditional arguments for representative realism. According to the act/object analysis, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, Meinongian objects (which may not exist or have any form of being), and, more commonly, private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For representative realists, objects of perception (of which we are ‘indirectly aware’) are distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may simply treat objects of perception as existing objects of experience.

Realism in any area of thought is the doctrine that certain entities allegedly associated with that area are indeed real. Common-sense realism ~ sometimes called ‘realism’ without qualification ~ says that ordinary things like chairs and trees and people are real. Scientific realism says that theoretical posits like electrons and fields of force and quarks are equally real. And psychological realism says mental states like pains and beliefs are real. Realism can be upheld ~ and opposed ~ in all such areas, as it can with differently or more finely drawn provinces of discourse: for example, with discourse about colours, about the past, about possibility and necessity, or about matters of moral right and wrong. The realist in any such area insists on the reality of the entities in question in the discourse.

If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition, for they are legion. Some opponents deny that there are any distinctive posits associated with the area of discourse under dispute: a good example is the emotivist doctrine that moral discourse does not posit values but serves only, like applause and exclamation, to express feelings. Other opponents deny that the entities posited by the relevant discourse exist, or, at least, exist independently of our thinking about them: here the standard example is ‘idealism’. And others again insist that the entities associated with the discourse in question are tailored to our human capacities and interests and, to that extent, are as much a product of invention as a matter of discovery.

Nevertheless, one use of terms such as ‘looks’, ‘seems’ and ‘feels’ is to express opinion. ‘It looks as if the Labour Party will win the next election’ expresses an opinion about the party’s chances and does not describe a particular kind of perceptual experience. We can, however, use such terms to describe perceptual experience divorced from any opinion to which the experience may incline us. A straight stick half-immersed in water looks bent, and does so to people completely familiar with this illusion, who have, therefore, no inclination to hold that the stick is in fact bent. Such uses of ‘looks’, ‘seems’, ‘tastes’, and so forth are commonly called ‘phenomenological’.

The act/object theory holds that the sensory experience reported by sentences employing these phenomenological uses is a matter of being directly acquainted with something which actually bears the apparent property: when something looks red to me, I am directly acquainted with a red expanse (in my visual field); when something tastes bitter to me, I am directly acquainted with a sensation with the property of being bitter; and so on. (If you do not understand the term ‘directly acquainted’, stick a pin into your finger. The relation you will then bear to your pain, as opposed to the relation of concern you might bear to another’s pain when told about it, is an instance of direct acquaintance in the intended sense.)

The act/object account of sensory experience combines with various considerations traditionally grouped under the heading of the argument from illusion to provide arguments for representative realism, or more precisely for the clause in it which contends that our information about the world comes indirectly ~ that what we are directly acquainted with is not an aspect of the world but an aspect of our mental, sensory response to it. Consider, for instance, the aforementioned refractive illusion, that of a straight stick in water looking bent. The act/object account holds that in this case we are directly acquainted with a bent shape. This shape, so the argument runs, cannot be the stick, as the stick is straight, and thus must be a mental item, commonly called a sense-datum. And, in general, sense-data ~ visual, tactual, etc. ~ are held to be the objects of direct acquaintance. Perhaps the most striking use of the act/object analysis to bolster representative realism turns on what modern science tells us about the fundamental nature of the physical world. Modern science tells us that the objects of the physical world in and around us are literally made up of enormously many, widely separated, tiny particles whose nature can be given in terms of a small number of properties like mass, charge, spin and so on. (These properties are commonly called the primary qualities. The distinction between primary and secondary qualities is a metaphysical distinction between qualities which really belong to objects in the world and qualities which only appear to belong to them, or which human beings only believe to belong to them, because of the effects those objects produce in human beings, typically through the sense organs ~ qualities, that is, which do not hold in nature, but are produced in or contributed by human beings in their interaction with a world which really contains only atoms of certain kinds in a void.) To think that some objects in the world are coloured, or sweet, or bitter, is to attribute to objects qualities which, on this view, they do not actually possess; colour, sweetness and bitterness are imputed to objects rather than possessed by them. But, of course, that is not how the objects look to us, not how they present themselves to our senses: they look continuous and coloured. What, then, can these coloured expanses with which we are directly acquainted be, other than mental items ~ sense-data?

Two objections dominate the literature on representative realism. One goes back to Berkeley (1685-1753) and is that representative realism leads straight to scepticism about the external world; the other is that the act/object account of sensory awareness should be rejected in favour of an adverbial account.

Traditional representative realism is a ‘veil of perception’ doctrine, in Bennett’s (1971) phrase. Locke’s (1632-1704) idea was that the physical world was revealed by science to be in essence colourless, odourless, tasteless and silent, and that we perceive it by, to put it metaphorically, throwing a veil over it by means of our senses. It is the veil we see, in the strictest sense of ‘see’. This does not mean that we do not really see the objects around us. It means that we see an object in virtue of seeing the veil ~ the sense-data ~ causally related in the right way to that object. An obvious question to ask, therefore, is what justifies us in believing that there is anything behind the veil; and, if we are somehow justified in believing that there is something behind the veil, how can we be confident of what it is like?

One intuition that lies at the heart of the realist’s account of objectivity is that, in the last analysis, the objectivity of a belief is to be explained by appeal to the independent existence of the entities it concerns: epistemological objectivity, that is, is to be analysed in terms of ontological notions of objectivity. A judgement or belief is epistemologically objective, on this view, if and only if it stands in some specified relation to an independently existing, determinate reality. Frege (1848-1925), for example, believed that arithmetic could comprise objective knowledge only if the numbers it refers to, the propositions it consists of, the functions it employs, and the truth-values it aims at are all mind-independent entities. And conversely, within a realist framework, to show that the members of a given class of judgements are merely subjective, it is sufficient to show that there exists no independent reality that those judgements characterize or refer to.

Thus, it is commonly argued that if values are not part of the fabric of the world, then moral subjectivism is inescapable. For the realist, then, the epistemological notion of objectivity is to be elucidated by appeal to the existence of determinate facts, objects, properties, events and the like, which exist or obtain independently of any cognitive access we may have to them. And one of the strongest impulses toward Platonic realism ~ the theoretical commitment to the existence of abstract objects like sets, numbers and propositions ~ stems from the widespread belief that only if such things exist in their own right can we allow that logic, arithmetic and science are indeed objective. Though ‘Platonist’ realism in a sense accounts for mathematical knowledge, it postulates such a gulf between both the ontology and the epistemology of science and that of mathematics that realism is often said to make the applicability of mathematics in natural science an inexplicable mystery.

This picture is rejected by anti-realists. The possibility that our beliefs and theories are objectively true is not, according to them, capable of being rendered intelligible by invoking the nature and existence of reality as it is in and of itself. If our conception of epistemological objectivity is minimal, requiring only ‘presumptive universality’, then alternative, non-realist analyses of it can seem possible ~ and even attractive. Such analyses have construed the objectivity of an arbitrary judgement as a function of its coherence with other judgements, of its possession of grounds that warrant it, of its conformity to the a priori rules that constitute understanding, of its verifiability (or falsifiability), or of its permanent presence in the mind of God. One intuition common to a variety of different anti-realist theories is that for our assertions to be objective, for our beliefs to comprise genuine knowledge, those assertions and beliefs must be, among other things, rational, justifiable, coherent, communicable and intelligible. But it is hard, the anti-realist claims, to see how such properties as these can be explained by appeal to entities as they are in and of themselves. On the contrary, according to most forms of anti-realism, it is only on the basis of ontologically subjective notions like ‘the way reality seems to us’, ‘the evidence that is available to us’, ‘the criteria we apply’, ‘the experience we undergo’ or ‘the concepts we have acquired’ that the epistemological objectivity of our beliefs can possibly be explained.

Internalists hold that the reason by which a belief is justified must be accessible in principle to the subject holding that belief; externalists deny this requirement, proposing that it makes ‘knowing’ too difficult to achieve in most normal contexts. The internalist-externalist debate is sometimes also viewed as a debate between those who think that knowledge can be naturalized (externalists) and those who do not (internalists). Naturalists hold that the evaluative notions used in epistemology can be explained in terms of non-evaluative concepts ~ for example, that justification can be explained in terms of something like reliability. They deny a special normative realm of language that is theoretically different from the kinds of concepts used in factual scientific discourse. Non-naturalists deny this and hold to an essential difference between the normative and the factual: the former can never be derived from or constituted by the latter. So internalists tend to think of reason and rationality as non-explicable in natural, descriptive terms, whereas externalists think such an explanation is possible.

Although reasons differ as to what we take to be the truth, the sceptic uses an argumentative strategy to show that we do not genuinely have knowledge and that we should therefore suspend judgement. But, unlike the sceptics, many other philosophers maintain that more than one of the alternatives is acceptable and can constitute genuine knowledge. Other philosophers, again, have invoked hypothetical sceptics in their work to explore the nature of knowledge. These philosophers did not doubt that we have knowledge, but thought that by testing knowledge as severely as one can, one becomes clearer about what counts as knowledge and gains greater insight. Hence there are underlying differences in what counts as knowledge for the sceptic and for other philosophers. Traditional epistemology has been occupied with debates of this kind, and they have often led to dogmatism. Various types of beliefs were proposed as candidates for sceptic-proof knowledge, for example, those beliefs regarded by many as immediately given and so immune to doubt. What they all had in common was the view that empirical knowledge begins with the data of the senses, that this starting point is safe from scepticism, and that a further superstructure of knowledge is to be built on this firm basis.

It might well be observed that this reply to scepticism fares better as a justification for believing in the existence of external objects than as a justification of the views we have about their nature. It is incredible that nothing independent of us is responsible for the manifest patterns displayed by our sense-data, but granting this leaves open many possibilities about the nature of the hypothesized external reality. Direct realists often make much of the apparent advantage that their view has on the question of the nature of the external world. The fact of the matter is, though, that it is much harder to arrive at tenable views about the nature of external reality than it is to defend the view that there is an external reality of some kind or other. The history of human thought about the nature of the external world is littered with what are now seen (with the benefit of hindsight) to be egregious errors ~ the four element theory, phlogiston, the crystal spheres, vitalism, and so on. It can hardly be an objection to a theory that it makes the question of the nature of external reality much harder than the question of its existence.

The way we talk about sensory experience certainly suggests an act/object view. When something looks thus and so in the phenomenological sense, we naturally describe the nature of our sensory experience by saying that we are acquainted with a thus and so ‘given’. But suppose that this is a misleading grammatical appearance, engendered by the linguistic propriety of forming complete, putatively referring expressions like ‘the bent shape in my visual field’, and that there is no more a bent shape in existence for the representative realist to contend to be a mental sense-datum than there is a bad limp in existence when someone has, as we say, a bad limp. When someone has a bad limp, they limp badly; similarly, according to an adverbial theorist, when, as we naturally put it, I am aware of a bent shape, we would better express the way things are by saying that I sense (bent shape)-ly. Where the act/object theorist takes the nature of the sensory experience to be given by a feature of the object of experience, the adverbial theorist takes it to be given by a mode of sensing. (The decision between the act/object and adverbial theories is a hard one.)

In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,

(1) Rod is experiencing a pink square

is rewritten as

Rod is experiencing (pink square)-ly

This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience which is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.

According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness ~ the event of experiencing that object. The objects of experience are supposed to be whatever it is that the experiences represent. Act/object theorists may differ on the nature of the objects of experience: they have been treated as properties, as objects which may not exist or have any form of being, and, more commonly, as private mental objects with sensory qualities. The term ‘sense-data’ is now usually applied to the last of these, though it has also been used more broadly for objects of sensory experience generally. Finally, in the terms of representative realism, the objects of perception of which we are ‘directly aware’ are these objects of sensory experience rather than external physical objects.

As noted above, representative realism is traditionally allied with the act/object theory. But we can also approach the debate between representative realism and direct realism in terms of information processing. Mackie (1976) argues that Locke (1632-1704) can be read as approaching the debate in this way. Consider watching a game on television. My senses, in particular my eyes and ears, ‘tell’ me that Carlton is winning. What makes this possible is the existence of a long and complex causal chain of electromagnetic radiation running from the game through the television cameras, various cables, the television screen, and the space between my eyes and the screen. Each stage of this process carries information about preceding stages, in the sense that the way things are at a given stage depends on the way things are at preceding stages. Were it otherwise, the information would not be transferred from the game to my brain: there cannot be the needed systematic covariance between the state of my brain and the state of the game unless such covariance obtains between intermediate members of the long causal chain. For instance, if the state of my retina did not systematically covary with the state of the television screen before me, my optic nerve would have, so to speak, nothing to go on to tell my brain about the screen, and so in turn nothing to go on to tell my brain about the game. There is no ‘information at a distance’.
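The point about covariance along the causal chain can be put schematically; the following is only an illustrative sketch, and the notation is not Mackie’s or Locke’s. Let $S_0$ be the state of the game and $S_1, \ldots, S_n$ the states of the successive links (camera, cable, screen, retina, optic nerve, brain), and suppose each link’s state is, roughly, a function of the preceding one:
\[
S_i = f_i(S_{i-1}), \quad i = 1, \ldots, n, \qquad \text{so that} \qquad S_n = f_n \circ \cdots \circ f_1(S_0).
\]
The brain state $S_n$ then covaries with the game state $S_0$ only because covariance obtains at every intermediate link; if any $f_i$ were insensitive to its input (a covered screen, a severed optic nerve), the covariance between $S_n$ and $S_0$ would be lost ~ which is just the point that there is no ‘information at a distance’.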

I am perceptually aware of only a few of the stages in this transmission of information between game and brain. Much of what happens between game and brain I am quite ignorant of; some of what happens I know about from books; but some of what happens I am perceptually aware of, namely the images on the screen. I am also perceptually aware of the game ~ otherwise I could not be said to watch the game on television. Now my perceptual awareness of the game depends on my perceptual awareness of the screen: the former goes by means of the latter. In saying this I am not saying that I go through some sort of internal monologue like ‘Such and such images on the screen are moving thus and thus; therefore, Carlton is attacking the goal’. Indeed, if you suddenly covered the screen with a cloth and asked me (1) to report on the images, and (2) to report on the game, I might well find it easier to report on the game than on the images. But that does not mean that my awareness of the game does not go by way of my awareness of the images on the screen. It shows only that I am more interested in the game than in the screen, and so am storing beliefs about it in preference to beliefs about the screen.

We can now see how to elucidate representative realism independently of the debate between act/object and adverbial theorists about sensory experience. Our initial statement of representative realism talked of the information acquired in perceiving an object being most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself. On the act/object, sense-data approach, what is held to make this true is the fact that what we are immediately aware of is a mental sense-datum. But instead, representative realists can put their view this way: just as awareness of the game goes by means of awareness of the screen, so awareness of the screen goes by way of awareness of experience. And, in general, when subjects perceive objects, their perceptual awareness always goes by means of awareness of experience.

Why believe such a view? Because of the point noted earlier: the world as provided by our senses is very different from any picture provided by modern science. It is so different, in fact, that it is hard to grasp what might be meant by insisting that we are in epistemologically direct contact with the world.

An argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument, and they must be distinguished carefully. Some differ in their premisses (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). It is important to distinguish the versions of direct realism, since not every version need be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism would be that we sometimes directly perceive physical objects and their properties: we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-data theorists should want. Many of the philosophers who objected to direct realism would prefer to express what they were objecting to in terms of a technical and philosophically controversial concept such as acquaintance. Using such a notion, we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.

We know things by experiencing them, and knowledge of acquaintance (Russell changed the preposition to ‘by’) is epistemically prior to, and has a relatively higher degree of epistemic justification than, knowledge about things. Indeed, sensation has ‘the one great value of trueness or freedom from mistake’.

A thought (using that term broadly, to mean any mental state) constituting knowledge of acquaintance with a thing is relatively causally proximate to sensations caused by that thing; a thought constituting knowledge about the thing is more causally distant, being separated from the thing and the experience of it by processes of attention and inference. At the limit, if a thought is maximally of the acquaintance type, it is the first mental state occurring in the causal chain originating in the object to which the thought refers, i.e., it is a sensation. The things we have knowledge of acquaintance with include ordinary objects in the external world, such as the Sun.

Grote contrasted the imagistic thoughts involved in knowledge of acquaintance with things with the judgements involved in knowledge about things, suggesting that the latter but not the former are contentful mental states. Elsewhere, however, he suggested that every thought capable of constituting knowledge of or about a thing involves a form, idea, or what we might call conceptual propositional content, referring the thought to its object. Whether contentful or not, thoughts constituting knowledge of acquaintance with a thing are relatively indistinct, although this indistinctness does not imply incommunicability. Thoughts constituting knowledge about a thing, by contrast, are relatively distinct, as a result of ‘the application of notice or attention’ to the ‘confusion or chaos’ of sensation. Grote did not have an explicit theory of reference, the relation by which a thought is of or about a specific thing. Nor did he explain how thoughts can be more or less indistinct.

Helmholtz (1821-94) held unequivocally that all thoughts capable of constituting knowledge, whether ‘knowledge which has to do with notions’ or ‘mere familiarity with phenomena’, are judgements or, we may say, have conceptual propositional contents. Where Grote saw a difference between distinct and indistinct thoughts, Helmholtz found a difference between precise judgements which are expressible in words and equally precise judgements which, in principle, are not expressible in words, and so are not communicable.

James (1842-1910), however, made a genuine advance over Grote and Helmholtz by analysing the reference relations holding between a thought and the specific thing of or about which it is knowledge. In fact, he gave two different analyses. On both analyses, a thought constituting knowledge about a thing refers to, and is knowledge about, ‘a reality, whenever it actually or potentially terminates in’ a thought constituting knowledge of acquaintance with that thing. The two analyses differ in their treatments of knowledge of acquaintance. On James’s first analysis, reference in both sorts of knowledge is mediated by causal chains: a thought constituting pure knowledge of acquaintance with a thing refers to, and is knowledge of, ‘whatever reality it directly or indirectly operates on and resembles’. The concepts of a thought ‘operating on’ a thing or ‘terminating in’ another thought are causal, but where Grote found chains of efficient causation connecting thought and referent, James found teleology and final causes. On James’s later analysis, the reference involved in knowledge of acquaintance with a thing is direct: a thought constituting knowledge of acquaintance with a thing has that thing as a constituent, and the thing and the experience of it are identical.

James further agreed with Grote that pure knowledge of acquaintance with things, e.g., sensory experience, is epistemically prior to knowledge about things. While all thoughts constituting knowledge about things are fallible, and their justification is augmented by their mutual coherence, James was unclear about the precise epistemic status of knowledge of acquaintance. At times, thoughts constituting pure knowledge of acquaintance are said to possess ‘absolute veritableness’ and ‘the maximal conceivable truth’, suggesting that such thoughts are genuinely cognitive and that they provide an infallible epistemic foundation. At other times, such thoughts are said not to bear truth-values, suggesting that ‘knowledge’ of acquaintance is not genuine knowledge at all, but only a non-cognitive necessary condition of genuine knowledge, that is to say, of knowledge about things.

What is more, Russell (1872-1970) agreed with James that knowledge of things by acquaintance ‘is essentially simpler than any knowledge of truths, and logically independent of knowledge of truths’, and that the mental states involved when one is acquainted with things do not have propositional contents. Russell’s reasons seem to have been similar to James’s: conceptually unmediated reference to particulars is necessary for understanding any proposition mentioning a particular and, if scepticism about the external world is to be avoided, some particulars must be directly perceived. Russell vacillated about whether or not the absence of propositional content renders knowledge by acquaintance incommunicable.

Russell agreed with James that different accounts should be given of reference as it occurs in knowledge by acquaintance and in knowledge about things, and that in the former case reference is direct. But Russell objected on a number of grounds to James’s causal account of the indirect reference involved in knowledge about things. Russell gave a descriptive rather than a causal analysis of that sort of reference: a thought is about a thing when the content of the thought involves a definite description uniquely satisfied by the thing referred to. Accordingly, he preferred to speak of knowledge of things by description, rather than of knowledge about things.

Russell advanced beyond Grote and James by explaining how thoughts can be more or less articulate and explicit. If one is acquainted with a complex thing without being aware of or acquainted with its complexity, the knowledge one has by acquaintance with that thing is vague and inexplicit. Reflection and analysis can lead one to distinguish constituent parts of the object of acquaintance and to obtain progressively more distinct, explicit, and complete knowledge about it.

Because one can interpret the relation of acquaintance or awareness as one that is not epistemic, i.e., not a kind of propositional knowledge, it is important to distinguish views about what the objects of perception are, read as ontological theses, from a view one might call epistemological direct realism: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. Direct realism, as a view about what the objects of perception are, is a type of realism, since it is assumed that these objects exist independently of any mind that might perceive them; it thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representative realism’, in which there is some non-physical intermediary ~ usually called a ‘sense-datum’ or a ‘sense impression’ ~ that must first be perceived or experienced in order to perceive the object that exists independently of this perception. According to critical realists, such an intermediary need not be perceived ‘first’ in a temporal sense, but it is a necessary ingredient which suggests to the perceiver an external reality, or which offers the occasion on which to infer the existence of such a reality. Direct realism, however, denies the need for any recourse to mental go-betweens in order to explain our perception of the physical world.

This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism. For such a refutation we must go beyond a theory that claims how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.

The external world, as philosophers have used the term, is not some distant planet external to Earth. Nor is the external world, strictly speaking, a world. Rather, the external world consists of all those objects and events which exist external to perceivers. So the table across the room is part of the external world, and so is the room; so too are the table’s brown colour and roughly rectangular shape. Similarly, if the table falls apart when a heavy object is placed on it, the event of its disintegration is a part of the external world.

One object external to and distinct from any given perceiver is any other perceiver. So, relative to one perceiver, every other perceiver is a part of the external world. However, another way of understanding the external world results if we think of the objects and events external to and distinct from every perceiver. So conceived, the set of all perceivers makes up a vast community, with all of the objects and events external to that community making up the external world. For what follows we will suppose that perceivers are entities which occupy physical space, if only because they are partly composed of items which take up physical space.

What, then, is the problem of the external world? Certainly it is not whether there is an external world; this much is taken for granted. Instead, the problem is an epistemological one which, in rough approximation, can be formulated by asking whether, and if so how, a person gains knowledge of the external world. So understood, the problem seems to admit of an easy solution: there is knowledge of the external world, which persons acquire primarily by perceiving objects and events which make up the external world.

However, many philosophers have found this easy solution problematic. The very statement of the problem of the external world will itself be altered once we consider the main theses against the easy solution.

One way in which the easy solution has been further articulated is in terms of epistemological direct realism. This theory is realist insofar as it claims that objects and events in the external world, along with many of their various features, exist independently of and are generally unaffected by perceivers and the acts of perception in which they engage. And the theory is epistemologically direct since it also claims that in perception people often, indeed typically, acquire immediate non-inferential knowledge of objects and events in the external world. It is on this latter point that it is thought to face serious problems.

The main reason for this is that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. It is claimed, for example, that I do not gain immediate non-inferential perceptual knowledge that there is a brown and rectangular table before me, because I would not know such a proposition unless I knew that something then appeared brown and rectangular. Hence, knowledge of the table is dependent upon knowledge of how it appears. Alternatively expressed, if there is knowledge of the table at all, it is indirect knowledge, secured only if the proposition about the table may be inferred from propositions about appearances. If so, epistemological direct realism is false.

This argument suggests a new way of formulating the problem of the external world:

Problem of the external world (first formulation): Can we have knowledge of propositions about objects and events in the external world based upon propositions which describe how the external world appears, i.e., upon appearances?

Unlike our original formulation of the problem of the external world, this formulation does not admit of an easy solution. Instead, it has seemed to many philosophers that it admits of no solution at all, so that scepticism regarding the external world is the only remaining alternative.

A second way of articulating the easy solution is perceptual direct realism. This theory is realist in just the way described earlier, but it adds that objects and events in the external world are typically directly perceived, as are many of their features such as their colours, shapes, and textures.

Often perceptual direct realism is developed further by simply adding epistemological direct realism to it. Such an addition is supported by claiming that direct perception of objects in the external world provides us with immediate non-inferential knowledge of such objects. Seen in this way, perceptual direct realism is supposed to support epistemological direct realism; strictly speaking, however, they are independent doctrines, and one might consistently, perhaps even plausibly, hold one without also accepting the other.

Direct perception is perception which is not dependent on some other perception. The main opposition to the claim that we directly perceive external objects comes from indirect or representative realism. That theory holds that whenever an object in the external world is perceived, some other object is also perceived, namely a sensum ~ a phenomenal entity of some sort. Further, one would not perceive the external object if one were to fail to perceive the sensum. In this sense the sensum is a perceived intermediary, and the perception of the external object is dependent on the perception of the sensum. For such a theory, perception of the sensum is direct, since it is not dependent on some other perception, while perception of the external object is indirect. More generally, for the indirect realist, all directly perceived entities are sensa. On the other hand, those who accept perceptual direct realism claim that perception of objects in the external world is typically direct, since that perception is not dependent on perceived intermediaries such as sensa.

It has often been supposed, however, that the argument from illusion suffices to refute all forms of perceptual direct realism. The argument from illusion is actually a family of different arguments rather than one argument. Perhaps the most familiar argument in this family begins by noting that objects appear differently to different observers, and even to the same observers on different occasions or in different circumstances. For example, a round dish may appear round to a person viewing it from directly above and elliptical to another viewing it from one side. As one changes position the dish will appear to have still different shapes, more and more elliptical in some cases, closer and closer to round in others. In each such case, it is argued, the observer directly sees an entity with that apparent shape. Thus, when the dish appears elliptical, the observer is said to see directly something which is elliptical. Certainly this elliptical entity is not the top surface of the dish, since that is round. This elliptical entity, a sensum, is thought to be wholly distinct from the dish.

In seeing the dish from straight above, it appears round, and it might be thought that one then directly sees the dish rather than a sensum. Yet relativity intrudes here as well: the dish will appear different in size as one views it from different distances. So even if in all of these cases the dish appears round, it will also appear to have many different diameters. Hence, in these cases as well, the observer is said to directly see some sensum, and not the dish.

This argument concerning the dish can be generalized in two ways. First, more or less the same argument can be mounted for all other cases of seeing, and across the full range of sensible qualities ~ textures and colours in addition to shapes and sizes. Second, one can utilize related relativity arguments for the other sense modalities. With the argument thus completed, one will have reached the conclusion that in all cases of non-hallucinatory perception the observer directly perceives a sensum, and not an external physical object. Presumably in cases of hallucination a related result holds, so that one reaches the fully general result that in all cases of perceptual experience what is directly perceived is a sensum or group of sensa, and not an external physical object. Perceptual direct realism, therefore, is deemed false.

Yet, even if perceptual direct realism is refuted, this by itself does not generate a problem of the external world. We need to add that if no person ever directly perceives an external physical object, then no person ever gains immediate non-inferential knowledge of such objects. Armed with this additional premise, we can conclude that if there is knowledge of external objects, it is indirect and based upon immediate knowledge of sensa. We can then formulate the problem of the external world in another way:

Problem of the external world (second formulation): Can we have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa?

It is worth noting the difference between the first and second formulations of the problem of the external world. On either formulation, we have knowledge of the external world only if propositions about objects and events in the external world are inferable from some more basic class of propositions ~ in the first case from propositions about appearances, in the second from propositions about directly perceived sensa.

Some philosophers have thought that the situation would be different if analytic phenomenalism were true. Analytic phenomenalism is the doctrine that every proposition about objects and events in the external world is fully analysable into, and thus is equivalent in meaning to, a group of propositions about appearances. The number of propositions about appearances making up the analysis of any single proposition about objects and events in the external world would likely be enormous, perhaps indefinitely large. Nevertheless, analytic phenomenalism might seem to help with the first formulation of the problem, since it promises the required deduction of propositions about objects and events in the external world from propositions about appearances. But because there are indefinitely many propositions about appearances in the analysis of each proposition about objects and events in the external world, the inference is apt to be inductive, even granting the truth of analytic phenomenalism. Moreover, most of the propositions about appearances into which we might hope to analyse propositions about the external world would be complex subjunctive conditionals, such as that expressed by ‘If I were to seem to see something red, round and spherical, and if I were to seem to try to taste what I seem to see, then most likely I would seem to taste something sweet and slightly tart’. But propositions about appearances of this complex sort will not typically be immediately known, and thus knowledge of propositions about objects and events in the external world will not generally be based upon immediate knowledge of such propositions about appearances.

Consider the propositions about appearances expressed by ‘I seem to see something red, round, and spherical’ and ‘I seem to taste something sweet and slightly tart’. To infer cogently from these propositions to that expressed by ‘There is an apple before me’ we need additional information, such as that expressed by ‘Apples generally cause visual appearances of redness, roundness, and spherical shape, and gustatory appearances of sweetness and tartness’. With this additional information the inference is a good one, and it is likely to be true that there is an apple there, relative to those premisses. The cogency of the inference, however, depends squarely on the additional premiss; relative only to the stated propositions about appearances, it is not highly probable that there is an apple there.
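The structure of this point can be glossed in rough probabilistic terms; the formalization is added here for clarity and is not part of the original argument’s wording. Let $E$ be the conjunction of the appearance propositions, $C$ the causal premiss about apples, and $A$ the proposition that there is an apple before me. The claim is that
\[
\Pr(A \mid E \wedge C) \text{ is high, while } \Pr(A \mid E) \text{ alone need not be.}
\]
The cogency of the inference from appearances to the apple rests on $C$, and $C$ is exactly the sort of premiss whose justification is in question in what follows.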

Moreover, there is good reason to think that analytic phenomenalism is false, for every proposed translation of propositions about objects and events in the external world into propositions about appearances has failed. Nor is ordinary inductive inference of much help here. Enumerative induction is of no use in this regard, for that is an inference from premisses about observed objects in a certain class having some properties ‘F’ and ‘G’ to unobserved objects in the same class having those properties; but propositions about appearances concern appearances, while propositions about the external world concern external objects and events, so premisses and conclusion do not concern items of the same class. The most likely inductive inference to consider, then, is a causal one: we infer from certain effects, described by propositions about appearances, to their likely causes, described by propositions about objects and events in the external world. But here, too, the inference is apt to prove problematic. In evaluating the claim that such an inference constitutes a legitimate and independent form of argument, one must ask whether it is a contingent fact that at least most phenomena have explanations, and that the simplest explanation is usually the correct one; it is difficult to avoid the conclusion that, if this is true, it is an empirical fact which could itself be discovered only by inference to the best explanation.

Defenders of direct realism have sometimes appealed to an inference to the best explanation to justify propositions about objects and events in the external world: we might say that the best explanation of the appearances is that they are caused by external objects. However, even if this is true, as no doubt it is, it is unclear how establishing this general hypothesis helps to justify specific propositions about objects and events in the external world, such as that these particular appearances are caused by a red apple.

The point here is a general one: cogent inductive inferences from propositions about appearances to propositions about objects and events in the external world are available only with some added premiss expressing the requisite causal relation, or perhaps some other premiss describing some other sort of correlation between appearances and external objects. So there is no reason to think that even indirect knowledge of the external world can be secured by inference from propositions about appearances alone; if so, epistemological direct realism must be denied. And since deductive and inductive inferences from propositions about appearances to propositions about objects and events in the external world seem to exhaust the options, no solution to the first formulation of the problem is at hand. Unless some solution is found, it would appear that scepticism concerning knowledge of the external world is the most reasonable position to take.

What, then, of the second formulation of the problem: can we have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa? Broadly speaking, there are two alternatives, perceptual indirect realism and perceptual phenomenalism. The contrast between them is that perceptual phenomenalism rejects realism outright and holds instead that (1) physical objects are collections of sensa, (2) in all cases of perception at least one sensum is directly perceived, and (3) to perceive a physical object one directly perceives some of the sensa which are constituents of the collection making up that object.

Proponents of each of these positions try, in different ways, to show how knowledge of propositions about objects and events in the external world can be based upon propositions about directly perceived sensa. Is either of them able to do so? The answer has seemed to most philosophers to be ‘no’, for in general indirect realists and phenomenalists rely on strategies we have already considered and rejected.

In thinking about the possibilities here, we need to bear in mind that the evidential base consists of propositions which describe presently directly perceived sensa. Indirect realism typically claims that the inference from presently directly perceived sensa to external objects is an inductive one, specifically a causal inference from effects to causes. An inference of this sort will be perfectly cogent provided we can use a premiss which specifies that physical objects of a certain type are causally correlated with sensa of the sort currently directly perceived. Such a premiss will be justified, if at all, solely on the basis of propositions describing presently directly perceived sensa. For the indirect realist, one certainly never directly perceives the causes of sensa; so if one knows that, say, apples typically cause such-and-such visual sensa, one knows this only indirectly, on the basis of knowledge of sensa. But no group of propositions about directly perceived sensa by itself supports any inference to causal correlations of this sort. Consequently, indirect realists are in no position to justify the needed causal premiss, and so cannot show how knowledge of external objects, indirect and based upon immediate knowledge of sensa, is to be had.

Phenomenalists have often supported their position, in part, by noting the difficulties facing indirect realism, but phenomenalism is no better off. Phenomenalism construes physical objects as collections of sensa, so to infer a proposition about a physical object from propositions about directly perceived sensa is to infer a proposition about a collection from propositions about constituent members of the collection ~ an inference, though not a causal one. Nonetheless, the inference in question will require a premiss that such-and-such directly perceived sensa are constituents of some collection ‘C’, where ‘C’ is some physical object such as an apple. The problem comes with trying to justify such a premiss. To do this, one will need some plausible account of what is meant by claiming that physical objects are collections of sensa. To explicate this idea, however, phenomenalists have typically turned to analytic phenomenalism: physical objects are collections of sensa in the sense that propositions about physical objects are analysable into propositions about sensa. And analytic phenomenalism, we have seen, has been discredited.

If, then, neither formulation of the problem can be easily solved, scepticism about the external world is a doctrine we would be forced to adopt. One might even say that it is here that we locate the real problem of the external world: how can we avoid being forced into accepting scepticism?

One way of avoiding scepticism is to question the arguments which lead to the two formulations of the problem. The crucial question is whether any part of the argument from illusion really forces us to abandon perceptual direct realism. To help see that the answer is ‘no’, we may note that a key premiss in the relativity argument links how something appears with direct perception: the fact that the dish appears elliptical is supposed to entail that one directly perceives something which is elliptical. But is there such an entailment? Certainly we do not think that the proposition expressed by ‘The book appears worn and dusty and more than two hundred years old’ entails that the observer directly perceives something which is worn and dusty and more than two hundred years old. And there are countless other examples like this one, where we will resist the inference from a property ‘F’ appearing to someone to the claim that ‘F’ is instantiated in some entity.
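The disputed step can be written out as an inference schema; the symbolization is added here, not part of the original text, and is offered only to make the entailment claim explicit:
\[
x \text{ appears } F \text{ to } S \;\;\Rightarrow\;\; \exists y \, \big( S \text{ directly perceives } y \,\wedge\, F(y) \big).
\]
The book example is a counterexample to the schema when $F$ is ‘worn and dusty and more than two hundred years old’; the question for the proponent of the argument from illusion is why the schema should nevertheless hold when $F$ is ‘elliptical’.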

Proponents of the argument from illusion might reply that the inference they favour works only for certain adjectives, specifically for adjectives referring to non-relational sensible qualities such as colour, taste, shape, and the like. Such a move, however, requires an argument which shows why the inference works in these restricted cases and fails in all others. No such argument has ever been provided, and it is difficult to see what it might be.

If the argument from illusion is defused, the major threat to perceptual direct realism is removed: the theory that we gain knowledge of objects and events in the external world primarily by perceiving them, and that objects and events in the external world, along with many of their characteristic features, are typically directly perceived. Hence, there will no longer be any real motivation for the claim that scepticism concerning knowledge of the external world is the most reasonable position to take. Of course, even if perceptual direct realism is reinstated, this does not by itself dispose of the other main argument, namely that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. That problem might arise even for one who accepts perceptual direct realism. But there is reason to be suspicious of the argument that one would not know that one is seeing something blue if one failed to know that something looked blue. There is, in this sense, a dependence of the former on the latter; what is not clear is whether the dependence is epistemic or semantic. It is the latter if, in order to understand what it is to see something blue, one must also understand what it is for something to look blue. This may be true even when the belief that one is seeing something blue is not epistemically dependent on, or based upon, the belief that something looks blue. Merely claiming that there is a dependence relation does not discriminate between epistemic and semantic dependence. Moreover, there is reason to think it is not an epistemic dependence, for in general observers rarely have beliefs about how objects appear, yet this fact does not impugn their knowledge that they are seeing, e.g., blue objects.

Along with ‘consciousness’, experience is the central focus of the philosophy of mind. Experience is easily thought of as a stream of private events, known only to their possessor, and bearing at best a problematic relationship to any other events, such as happenings in an external world or the similar streams of other possessors. The stream makes up the conscious life of the possessor. On this picture there is a complete separation of mind and world, and in spite of great philosophical effort the gap, once opened, proves impossible to bridge; both ‘idealism’ and ‘scepticism’ are common outcomes. The aim of much recent philosophy, therefore, is to articulate a less problematic conception of experience, making it objectively accessible, so that the facts about how a subject experiences the world are in principle as knowable as the facts about how the same subject digests food. A beginning on this task may be made by observing that experiences have contents. ‘Content’ has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content; a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. ‘Content’ is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have; a representation’s content is just whatever it is that underwrites its semantic evaluation.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be a representation (have ‘content’), and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: theories that ground representation in (1) similarity, (2) covariance, (3) functional role, and (4) teleology.

Similarity theories hold that ‘r’ represents ‘χ’ in virtue of being similar to ‘χ’. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain must share properties with the things they represent: to represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalized and does not involve property sharing can be worked out, but it is not obvious how.

Covariance theories hold that r’s representing ‘χ’ is grounded in the fact that r’s occurrence covaries with that of ‘χ’. This is most compelling when one thinks about detection systems: the firing of a neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field. Dretske (1981) and Fodor (1987) have, in different ways, attempted to promote this idea into a general theory of content.
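The core condition can be stated schematically; the formalization is deliberately rough, and neither Dretske (1981) nor Fodor (1987) states it in just this way:
\[
r \text{ represents } \chi \quad\text{iff}\quad r \text{ occurs when, and only when, } \chi \text{ occurs (under the relevant normal conditions)}.
\]
In the visual-system example, $r$ is the firing of the neural structure and $\chi$ is the presence of a vertical line in the visual field; the covariance of the two is what is supposed to ground the representation.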

Teleological theories hold that ‘r’ represents ‘χ’ if it is r’s function to indicate (i.e., covary with) ‘χ’. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical and ahistorical theories of functions. Historical theories individuate functional states, hence content, in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was ‘learned’, or the way it evolved. A historical theory might hold that the function of ‘r’ is to indicate ‘χ’ only if the capacity to token ‘r’ was developed (selected, learned) because it indicates ‘χ’. Thus, a state physically indistinguishable from ‘r’ (physical states being ahistorical) but lacking r’s historical origins would not represent ‘χ’ according to historical theories.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. One use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined: according to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc., that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment ~ e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc. ~ not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that we seem able to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors, then knowledge of content should depend on knowledge of these factors ~ which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support externalism about justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

Atomistic theories take a representation’s content to be something that does not depend on that representation’s relation to other representations. What Fodor (1987) calls the crude causal theory, for example, takes a representation to be a COW ~ a mental representation with the same content as the word ‘cow’ ~ if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how COWs must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a COW if it behaves like a COW behaves in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls ‘short-armed’ functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by ‘external’ factors. Externalist theories (sometimes called non-individualistic theories, following Burge, 1979) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning ‘narrow’ content. If we assume some form of externalist theory is correct, then contents are, in the first instance, ‘wide’ contents, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Philosophers attached to externalist theories of content have therefore sometimes attempted to introduce ‘narrow’ content, i.e., an aspect or kind of content that is the same in internally equivalent systems. The simplest such theory is Fodor’s idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
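Fodor’s proposal about narrow content can be stated schematically; the notation is added here for exposition and is not Fodor’s own:
\[
N : \text{Context} \longrightarrow \text{WideContent}, \qquad W = N(c).
\]
On this picture, molecule-for-molecule identical cognitive systems share the narrow content $N$; their wide contents $W$ can nevertheless differ, because the external contexts $c$ to which the shared function is applied differ.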



The actions made rational by content-involving states are actions individuated in part by reference to the agent’s relations to things and properties in his environment: wanting to see a particular movie and believing that the building over there is a cinema showing it make rational the action of walking in the direction of that building. Similarly, for the fundamental case of a subject who has knowledge about his environment, a crucial factor in making rational the formation of particular attitudes is the way the world is around him. One may expect, then, that any theory that links the attribution of contents to states with rational intelligibility will be committed to the thesis that the content of a person’s states depends in part upon his relations to the world outside him. We can call this the thesis of externalism about content.

Externalism about content should steer a middle course. On the one hand, the relations of rational intelligibility involve not just things and properties in the world, but the way they are presented as being ~ an externalist should use some version of Frege’s notion of a mode of presentation. Indeed, many have argued that every referring expression has a ‘sense’, or ‘mode of presentation’ (sometimes ‘intension’ is used as well). After all, ‘is an equiangular triangle’ and ‘is an equilateral triangle’ pick out the same things not only in the actual world but in all possible worlds, and so have the same extension, the same intension and (arguably, from a causal point of view) pick out the same property, but they differ in the way these referents are presented to the mind. On the other hand, the externalist for whom considerations of rational intelligibility are pertinent to the individuation of content is likely to insist that we cannot dispense with the notion of something in the world ~ an object, property or relation ~ being presented in a certain way. If we dispense with the notion of something external being presented in a certain way, we are in danger of regarding attributions of content as having no consequences for how an individual relates to his environment, in a way that is quite contrary to our intuitive understanding of rational intelligibility.

Externalism comes in more and less extreme versions. Consider a thinker who sees a particular pear and thinks the thought ‘that pear is ripe’, where the demonstrative way of thinking of the pear expressed by ‘that pear’ is made available to him by his perceiving the pear. Some philosophers, including Evans (1982) and McDowell (1984), have held that the thinker would be employing a different perceptually based way of thinking were he perceiving a different pear. But externalism need not be committed to this. In the perceptual state that makes available the way of thinking, the pear is presented as being in a particular direction from the thinker, at a particular distance, and as having certain properties. A position will still be externalist if it holds that what is involved in the pear’s being so presented is the collective role of these components of content in making intelligible, in various circumstances, the subject’s relations to environmental directions, distances and properties of objects. This can be held without commitment to the object-dependence of the way of thinking expressed by ‘that pear’. This less strenuous form of externalism must, though, address the epistemological arguments offered in favour of the more extreme versions, to the effect that only they are sufficiently world-involving.

Externalism about content is a claim about dependence, and dependence comes in various kinds. The apparent dependence of the content of beliefs on factors external to the subject can be formulated as a failure of supervenience of belief content upon facts about what is the case within the boundaries of the subject’s body. In epistemology, normative properties such as justification and reasonableness are often held to supervene on natural properties in a similar way. The interest of supervenience is that it promises a way of tying normative properties closely to natural ones without exactly reducing them to natural ones: it can be the basis of a sort of weak naturalism. This was the motivation behind Davidson’s (1917-2003) attempt to say that mental properties supervene on physical ones ~ an attempt which ran into severe difficulties. To claim that such supervenience fails is to make a modal claim: that there can be two persons the same in respect of their internal physical states (and so in respect of those of their dispositions that are independent of content-involving states), who nevertheless differ in respect of which beliefs they have. Putnam’s (1926- ) celebrated example of a community on Twin Earth, where the water-like substance in lakes and rain is not H2O but some different chemical compound XYZ ~ also called ‘water’ there ~ illustrates such a failure of supervenience. A molecule-for-molecule replica of you on Twin Earth has beliefs to the effect that ‘water’ is thus-and-so. Yet those on Twin Earth who lack any chemical beliefs may well not have any beliefs to the effect that water is thus-and-so, even if they are replicas of persons on Earth who do have such beliefs. Burge emphasized that this phenomenon extends far beyond beliefs about natural kinds.

In the case of content-involving perceptual states, it is a much more delicate matter to argue for the failure of supervenience. The fundamental reason for this is that attributions of perceptual content are answerable not only to factors on the input side ~ what in certain fundamental cases causes the subject to be in the perceptual state ~ but also to factors on the output side ~ what the perceptual state is capable of helping to explain amongst the subject’s actions. If differences in perceptual content always involve differences in bodily described actions in suitable counterfactual circumstances, and if these different actions always have distinct neural bases, then perhaps there will after all be supervenience of content-involving perceptual states on internal states.

This connects with another strand of thought: any thinker who has an idea of an objective spatial world ~ an idea of a world of objects and phenomena which can be perceived but which are not dependent upon being perceived for their existence ~ must be able to think of his perception of the world as being simultaneously due to his position in the world and to the condition of the world at that position. The very idea of a perceivable, objective spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. It is also highly relevant to a subject’s psychological self-awareness that he have a conception of himself as a perceiver of the environment.



However, one idea that has in recent times been thought by many philosophers and psychologists alike to offer promise in this connection is the idea that perception can be thought of as a species of information-processing, in which the stimulation of the sense-organs constitutes an input to subsequent processing, presumably of a computational form. The psychologist J.J. Gibson suggested that the senses should be construed as systems the function of which is to derive information from the stimulus-array, to ‘hunt for’ such information. He thought that it was enough for a satisfactory psychological theory of perception that his account be restricted to the details of such information pick-up, without reference to other ‘inner’ processes such as concept-use. Although Gibson has been very influential in turning psychology away from the previously dominant sensation-based framework of ideas (of which gestalt psychology was really a special case), his claim that reliance on such a notion of information is enough has seemed incredible to many. Moreover, his notion of information is an ordinary one, ordinary enough to warrant the accusation that it presupposes the very ideas of, for example, concept-possession and belief that the account claims to exclude. The notion of information espoused by Gibson (though it has to be said that this claim has been disputed) is that of ‘information about’, not the technical one involved in information theory or that presupposed by the theory of computation.

There are nevertheless important links between these diverse uses. When I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure; I never catch myself at any time without a perception and can never observe anything but the perception. The idea, however, is that specifying the content of a perceptual experience involves saying what ways of filling out a space around the origin with surfaces, solids, textures, light and so forth are consistent with the correctness or veridicality of the experience. Such contents are not built from propositions, concepts, senses or continuants of material objects.

Where the term ‘content’ was once associated with the phrase ‘content of consciousness’, to pick out the subjective aspects of mental states, its use in the phrase ‘perceptual content’ is intended to pick out something more closely akin to what was once called ‘form’: the objective and publicly expressible aspects of mental states. The content of perceptual experience is how the world is represented to be. Perceptual experiences are then counted as illusory or veridical depending on whether the content is correct and the world is as represented. In so far as such a theory of perception can be taken to be answering the more traditional problems of perception, we may ask: what relation is there between the content of a perceptual state and conscious experience? One proponent of an intentional approach to perception notoriously claims that perception is ‘nothing but the acquiring of true or false beliefs concerning the current state of the organism’s body or environment’, but the complaint remains that we cannot give an adequate account of conscious perception, given the ‘nothing but’ element of this account. However, an intentional theory of perception need not be allied with any general theory of ‘consciousness’, one which explains what the difference is between conscious and unconscious states. If it is to provide an alternative to a sense-data theory, the theory need only claim that, where experience is conscious, its content is constitutive, at least in part, of the phenomenological character of that experience. This claim is consistent with a wide variety of theories of consciousness, even the view that no account can be given.

An intentional theory is also consistent with either affirming or denying the presence of subjective features in experience. Among traditional sense-data theorists of experience, H.H. Price attributed, in addition, an intentional content to perceptual consciousness. One may thereby attribute subjective properties to experience ~ sometimes labelled sensational properties, or qualia ~ as well as intentional content. One might call a theory of perception that insisted that all features of what an experience is like are determined by its intentional content a purely intentional theory of perception.

Mental events, states or processes with content include seeing that the door is shut, believing you are being followed and calculating the square root of 2. What centrally distinguishes states, events or processes ~ henceforth, simply states ~ with content is that they involve reference to objects, properties or relations. There are specific conditions for a state with content to refer to certain things, and the state has correctness or fulfilment conditions: whether it is correct or fulfilled depends on whether its referents have the properties the content specifies for them.

This highly generic characterization of content permits many subdivisions. It does not in itself restrict contents to conceptualized contents, and it permits contents built from Fregean senses as well as Russellian contents built from objects and properties. It leaves open the possibility that unconscious states, as well as conscious states, have contents. It equally allows the states identified by an empirical computational psychology to have content. A correct philosophical understanding of this general notion of content is fundamental not only to the philosophy of mind and psychology, but also to the theory of knowledge and to metaphysics.

Perceptions make it rational for a person to form corresponding beliefs and make it rational to draw certain inferences. Beliefs and desires make rational the formation of particular intentions and the performance of the appropriate actions. People are frequently irrational, of course, but a governing ideal of this approach is that for any family of contents there is some minimal core of rational transitions to or from states involving them, a core that a person must respect if his states are to be attributed with those contents at all. To be rational, a set of beliefs, desires, perceptions, decisions and actions must fit together in various ways. If they do not, in the extreme case they fail to constitute a mind at all ~ no rationality, no agent. This core notion of rationality in the philosophy of mind thus concerns a cluster of conditions on personal identity, that is, holistic coherence requirements upon the system of elements comprising a person’s mind. In the philosophy of mind, as in the philosophy of language, functionalism about content and meaning appears to lead to holism. In general, transitions between mental states, and between mental states and behaviour, depend on the contents of the mental states themselves. Consider my inference from there being sharks in the water to the conclusion that people shouldn’t be swimming. Suppose I first think that sharks are dangerous, but then change my mind, coming to think that sharks are not dangerous. On a thoroughly holistic view, the content that the first belief affirms can’t be the same as the content that the second belief denies, because the transition relations have changed ~ I no longer infer, from sharks being in the water, anything about what people should do ~ and so it seems I cannot simply have changed my mind. The functionalist reply is to say that some transitions are relevant to content individuation, whereas others are not. Appeal to a traditional analytic/synthetic distinction clearly won’t do, for on such a view ‘dog’ and ‘cat’ could end up with the same content. It could not be analytic that dogs bark or that cats meow, since we can imagine a non-barking breed of dog and a non-meowing breed of cat. If ‘Dogs are animals’ is analytic, so is ‘Cats are animals’; if ‘Dogs are not cats’ is analytic, so too is ‘Cats are not dogs’. A functionalist account will therefore not find traditional analytic inferences that distinguish the meaning of ‘dog’ from the meaning of ‘cat’. Other functionalists accept holism for ‘narrow content’, attempting to accommodate intuitions about the stability of content by appealing to wide content.

Within discussions of inference it is not unusual to find it said that an inference is a (perhaps very complex) act of thought by virtue of which act (1) I pass from a set of one or more propositions or statements to a further proposition or statement, and (2) it appears to me that the latter must be true if the former is or are. This psychological characterization has occurred widely in the literature under more or less inessential variations.

It is natural to desire a better characterization of inference, but attempts to do so by constructing a fuller psychological explanation fail to capture the grounds on which inference is objectively valid ~ a point elaborated by Gottlob Frege. And attempts to understand the nature of inference better through the device of representing inference by formal-logical calculi (1) leave unclear the relation of such formal derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations themselves inferences? And aren’t informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein. Coming up with a good and adequate characterization of inference ~ and even working out what would count as a good and adequate characterization ~ is a hard and by no means nearly solved philosophical problem.

Still, ascribing states with content to an actual person has to proceed simultaneously with the attribution of a wide range of non-rational states and capacities. In general, we cannot understand a person’s reasons for acting as he does without knowing the array of emotions and sensations to which he is subject: what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Even the content-involving perceptual states, which play a fundamental role in individuating content, cannot be understood purely in terms relating to minimal rationality. A perception of the world as being a certain way is not (and could not be) under a subject’s rational control. Though it is true and important that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons ~ observational beliefs about the environment ~ have contents which can only be elucidated by referring back to perceptual experience. In this respect (as in others), perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: for frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

What is the significance for theories of content of the fact that it is almost certainly adaptive for members of a species to have a system of states with representational content which are capable of influencing their actions? According to teleological theories of content, a constitutive account of content ~ one which says what it is for a state to have a given content ~ must make use of the notions of natural function and teleology. The intuitive idea is that for a belief state to have a given content ‘p’ is for the belief-forming mechanism which produced it to have the function (perhaps derivatively) of producing that state only when it is the case that ‘p’. But even if content itself proves to resist elucidation in terms of natural function and selection, it is still a very attractive view that selection must be mentioned in an account of what associates something ~ such as a sentence ~ with a particular content, even though that content itself may be individuated by other means.

Contents are normally specified by ‘that . . .’ clauses, and it is natural to suppose that a content has the same kind of sequential and hierarchical structure as the sentence that specifies it. This supposition would be widely accepted for conceptual content. It is, however, a substantive thesis that all content is conceptual. One way of treating one sort of perceptual content is to regard the content as determined by a spatial type, the type under which the region of space around the perceiver must fall if the experience with that content is to represent the environment correctly. The type involves a specification of surfaces and features in the environment, and their distances and directions from the perceiver’s body as origin. Supporters of the view that all content is conceptual will say that the legitimacy of using these spatial types in giving the content of experience does not undermine that thesis: the spatial type is just a way of capturing what can equally be captured by conceptual components such as ‘that distance’ or ‘that direction’, where these demonstratives are made available by the perception in question.
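
The idea of a spatial type can be given a concrete, if crude, rendering. The sketch below (Python; the particular fields and values are illustrative assumptions of mine, not anything drawn from the text) represents such a type as a list of surface specifications, each giving a direction and distance from the perceiver’s body as origin; an experience with this content is veridical only if the region of space around the perceiver falls under the type:

    # A rough illustration of one sort of non-conceptual perceptual content:
    # a 'spatial type' specifying surfaces by direction and distance from the origin.

    from dataclasses import dataclass

    @dataclass
    class SurfaceSpec:
        direction_degrees: float   # direction from the perceiver's body (the origin)
        distance_metres: float     # distance from the origin
        description: str           # kind of surface or feature at that location

    # One way of filling out the space around the origin; the experience with this
    # content represents the environment correctly only if the surrounding region
    # actually falls under this type.
    spatial_type = [
        SurfaceSpec(direction_degrees=0.0,  distance_metres=2.0, description="upright flat surface"),
        SurfaceSpec(direction_degrees=90.0, distance_metres=0.5, description="horizontal edge"),
    ]

Nothing in this rendering settles the dispute: a conceptualist can read each entry as shorthand for demonstrative components such as ‘that distance’ and ‘that direction’.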

Representative realism holds that (1) there is a world whose existence and nature are independent of us, (2) perceiving an object located in that external world necessarily involves causally interacting with that object, and (3) the information acquired in perceiving an object is indirect: it is information most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself. Traditionally, representative realism has been allied with an act/object analysis of sensory experience. In terms of representative realism, objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’). Meinongians, however, may simply treat objects of perception as existing objects of experience.

Armstrong (1926- ) not only sought to explain perception without recourse to sense-data or subjective qualities but also sought to equate the intentionality of perception with that of belief. There are two aspects to this: the first is to suggest that the only attitude toward a content involved in perception is that of believing, and the second is to claim that the only content involved in perceiving is that which a belief may have. The former suggestion faces an immediate problem, recognized by Armstrong: the possibility of having a perceptual experience without acquiring the corresponding belief. One such case is where the subject already possesses the requisite belief, so that the experience sustains, rather than leads to the acquisition of, the belief. The more problematic case is that of disbelief in perception, where a subject has a perceptual experience but refrains from acquiring the corresponding belief. For example, someone familiar with the Müller-Lyer illusion, in which lines of equal length appear unequal, is unlikely to acquire the belief that the lines are unequal on encountering a recognizable example of the illusion. Despite that, the lines may still appear unequal to them.

Armstrong seeks to encompass such cases by talk of dispositions to acquire beliefs and talk of potentially acquiring beliefs. On his account this is all we need say about the psychological state enjoyed. However, once we admit that the disbelieving perceiver still enjoys a conscious occurrent experience, characterizing it in terms of a disposition to acquire a belief seems inadequate. There are two further worries. One may object that the content of perceptual experiences may play a role in explaining why a subject disbelieves in the first place: someone may fail to acquire a perceptual belief precisely because how things appear to her is inconsistent with her prior beliefs about the world. Secondly, some philosophers have claimed that there can be perception without any corresponding belief. Cases of disbelief in perception are still examples of perceptual experience that impinge on belief: where a sophisticated perceiver does not acquire the belief that the Müller-Lyer lines are unequal, she will still acquire a belief about how things look to her. Dretske (1969) argues for a notion of non-epistemic seeing on which it is possible for a subject to be perceiving something while lacking any belief about it because she has failed to notice what is apparent to her. If such non-epistemic seeing nevertheless involves conscious experience, it would seem to provide another reason to reject Armstrong’s view and admit that if perceptual experiences are intentional states then they are of a distinct attitude-type from that of belief. However, even if one rejects Armstrong’s equation of perceiving with acquiring beliefs or dispositions to believe, one may still accept that he is right about the functional links between experience and belief, and the authority that experience has over belief, an authority which can nevertheless be overcome.

It is probably true that philosophers have shown much less interest in the subject of the imagination during the last fifteen years or so than in the period just before that. It is certainly true that more books about the imagination have been written by those concerned with literature and the arts than by philosophers in general, and by those concerned with the philosophy of mind in particular. This is understandable in that the imagination and imaginativeness figure prominently in artistic processes, especially in romantic art. Still, those two high priests of romanticism, Wordsworth and Coleridge, made large claims for the role played by the imagination in views of reality, although Coleridge’s thinking on this was influenced by his reading of the German philosophers of the late eighteenth and early nineteenth centuries, particularly Kant and Schelling. Coleridge distinguished between primary and secondary imagination, both of them in some sense productive, as opposed to merely reproductive. Primary imagination is involved in all perception of the world, in accordance with a theory which Coleridge derived from Kant, while secondary imagination, the poetic imagination, is creative from the materials that perception provides. It is this poetic imagination which exemplifies imaginativeness in the most obvious way.

Being imaginative is a function of thought, but to use one’s imagination in this way is not just a matter of thinking in novel ways. Someone who, like Einstein for example, presents a new way of thinking about the world need not by reason of this be supremely imaginative (though of course he may be). The use of new concepts, or a new way of using already existing concepts, is not in itself an exemplification of the imagination. What seems crucial to the imagination is that it involves a series of perspectives, new ways of seeing things, in a sense of ‘seeing’ that need not be literal. It thus involves, whether directly or indirectly, some connection with perception, though in different ways, and this makes it necessary to be clear about the similarities and differences between seeing proper and seeing with the mind’s eye, as it is sometimes put. This will involve some consideration of the nature and role of images, though there is no general agreement among philosophers about how questions concerning imagery are to be settled.

Connections between the imagination and perception are evident in the ways that many classical philosophers have dealt with the imagination. One of the earliest examples of this, the treatment of ‘phantasia’ (usually translated as ‘imagination’) in Aristotle’s ‘De Anima’ III.3, seems to regard the imagination as a sort of half-way house between perception and thought, but in a way which makes it cover appearances in general, so that the chapter in question has as much to do with perceptual appearances, including illusions, as it has to do with, say, imagery. Yet Aristotle also emphasizes that imagining is in some sense voluntary, and that when we imagine a terrifying scene we are not necessarily terrified, any more than we need be when we see terrible things in a picture. How that fits with the idea that an illusion is, or can be, a function of the imagination is less than clear. Yet some subsequent philosophers, Kant in particular, followed in recent times by P.F. Strawson, have maintained that all perception involves the imagination, in some sense of that term, in that some bridge is required between abstract thoughts and their perceptual instances. This comes out in Kant’s treatment of what he calls the ‘schematism’, where he rightly argues that someone might have an abstract understanding of the concept of a dog without being able to recognize or identify any dogs. It is also clear that someone might be able to classify all dogs together without any understanding of what a dog is. The bridge that needs to be provided to link these two abilities Kant attributes to the imagination.

In so arguing Kant goes, as he so often does, beyond Hume, who thought of the imagination in two connected ways. First, there is the fact that there exist, Hume thinks, ideas which are either copies of impressions provided by the senses or derived from these. Ideas of imagination are distinguished from those of memory, and both of these from impressions of sense, by their lesser vivacity. Second, the imagination is involved in the processes, mainly the association of ideas, which take one from one idea to another, and which Hume uses to explain, for example, our tendency to think of objects as continuing to exist even when we have no impressions of them. This association of ideas, or images, is the mental process which takes one from one idea to another and thereby explains our tendency to believe things that go beyond what the senses immediately justify. The role which Kant gives to the imagination in relation to perception in general is obviously a wider and more fundamental role than that which Hume allows. Indeed, one might take Kant to be saying that were there not the role that he, Kant, insists on, there would be no place for the role which Hume gives it. Kant also allows for a free use of the imagination in connection with the arts and the perception of beauty, and this is a more specific role than that involved in perception overall.

In ordinary vision we normally see things as such-and-such, and how this is to be construed depends on how it relates to a number of other aspects of the mind’s functioning ~ sensation, concepts and other things involved in our understanding of things, belief and judgement, the imagination, the way our action is related to the world around us, and the causal processes involved in the physics, biology and psychology of perception. Some of the last were central to the considerations that Aristotle raised about perception in his ‘De Anima’.

Nevertheless, there are also special, imaginative ways of seeing things, which Wittgenstein (1889-1951) emphasized in his treatment of ‘seeing-as’ in his ‘Philosophical Investigations’ II.xi. Seeing a triangle drawn on a piece of paper as standing up, lying down, hanging from its apex and so on is a form of ‘seeing-as’ which is both more special and more sophisticated than simply seeing it as a triangle. Both involve the application of concepts to the objects of perception, but the way in which this is done in the two cases is quite different. One might say that in the more sophisticated case one has to adopt a certain perspective, a certain point of view, and if that is right it links up with what was said earlier about the relation and difference between thinking imaginatively and thinking in novel ways.

Wittgenstein (1953) used the phrase ‘an echo of a thought in sight’ in relation to these special ways of seeing things, which he called ‘seeing aspects’. Roger Scruton has spoken of the part played in it all by ‘unasserted thought’, but the phrase used by Wittgenstein brings out more clearly one connection between thought and a form of sense-perception. Wittgenstein (1953) also compares the concept of an aspect and that of seeing-as with the concept of an image, and this brings out a point about the imagination that has not been much evident in what has been said so far ~ that imagining something is typically a matter of picturing it in the mind, and that this involves images in some way. However, the picture view of images has come under heavy philosophical attack. First, there have been challenges to the sense of the view: mental images are not seen with real eyes, they cannot be hung on real walls, and they have no objective weight or colour. What, then, can it mean to say that images are pictorial? Secondly, there have been arguments that purport to show that the view is false. Perhaps the best known of these is founded on the charge that the picture theory cannot satisfactorily explain the indeterminacy of many mental images. Finally, there have been attacks on the evidential underpinning of the theory. Historically, the philosophical claim that images are picture-like rested primarily on an appeal to introspection, and today introspection is widely thought to reveal less about the mind than was traditionally supposed. This attitude toward introspection has manifested itself, in the case of imagery, in the view that what introspection really shows about visual images is not that they are pictorial but only that what goes on in imagery is experientially much like what goes on in seeing. This aspect is crucial for the philosophy of mind, since it raises the question of the status of images, and in particular whether they constitute private objects or states of some kind. Sartre (1905-80), in his early work on the imagination, emphasized, following Husserl (1859-1938), that images are forms of consciousness of an object, but in such a way that they ‘present’ the object as not being: wherefore, he said, the image ‘posits its object as nothingness’. Such a characterization brings out something about the role of the form of consciousness of which the having of imagery may be a part: in picturing something, the images are not themselves the objects of consciousness. The account does less, however, to bring out clearly just what images are or how they function.

As part of an attempt to grapple with the relation between picturing and seeing with the mind’s eye, Ryle (1900-76) argued that in picturing, say, Lake Ontario, in having it before the mind’s eye, we are not confronted with a mental picture of Lake Ontario: images are not seen. We nevertheless can ‘see’ Lake Ontario, and the question is what this ‘seeing’ is, if it is not seeing in any direct sense. One of the things that may make this question difficult to answer is the fact that people’s images and their capacity for imagery vary, and this variation is not directly related to their capacity for imaginativeness. While an image may function in some way as a ‘presentation’ in a train of imaginative thought, such thought does not always depend on that: images may occur in thought which are not really representational at all, which are not, strictly speaking, ‘of’ anything. If the images are representational, can one discover things from one’s images that one would not otherwise know? Many people would answer ‘no’, especially if their images are generally fragmentary, but it is not clear that this is true for everyone. What is more, and this affects the second point, fragmentary imagery which is at best ancillary to the process of thought in which it occurs may not be in any obvious sense representational, even if the thought itself is ‘of’ something.

Another problem with the question of what it is to ‘see’ Lake Ontario with the mind’s eye is that the ‘seeing’ in question may or may not be a direct function of memory. For one who has seen Lake Ontario, imaging it may be simply a matter of reproducing in some form the original vision, and the vision may be reproduced unintentionally and without any recollection of what it is a ‘vision’ of. For one who has never been there, the task of imagining it depends most obviously on knowledge of what sort of thing Lake Ontario is, and perhaps on experiences which are relevant to that knowledge. It would be surprising, to say the least, if imaginative power could produce a ‘seeing’ that was not constructed from any previous seeing. But that the ‘seeing’ is not itself a seeing in the straightforward sense is clear, and on this negative point what Ryle says, and others have said, seems clearly right. As to what ‘seeing’ is in a positive way, Ryle answers that it involves fancying something and that this can be assimilated to pretending. Fancying that one is seeing Lake Ontario is thus at least like pretending that one is doing that thing. But is it?

Along the same lines, there is in fact a great difference between, say, imagining that one is a tree and pretending to be a tree. Pretending normally involves doing something, and even when there is no explicit action on the part of the pretender, as when he or she pretends that something or other is the case, there is at all events an implication of possible action. Pretending to be a tree may involve little more than standing stock-still with one’s arms spread out like branches. To imagine being a tree (something that some people deny is possible, which is to my mind a failure of imagination) need imply no action whatever. (Imagining being a tree is different in this respect from imagining that one is a tree, where this means believing, falsely, that one is a tree; one can imagine being a tree without this committing one to any beliefs on that score.) Yet imagining being a tree does seem to involve adopting the hypothetical perspective of a tree, contemplating, perhaps, what it is like to be a fixture in the ground with roots growing downward and with branches (somewhat like arms) blown by the wind and with birds perching on them.

Imagining something seems in general to involve a change of identity on the part of something or other, and in imagining being something else, such as a tree, the partial change of identity contemplated is in oneself. The fact that the change of identity contemplated cannot be complete does not gainsay the point that it is a change of identity which is being contemplated. One might raise the question whether something about the ‘self’ is involved in all imaginings. Berkeley (1685-1753) even suggests that imagining a solitary unperceived tree involves a contradiction, in that to imagine it is to imagine oneself perceiving it. In fact, there is a difference between imagining an object, solitary or not, and imagining oneself seeing that object. The latter certainly involves putting oneself imaginatively in the situation pictured; the former involves contemplating the object from a point of view ~ the point of view which one would oneself have if one were viewing it. It is this reference to a point of view, already remarked upon, that clearly distinguishes picturing something from merely thinking of it.

This does not rule out the possibility that an image might come into one’s mind which one recognizes as some kind of depiction of a scene. But when actually picturing a scene, it would not be right to say that one imagines the scene by way of a contemplation of an image which plays the part of a picture of it. Moreover, it is possible to imagine a scene without any images occurring whose natural interpretation would be that they are pictures of that scene. It is possible for one imagining, say, the GTA to report on request the occurrence of images which are not in any sense pictures of the GTA ~ not of that particular city and perhaps not even of a city at all. That would not entail that he or she was not imagining the GTA: the images might merely be associated with the GTA, or thought by others to be of the GTA, without themselves being pictures of it.

This raises a question which is asked by Wittgenstein (1953): ‘What makes my image of him into an image of him?’ To which Wittgenstein replies: ‘Not its looking like him’, and he further suggests that a person’s account of what his imagery represents is decisive. Certainly it is so when the process of imagination which involves the imagery is one that the person engages in intentionally. The same is not true, as Wittgenstein implicitly acknowledges in the same context, if the imagery simply comes to mind without there being any intention; in that case one might not even know what the image is an image of.

Nevertheless, all this complicates the question of what the status of mental images is. It might seem that they stand in relation to imagining as ‘sensations’ stand to perception, except that the occurrence of sensations is passive, while the occurrence of an image can be intentional, and in the context of an active flight of imagination is likely to be so. Sensations give perceptions a certain phenomenal character, providing their sensuous, as opposed to conceptual, content. Intentional action stands in interesting symmetries and asymmetries to perception. Like perceptual experience, the experiential component of intentional action is causally self-referential: if, for example, I now walk to my car, then the condition of satisfaction of the present experience is that there are certain bodily movements, and that this very experience of acting causes those bodily movements. Further, like perceptual experience, the experience of acting is typically a conscious mental event. As noted earlier, perception is concept-dependent, and an intentional theory of perception counts perceptual experiences as illusory or veridical depending on whether their content is correct and the world is as represented. Even in ruminative contemplation, where the use of concepts looms large and has, perhaps, the overriding role, it still seems necessary for our thought to be given a focus in thought-occurrences such as images. These have sometimes been characterized as symbols which are the material of thought, but the reference to symbols is not really illuminating. Nonetheless, while a period of thought in which nothing of this kind occurs is possible, the general direction of thought seems to depend on such things occurring from time to time. (On a broadly Humean view, the necessary connections we take ourselves to be cognizant of amount to a feeling, or an ‘impression’: an observed correlation between things of two kinds produces in everyone a propensity to expect a thing of the second sort given an experience of a thing of the first sort. That is to say, there is no necessity in the relations between things that happen in the world, but, given our experience and the way our minds naturally work, we cannot help thinking that there is.)
In the case of the imagination, images seem even more crucial, in that without them it would be difficult, to say the least, for the point of view or perspective which is important for the imagination to be given a focus.

It would be difficult, rather than impossible, for this to be so, since it is clear that entertaining a description of a scene, without there being anything like a vision of it, could sometimes give that perspective. The question still arises whether a description could always do quite what an image can do in this respect. The point is connected with an issue over which there has been some argument among psychologists, such as S.M. Kosslyn and Z.W. Pylyshyn, concerning what are termed ‘analogue’ versus ‘propositional’ theories of representation. This is an argument concerning whether the process of imagery is what Pylyshyn (1986) calls ‘cognitively penetrable’, i.e., such that its function is affected by beliefs or other intellectual processes expressible in propositions, or whether it can be independent of cognitive processes although capable itself of affecting the mental life because of the pictorial nature of images (the ‘analogue medium’). One example which has figured in that argument is that in which people are asked whether two asymmetrically presented figures can be made to coincide, the decision on which may entail some kind of mental rotation of one or more of the figures. Those defending the ‘analogue’ theory point to the fact that there is some relation between the time taken and the degree of rotation required; this suggests that some process involving the transformation of images is taking place. For someone who has little or no imagery this suggestion may seem unintelligible. Might it be enough for one to go through an intellectual working out of the possibilities, based on features of the figures that are judged relevant? This could not be said to be unimaginative as long as the intellectual process involved reference to perspectives or points of view in relation to the figures, the possibility of which the thinker might be able to appreciate. Such an account of the process of imagination cannot be ruled out, although there are conceivable situations in which the ‘analogue’ process of using images might be easier. Or, at least, it might be easier for those whose imagery is most like the actual perception of a scene: for others it might be more difficult.

The extreme of the former position is probably provided by those who have so-called ‘eidetic’ imagery, where having an image of a scene is just like seeing it, and where, if it is a function of memory as it most likely is, it is clearly possible to find out details of the scene imagined by introspection of the image. The opposite extreme is typified by those for whom imagery, to the extent that it occurs at all, is at best ancillary to propositionally styled thought. But, to repeat the point made earlier, thought will not count as imagination unless it provides a series of perspectives on its object. Because images are, or can be, perceptual analogues and have a phenomenal character analogous to what sensations provide in perception, they are most obviously suited, in the workings of the mind, to the provision of those perspectives. But in a wider sense, imagination enters the picture whenever some link between thought and perception is required, as well as making possible imaginative forms of seeing-as. It may thus justifiably be regarded as a bridge between perception and thought.

To believe something is to have a firm conviction in its reality, to view it as plausible, worthy of acceptance and not open to serious doubt. An analogous relationship may explain, at least in part, the parallels that obtain between the objects or contents of speech acts and the objects or contents of belief. Furthermore, the object of believing, like the object of saying, can have semantic properties, for example:

What Jones believes is true.

And:

What Jones believes entails what Smith believes.

One plausible hypothesis, then, is that the object of belief is the same sort of entity as what is uttered in speech acts (or what is written down).

The second theory also seems supported by an argument concerning the systematicity of thought: our ability to think certain thoughts appears intrinsically connected with the ability to think certain others. For example, the ability to think that John hits Mary goes hand in hand with the ability to think that Mary hits John, but not with the ability to think that Toronto is overcrowded. Why is this so? The ability to produce or understand certain sentences is intrinsically connected with the ability to produce or understand certain others. For example, there are no native speakers of English who know how to say ‘John hits Mary’ but who do not know how to say ‘Mary hits John’. Similarly, there are no native speakers who understand the former sentence but not the latter. These facts are easily explained if sentences have a syntactic and semantic structure, but if sentences are taken to be atomic, these facts are a complete mystery. What is true for sentences is true also for thoughts. Thinking thoughts involves manipulating mental representations. If mental representations with a propositional content have a semantic and syntactic structure like that of sentences, it is no accident that one who is able to think that John hits Mary is thereby able to think that Mary hits John. Furthermore, it is no accident that one who can think these thoughts need not thereby be able to think thoughts having different components ~ for example, the thought that Toronto is overcrowded. And what goes here for thought goes for belief and the other propositional attitudes.
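
The point about structure can be made vivid with a small sketch (Python; the representation and names are merely illustrative, not a claim about how mental representations are actually realized). If a thought is a structured object with constituents playing roles, then the capacity to token ‘John hits Mary’ brings with it, by mere recombination, the capacity to token ‘Mary hits John’, while ‘Toronto is overcrowded’ requires constituents not already in hand:

    # A toy illustration of systematicity: structured representations can be recombined.

    from dataclasses import dataclass

    @dataclass
    class Thought:
        agent: str
        relation: str
        patient: str

    def swap_roles(t: Thought) -> Thought:
        """Recombine the same constituents with agent and patient exchanged."""
        return Thought(agent=t.patient, relation=t.relation, patient=t.agent)

    john_hits_mary = Thought("John", "hits", "Mary")
    mary_hits_john = swap_roles(john_hits_mary)   # no new constituents are needed

    # By contrast, a thought such as 'Toronto is overcrowded' involves constituents
    # not present above, so the ability to think it does not come for free.

If thoughts were atomic ~ unstructured labels with no constituents ~ nothing in the sketch would go through, and the systematic connections would be left a mystery.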

If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function ~ a function they acquired during learning ~ then instances of such types would have a content that (like a belief) could be either true or false. After learning, tokens of these structure types, when caused by some sensory stimulation, would ‘say’ (i.e., mean) what it was their function to ‘tell’ (inform about). They would therefore qualify as beliefs ~ at least of the simple observational sort.

Any information-carrying structure carries all kinds of information. If, for example, it carries the information ‘A’, it must also carry the information that ‘A or B’. As I conceived of it, learning was supposed to be a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content ~ the meaning ~ of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their activities and states ~ pointer readings, flashing lights, and so on ~ representations of the relevant conditions, so learning converts neural states that carry information ~ ‘pointer readings’ in the head, so to speak ~ into structures that have the function of providing some vital piece of the information they carry. Such contents are also presumed to serve as the meanings of linguistic items, underwriting relations of translation, definition, synonymy, antonymy and semantic implication. Much work in the semantics of natural language takes itself to be addressing conceptual structure.
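
The point that an information-carrying structure carries every piece of information entailed by what it indicates, and that learning then selects one such piece as the structure’s content, can be modelled crudely as follows (Python; the ‘worlds’ and propositions are stipulated toy examples of mine, not anything in the original account):

    # A toy model: propositions as sets of possible worlds; P entails Q iff P is a subset of Q.
    # A state indicating P thereby carries every proposition P entails, e.g. 'A or B'.

    A = {"w1"}            # proposition A, true in world w1 only
    B = {"w2"}            # proposition B, true in world w2 only
    A_or_B = A | B        # the disjunction is true wherever either disjunct is

    def carries(indicated, proposition):
        """A state indicating `indicated` carries `proposition` iff the former entails the latter."""
        return indicated <= proposition

    state_indicates = A   # tokens of this structure are caused only when A obtains
    print(carries(state_indicates, A))        # True
    print(carries(state_indicates, A_or_B))   # True: carrying A brings 'A or B' with it

    # On the view sketched above, learning then selects ONE of the carried pieces of
    # information, say A itself, to serve as the semantic content of the structure.
    semantic_content = A

The model captures only the logical point; it says nothing about how the selection is actually effected in a nervous system.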

Concepts have also been thought to be the proper objects of ‘philosophical analysis’, which is what ‘analytic’ philosophers undertake when they ask about the nature of justice, knowledge or piety and expect to discover answers by means of introspective reflection. The expectation that one sort of thing could serve all these tasks went hand in hand with what has come to be called the ‘Classical View’ of concepts, according to which concepts are analysed by conditions that are individually necessary and jointly sufficient for their satisfaction, and which are known to any competent user of them. The standard example is the especially simple one [bachelor], which seems to be identical to [eligible unmarried male]. A more interesting, but problematic, one has been [knowledge], whose analysis was traditionally thought to be [justified true belief].
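
The Classical View lends itself naturally to a definitional rendering. The little sketch below (Python; purely a toy, and the choice of conditions simply follows the [eligible unmarried male] example) displays a concept as a conjunction of individually necessary and jointly sufficient conditions:

    # The Classical View in miniature: [bachelor] as [eligible unmarried male].

    def satisfies_bachelor(male: bool, married: bool, eligible: bool) -> bool:
        """Each conjunct is necessary; together they are sufficient."""
        return male and (not married) and eligible

    print(satisfies_bachelor(male=True, married=False, eligible=True))   # True
    print(satisfies_bachelor(male=True, married=True,  eligible=True))   # False: fails a necessary condition

The notorious difficulty, of course, is that few concepts of philosophical interest ~ [knowledge] above all ~ have yielded to any such tidy analysis.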

A notion that contrasts with that of a property (where relations are treated as a subclass of properties) is that of a ‘concept’, but one must be very careful here, since ‘concept’ has been used by philosophers and psychologists to serve many different purposes. On one use, a concept is a way of conceiving of some aspect of the world. As such, concepts have a kind of subjectivity: different individuals might, for example, have different concepts of birds, one thinking of them primarily as flying creatures and the other as feathered. Concepts in this sense are often described as a species of ‘mental representation’, and as such they stand in sharp contrast to the notion of a property, since a property is something existing in the world. However, it is possible to think of a concept as neither mental nor linguistic, and this would allow, though it does not dictate, that concepts and properties are the same kind of thing. Nonetheless, representational functions develop in some natural way, either (in the case of the senses) from selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have, in different ways, the power to represent: experiences and beliefs.

This does, however, leave a question about the role of the senses in this total cognitive enterprise. If it is learning that, by way of concepts, is the source of the representational powers of thought, whence come the representational powers of experience? Or should we even think of experience in representational terms? We can have false beliefs, but are there false experiences? On this account, then, experience and thought are both representational. The difference resides in the source of their representational powers: learning in the case of thought, evolution in the case of experience.

Perception, though, is always concept-dependent, at least in the sense that perceivers must be concept possessors and users, and almost certainly in the sense that perception entails concept-use in its application to objects. It is at least arguable that organisms that react in a biologically useful way to something, but to which the attribution of concepts is implausible, should not be said to perceive those objects, however much the objects figure causally in their behaviour. Moreover, perceptual consciousness presents the object in such a way that the experience has a certain phenomenal character, which derives from the sensations which the causal processes involved set up. This is most evident in the case of touch, which, being a ‘contact sense’, provides a more obvious occasion for speaking of sensations than causally ‘distant’ senses such as sight. Our tactual awareness of the texture of a surface is, to use a metaphor, ‘coloured’ by the nature of the sensations that the surface produces in our skin, and which we can be explicitly aware of if our attention is drawn to them (something that gives one indication of how attention too is involved in perception).

It has been argued that the phenomenal character of an experience is detachable from its conceptual content, in the sense that an experience of the same phenomenal character could occur even if the appropriate concepts were not available. Certainly the reverse is true ~ a concept-mediated awareness of an object could occur without any sensation-mediated experience ~ as in an awareness of something absent from us. It is also the case, however, that the look of something can be completely changed by the realization that it is to be seen as ‘x’ rather than ‘y’. To the extent that this is so, the phenomenal character of a perceptual experience should be viewed as the result of the way in which sensations produced in us by objects blend with our ways of thinking of and understanding those objects (which, it should be noted, are things in the world and should not be confused with the sensations which they produce).

In the study of other parts of the natural world, we agree to be satisfied with post-Newtonian ‘best theory’ arguments: there is no privileged category of evidence that provides criteria for theoretical constructions. In the study of humans above the neck, however, naturalistic theory is held not to suffice: we must seek ‘philosophical explanations’, requiring that theoretical posits be specified in terms of categories of evidence selected by the philosopher, relying on radically unformulated notions such as ‘access in principle’ that have no place in naturalistic inquiry.

However one evaluates these ideas, they clearly involve demands beyond naturalism, hence a form of methodological/epistemological dualism. In the absence of further justification, it seems to me fair to conclude that the inability to provide ‘philosophical explanation’, or a concept of ‘rule-following’ that relies on access to consciousness (perhaps ‘in principle’), is a merit of a naturalistic approach, not a defect.

A standard paradigm in the study of language, given its classic form by Frege, holds that there is a ‘store of thoughts’ that are a common human possession and a common public language in which these thoughts are expressed. Furthermore, this language is based on a fundamental relation between words and things ~ reference or denotation ~ along with some mode of fixing reference (sense, meaning). The notion of a common public language has never been explained, and seems untenable. It is also far from clear why one should assume the existence of a common store of thoughts: the very existence of thoughts had been plausibly questioned, as a misreading of surface grammar, a century earlier.

Only those who share a common world can communicate, and only those who communicate can have the concept of an inter-subjective, objective world. A number of things follow. If only those who communicate have the concept of an objective world, then only those who communicate can doubt whether an external world exists. Yet it is impossible seriously (consistently) to doubt the existence of other people with thoughts, or the existence of an external world, since to communicate is to recognize the existence of other people in a common world. Language, that is, communication with others, is thus essential to propositional thought. This is not because it is necessary to have the words to express a thought (for it is not); it is because the ground of the sense of objectivity is inter-subjectivity, and without the sense of objectivity, of the distinction between true and false, between what is thought to be and what is the case, there can be nothing rightly called ‘thought’.

Since words are also about things, it is natural to ask how their intentionality is connected with that of thoughts. Two views have been advocated: one view takes thought content to be self-subsistent relative to linguistic content, with the latter dependent upon the former. The other view takes thought content to be derivative upon linguistic content, so that there can be no thought without a bedrock of language. Appeals to language at this point are apt to founder on circularity, since words take on the powers of concepts only insofar as they are used to express them. Thus, there seems little philosophical illumination to be got from making thought depend upon language. Nonetheless, it is not entirely clear what it amounts to, to assert or deny that there is an inner language of thought. If it means merely that concepts (thought-constituents) are structured in such a way as to be isomorphic with spoken language, then the claim is trivially true, given some natural assumptions. But if it means that concepts just are ‘syntactic’ items orchestrated into strings of the same, then the claim is acceptable only in so far as syntax is an adequate basis for meaning ~ which, on the face of it, it is not. Concepts no doubt have combinatorial powers comparable to those of words, but the question is whether anything else can plausibly be meant by the hypothesis of an inner language.

Yet, it appears undeniable that the spoken language does not have autonomous intentionality, but instead derives its meaning from the thoughts of speakers ~ though language may augment one’s conceptual capacities. So thought cannot post-date spoken language. The truth seems to be that in human psychology speech and thought are interdependent in many ways, but that there is no conceptual necessity about this. The only ‘language’ on which thought essentially depends is that of the structured system of concepts itself: Thought depends on or upon there being isolable concepts that can join with others to produce complete propositions. But this is merely to draw attention to a property of any system of concepts must have; it is not to say what concepts are or how they succeed in moving between thoughts as they are done.

Finally, there is the old question of whether, or to what extent, a creature who does not understand a natural language can have thoughts. Now it seems pretty compelling that higher mammals and humans raised without language have their behaviour controlled by mental states that are sufficiently like our beliefs, desires and intentions to share those labels. It also seems easy to imagine non-communicating creatures who have sophisticated mental lives (they build weapons, dams, bridges, have clever hunting devices, etc.). at the same time, ascriptions of particular contents to non-language-using creatures typically seem exercises in loose speaking (does the dog really believe that there is a bone in the yard?) And it is no accident that, as a matter of fact, creatures, however, do not understand a natural languages have at best primitive mental lives. There is no accepted explanation of these facts. It is possible that the primitive mental failure to master natural languages, but the better explanation may be Chomsky’s, that animals lack a special language faculty to our species, as, perhaps, the insecurity that is felt, may at best resemble the deeper of latencies that cradles his instinctual primitivities, that have contributively distributed the valuing qualities that amount in the result to an ‘approach-avoidance’ theory. As regards the wise normal human raised without language; this might simply be due to the ignorance and lack of intellectual stimulation such a person would be predetermined to. It also might be that higher thought requires a neural language with a structure comparable to that of a natural language, and that such neural languages are somehow acquired: As the child learns its native language. Finally, the ascription states of languageless creatures are a difficult topic that needs more attention. It is possible that as we learn more about the logic of our ascriptions of propositional content, we will realize that these ascriptions are egocentrically based on a similarity to the language in which we express our beliefs. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak something a lot like one of our natural languages, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.

The relation between language and thought is philosophy’s chicken-or-egg problem. Language and thought are evidently importantly related, but how exactly are they related? Does language come first and make thought possible, or is it vice versa? Or are they on a par, each making the other possible.

When the question is stated this generally, however, no unqualified answer is possible. In some respects thought is prior, and in other respects neither is prior. For example, it is arguable that a language is an abstract pairing of expressions and meaning, a function in the set-theoretic sense from expressions onto meaning. This makes sense of the fact that Esperanto is a language no one speaks, and it explains why it is that, while it is a contingent fact that ‘La neige est blanche’ means that snow is white among the French. It is a necessary truth that it means that in French. But if natural languages such as French and English are abstract objects in this sense, then they exist in possible worlds in which there are no thinkers in this respect, then, language as well as such notions as meaning and truth in a language, is prior to thought.

But even if languages are construed as abstract expression-meaning pairings, they are construed that way as abstractions from actual linguistic practice ~ from the use of language in communicative behaviour ~ and there remains a clear sense in which language is dependent on thought. The sequence of inscribes ‘Naples is south of Rome’ means among us that Naples is south of Rome. This is a contingent fact, but dependent on the way we use ‘Naples’. Rome and the other part of that sentence. Had our linguistic practices been different, ‘Naples is south of Rome’ means among us that Naples is south of Rome has something to do with the beliefs and intentions underlying our use of the words and structures that compose the sentence. More generally, it is a platitude that the semantic features that inscribe and sounds have in a population of speakers are, at least, partly determined by the ‘propositional attitudes’ those speakers have in using those inscriptions and sounds or in using the parts and structures that compose them. This is the same platitude, of course, which says that meaning depends at least partly on use: For the use in question is intentional use in communicative behaviour. So, here, is one clear sense in which language is dependent on thought: Thought is required to imbue inscriptions and sounds with the semantic features they have in populations of speakers.

The sense in which language does depend on thought can be wedded to the sense ion which language does not depend on thought in the ways that: We can say that a sequence of ascriptions or sounds (or, whatever) ‘σ’ means ‘q’ in a language ‘L’, construed as a function from expressions onto meaning, iff L(σ) = q. this notion of meaning-in-a-language, like the notion of a language, is a mere set-theoretic notion that is independent of thought in that it presupposes nothing about the propositional attitudes of language users: ‘σ’ can mean ‘q’ in ‘L’ even if ‘L’ has never been used? But then we can say that ‘σ’ also means ‘q’ in a population ‘P’ jus t in case members of ‘P’ use some language in which ‘σ’, means ‘q’: That is, just in case some such language is a language of ‘P’. The question of moment then becomes: What relation must a population ‘P’ bear to a language ‘L’ in order for it to be the case that ‘L’ is a language of ‘P’, a language members of ‘P’ actually speak? Whatever the answer to this question is, this much seems right: In order for a language to be a language of a population of speakers, those speakers in their produce sentences of the language in their communicative behaviour. Since such behaviour is intentional, we know that the notion of a language

‘s being the language of a population of speakers presupposes the notion of thought. And since that notion presupposes the notion of thought, we also know that the same is true of the correct account of the semantic features expressions have in populations of speakers.

This is a pretty thin result, not one likely to be disputed, and the difficult questions remain. We know that there is some relation ‘R’ such that a language ‘L’ is used by a population ‘P’ iff ‘L’ bears ‘R’ to ‘P’. Let us call this relation, whatever it turns out to be, the ‘actual-language reflation’. We know that to explain the actual-language relation is to explain the semantic features expressions have among those who are apt to produce those expressions. And we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual language relation to be explained in terms of the propositional attitude of language users? And what sort of dependence might those propositional attitudes in turn have those propositional attitudes in turn have on language or on the semantic features that are fixed by the actual-language relation? Let us, least of mention, begin once again, as in the relation of language to thought, before turning to the relation of thought to language.

All must agree that the actual-language relation, and with it the semantic features linguistic items have among speakers, is at least, partly determined by the propositional attitudes of language users. This still leaves plenty of room for philosophers to disagree both about the extent of the determination and the nature of the determining propositional attitude. At one end of the determination spectrum, we have those who hold that the actual-language relation is wholly definable in terms of non-semantic propositional attitudes. This position in logical space is most famously occupied by the programme, sometimes called ‘intention-based semantics’, of the late Paul Grice and others. The foundational notion in this enterprise is a certain notion of speaker meaning. It is the species of communicative behaviour reported when we say, for example, that in uttering ‘ll pleut’, Pierre meant that it was raining, or that in waving her hand, the Queen meant that you were to leave the room, intentional-based semantics seeks to define this notion of speaker meaning wholly in terms of communicators’ audience-directed intentions and without recourse to any semantic notion. Then it seeks to define the actual-language relation in terms of the now-defined notion of speaker meaning, together with certain ancillary notions such as that of a conventional regularity or practice, they defined wholly in terms of non-semantic propositional attitudes. The definition of the actual-language relation in terms of speaker meaning will require the prior definition in terms of speaker meaning of other agent-semantic notions, such as the notions of speaker reference and notions of illocutionary act, and this, too, is part of the intention-based semantics.

Some philosophers object to the intentional-based semantics because they think it precludes a dependence of thought on the communicative use of language. This is a mistake. Even if the intentional-based semantic definitions are given a strong reductionist reading, as saying that public-language semantic properties (i.e., those semantic properties that supervenes on use in communicative behaviour) it might still be that one could not have propositional attitudes unless one had mastery of a public-language. However, our generating causal explanatory y generalizations, and subject to no more than the epistemic indeterminacy of other such terms. The causal explanatory approach to reason-giving explanations also requires an account of the intentional content of our psychological states, which makes it possible for such content to be doing such work. By the early 1970's, and many physicalists looked for a way of characterizing the primary and priority of the physical that is free from reductionist implications. As we have in attestation, the key attraction of supervenience to physicalists has been its promise to deliver dependence without reduction. For example, of moral theory has seemed encouraging as Moore and Hare, who made much of the supervenience of the moral on the naturalistic, were at the same time, strong critics of ethical naturalism, the principal reductionist position in ethical theory. And there has been a broad consensus among ethical theorists that Moore and Hare were right, that the moral, or more broadly the normative, is supervening on the non-moral without being reducible to it. Whether or not this is plausible (that is a separate question), it would be no more logically puzzling than the idea that one could not have any propositional attitudes unless one had one’s with certain sorts of contents. there is no pressing reason to think that the semantic needs to be definable in terms of the psychological. Many intention-based semantic theorists have been motivated by a strong version of ‘physicalism’, which requires the reduction of all intentional properties (i.e., all semantic and propositional-attitude properties) too physical, or at least, topic-neutral or functional properties, for it is plausible that there could be no reduction of the semantic and the psychological to the physical without a prior reduction of the semantic to the psychological. But it is arguable that such a strong version of physicalism is not what is required in order to fit the intentional into the natural order.

So, the most reasonable view about the actual-language relation is that it requires language users to have certain propositional attitudes, but there is no prospect of defining the relation wholly in terms of non-semantic propositional attitudes. It is further plausible that any account of the actual-language relation must appeal to speech acts such as speaker meaning, where the correct account of these speech acts is irreducibly semantic (they will fail to supervene on the non-semantic propositional altitudes of speakers in the way that intentions fail to supervene on an agent’s beliefs and desires). If this is right, it would still leave a further issue about the ‘definability’ of the actual-language relation, and if so, will any irreducibly semantic notions enter into that definition other than the sorts of speech act notions already alluded to? These questions have not been much discussed in the literature as there is neither an established answer nor competing school of thought. Such that the things in philosophy that can be defined, and that speech act notions are the only irreducibly semantic notions the definition must appeal to.

Our attention is now to consider on or upon the dependence of thought on language, as this the claim that propositional attitudes are relations to linguistic items which obtain at least, partly by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. However, we might then learn that we have no principled basis for ascribing propositional content to who does not speak something. A lot like, does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.

The Scottish philosopher, born in Edinburgh, David Hume (1711-76 ) whose theory of knowledge starts from the distinction between perception and thought. When we see, hear, feel, etc. (In general, perceive) something we are ware of something immediately present to the mind through the senses. But we can also think and believe and reason about things which are not present to our senses at the time, e.g., objects and events in the past, the future or the present beyond our current perceptual experience. Such beliefs make it possible for us too deliberate and so act on the basis of information we have acquired about the world.

For Hume all mental activity involves the presence before the mind o some mental entity. Perception is said to differ for thought only in that the kinds of things that are present to the mind in each case are present to the mind in each case are different. In the case of perception it is an ‘impression’: In the case of thought, although what is thought about is absent, what is present to the mind is an ‘idea’ of whatever is thought about. The only difference between an impression and its corresponding idea is the greater ‘force and liveliness’ with which it ‘strikes upon the mind’.

All the things that we can think or believe or reason about is either ‘relations of ideas’ or ‘matters of fact’. Each of the former (e.g., that three times five equals half of thirty) holds necessarily: Its negation implies a contradiction, such truths are ‘discoverable by the operation of pure thought, without dependence on what is anywhere existent in the universe. Hume has no systematic theory of this kind of knowledge: What is or is not included in a given idea, and how we know whether it is, is taken as largely unproblematic. each ‘matter of fact’ is contingent: Its negation is distinctly conceivable and represents a possibility. That the sun will not rise tomorrow are no less intelligible and no more imply a contradiction than the proposition that it will rise. Thought alone is therefore, never sufficient to assure us of the truth of any matter of fact. Sense experience is needed. Only what is directly present to the senses at a given moment is known by perception. A belief in a matter of fact which is not present at the time must therefore be arrived at by a transition of some kind from present impressions to a belief in the matter of fact in question. Hume’s theory of knowledge is primarily an explanation of how that transition is in fact made. It takes the form of an empirical ‘science of human nature’ which is to be based of careful observation of what human beings do and what happens to them.

Its leading into some tangible value, which approves inversely qualifying, in that thoughts have contents carried by mental representations. Now, there are different representations, pictures, maps, models, and words ~ to name only some. Exactly what sort of representation is mental representation? Insofar as our understanding of cognizant connectionism will necessarily have implications for philosophy of mind. Two areas in particular on which it is likely to have impact are the analysis of the mind as a representational system and the analysis of intentional idioms. That is more that imagery has played an enormously important role in philosophy conceptions of the mind. The most popular view of images prior to this century has been what we might call ‘the picture theory’. According to this view, held by such diverse philosophers as Aristotle, Descartes, and Locke, mental images ~ specifically in the way they represent objects in the world. Despite its widespread acceptance, the picture theory of mental images was left largely unexplained in the traditional philosophical literature. Admittedly, most of those accepted the theory held that mental images copy or resemble what the present, but little more was said. Sensationalism, distinguishes itself as a version of representationalist by positing that mental representations are themselves linguistic expressions within a ‘language of thought’. While some sententialists conjecture that the language of thought is just the thinker’s spoken language internalized. An unarticulated, internal; language in which the computations supposedly definitive of cognition occur. Sententialism is as a natural consequence to take hold a provocative thesis.

Thought, in having contents, posse’s semantic properties, yet, that does not imply that they lack an unspoken, internal, mental language. Sententialism need not insist that the language of thought be any natural spoken language like Chinese or English. Rather it simply proses that psychological states that admit of the sort of semantic properties are likely relations to the sort of structured representations commonly found in, but not isolated to, public languages. This is certainly not to say that all psychological states in all sorts of psychological agents must be relations to mental sentences. Rather the idea is that thinking ~ at least, the kind Peter Abelard (1079-1142) exemplifies ~ involves the processing of internally complex representations. Their semantic properties are sentences to those of their parts much in the manner in which the meanings and truth conditions of complex public sentences are dependent upon the semantic features of their components. Abelard might also exploit various kinds of mental representations and associated processes. A sententialists may allow that in some of his cognitive adventures Abelard rotates mental images or recalcitrates weights on connections among internally undifferentiated networked nodes. Sententialism is simply the thesis that some kinds of cognitive phenomena are best explained by the hypothesis of a mental language. There is, then, no principled reason of non-verbal creatures precludes the language of thought.

It is tempting too sleek over the representational theory by speaking of a language thought, nonetheless, that Fodor argues that representation and the inferential manipulation of representations require a medium of representation, least of mention, in human subjects than in computers. Say, that physically realized thoughts and mental representations are ‘linguistic’, such that of (1) they are composed of parts and are syntactically structured: (2) Their simplest parts refer to or denote things and properties in the world, (3) their meanings as wholes are determined by the semantical properties of their basic parts together with the grammatical rules that have generated their overall syntactic structures, (4) they have truth-conditions, that is, putative states of affairs in the world that would make them true, and accordingly they are true or false depending on the way the world happens actually to be: (5) They bear logical relations of entailment or implication to each other. In this way, they have according to the representational theory: Human beings have systems of physical states that serve as the elements of a lexicon or vocabulary, and human beings (somehow) physically realize rules that combine strings of those elements into configuration having the plexuities of representational contents that common sense associates with the propositional altitudes. And that is why thoughts and beliefs are true or false just as English sentences are, though a ‘language of thought’ may differ sharply in its grammar from any natural language.

Thought and language, in philosophy are evidently importantly related, but how exactly are they related? Does language come first and make thought possible or vice versa? Or are they on a par, each making the other possible?

When the question is stated this generally, has nonetheless no unqualified answer is possible. In some respect’s language is prior, in other respects thought is prior. For example, it is arguable that a language is an abstract pairing of expressions and meanings, a function, in the set-theoretic sense, from expressions onto meanings. This makes sense of the fact that Esperanto is a language no one speaks, and it explains why it is that, while it is a contingent fact that, ‘snow is white’, it is a necessary truth that it means that snow is white. However, if natural languages such as French and English are abstract objects in this sense, then they exist whether or not anyone speaks them: They even exist in possible worlds in which there are no thinkers. Once, again, language, as well as such notions as meaning and truth in a language, is prior to thought.

Yet, even if languages are construed as abstract expression-meaning pairings, they are construed that way as abstractions from actual linguistic practice ~ from the use of language in communicative behaviour ~ and there remains a clear sense in which language is dependent on thought. The sequence of succession is that, ‘Naples is south of Rome’ mans among us that Naples is south of Rome. This is a contingent fact, dependent on the way we use ‘Naples’, ‘Rome’ and the other parts of that sentence. Had our linguistic practices been different, ‘Naples is south of Rome’ might have meant something entirely different or nothing at all among us. Plainly, the fact that ‘Naples is south of Rome’ means among us that Naples is south of Rome has something to do with the ‘beliefs’ and ‘intentions’ underlying our use of the words and structure that compose the sentence. More generally, it is a platitude that the semantic features that decide on or upon the mark and sounds have in population of speakers ate, at least, partly determined by the propositional altitudes, those speakers have in using those marks and sounds, or in using the parts and structure that compose them. This is the same platitude, of course, which says that meaning depends at least partly on use: For the use in question is intentional use in communicative behaviour. So here is one clear sense in which is required to imbue marks and sounds with the semantic features they have in populations of speakers.

We know that there is some relation R such that a language L is used by a population P iff L bears R to P. This relation, however, of whatever it turns out to be, the actual-language relation is to explain the semantic features expressions, least of mention, have among those who are apt to produce those expressions, and we know that any account of the relation must require language users to have certain propositional attitudes. But how exactly is the actual-language relation to be explained in terms of the propositional attitudes of language users? And what sort of dependence might those propositional attitudes in turn have on language or on the semantic features that are fixed by the actual-language relation?

Some philosophers object to intention-based semantics only because they think it precludes a dependence of thought on the communicative use of language. This is a mistake. Even if intention-based semantic definitions are given a strong reductionist reading, as saying that public-language semantic properties (i.e., those semantic properties that supervene on us in communicative behaviour) just are psychological properties. It might still be that one could not have propositional attitudes unless one had mastery of a public language. The idea of supervenience is usually thought to have originated in moral theory, in the works of such philosopher s as G.E. Moore and R.M. Hare, nonetheless, Hare, for example, claimed that ethical predicates are ‘supervenient predicates’ in the same sense that no two things (persons, acts, states of affairs) could be exactly alike in all descriptive or naturalistic respects but unlike in that some ethical predicate (‘good’, right’, etc.) truly applies to one but not to the other. That is, there could be no difference in a moral respect without a difference in some description, or non-moral respect. following Moore and Hare, from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in the sense is consistent with the irreducibility of the supervenient to their ‘subvenient’, or ‘base’, properties. ‘Dependence or supervenience of this kind does not entail reducibility through law or definition . . . ’.

Thus, three ideas have come to be closely associated with supervenience: (1) ‘Property covariation’ (if two things are indiscernible in base properties, they must be indiscernible in supervenience properties). (2) ‘Dependence’ (supervenient properties are dependent on, or determined by, their subvenient bases, and (3) ‘Non-reducibility’ (property covariation and dependence involved in supervenience cannot reducible to their base properties). Whether or not this is plausible (that is, a separate question), it would be no more logically puzzling that the idea that one could not have propositional attitudes unless one had ones with certain sorts of content, Tyler Burge’s insight, that the contents of one’s thoughts is partially determined by the meaning of one’s words on one’s linguistic community is perfectly consistent with any intention-based semantics, reduction of the semantic to the psychological. Nevertheless, there is reason to be sceptical of the intention-based semantic programme.

So the most reasonable view about the actual-language relation is that it requires language users to have certain propositional attitudes, but there is no prospect of defining the relation wholly in terms of non-semantic propositional attitudes. It is further plausible that any account of the actual-language relation, must appeal to speech acts such as speaker meaning, where the correct account of these speech acts is irreducibly semantic (they will fail to supervene on the non-semantic propositional attitudes of speakers in the way that intentions fail to supervene on an agent’s beliefs and desires). Is it possible to define the actual-language relation, and if so, will any irreducibly semantic notions enter into that definition other than the sorts of speech act notions already alluded to? These questions have not been much discussed in the literature. There are neither an established answer nor competing schools of thought. However, the actual-language relation is one of the few things in philosophy that can be defined, and that speech act notions are the only irreducibly semantic notions the definition must appeal to (Schiffer, 1993).

An substantiated dependence of thought on language seems unobtainably approachable, however, a useful point is an acclaimed dependence that propositional attitudes are relations to linguistic items which obtain, in, at least, in part, by virtue of the content those items have among language users. This position does not imply that believers have to be language users, but it does make language an essential ingredient in the concept of belief. The position is motivated by two considerations: (a) The supposition that believing is a relation to thing believed, which things have truth values and stand in logical relations to one another, and (b) the desire not to take things believed to be propositions ~ abstract, mind and language-independent objects that have essentially the truth conditions they have. As to say that (as well motivated: The relational construal of propositional attitudes is probably the best way to account for the quantification in ‘Harvey believes something irregular about you’. But there are problems with taking linguistic items, than propositions, as the objects of belief. In that, if ‘Harvey believes that irregularities are founded grounds held to, abnormality’ is represented along the lines of Harvey, and abnormal associations founded to irregularity, then one could know the truth expressed by the sentence about Harvey without knowing the content of his belief: For one could know that he stands in the belief relation to ‘irregularities are abnormal’ without knowing its content. This is unacceptable, as if Harvey believes that irregularity stems from abnormality, then what he believes ~ the reference of ‘That irregularity is abnormal’ ~ is that irregularities are abnormal. But what is this thing, which irregularities are abnormal? Well, it is abstract, in that it has no spatial locality: It is mind and language independent, in that it exists in possible world in which whose displacement is neither the thinkers nor speakers, and necessarily, it is true iff irregularly is abnormal. In short, it is a proposition ~ an abstract mind and-language thing that has a truth condition and has essentially the truth condition it has.

A more plausible way that thought depends on or upon language, which is suggested by the topical thesis that we think in a ‘language of thought’. As, perhaps, this is nothing more than the vague idea that the neural states that realize our thoughts ‘have elements and structure in a way that is analogous to the way in which sentences have elements and structure’. But we can get a more literal rendering by relating it to the abstractive conception of language already recommended. On this conception, a language is a function from ‘expressions’ ~ sequence of marks or sounds or neural states or whatever ~ onto meanings, which meanings will include the propositions our propositional-attitude relations relates us to. We could then read the language of thought hypothesis as the claim that having in a certain relation to a language whose expressions are neural states. There would mow be more than one ‘actual-language relation’. One might be called the ’public-language relation’, since it makes a language the instrument of communication of a population of speakers. Another relation might be called the ‘language-of-thought relation’ because standing in the relation to a language makes it one’s ‘Lingus mentis’. Since the abstract notion of a language has been so weakly construed, it is hard to see how the minimal language-of-thought proposal just sketched could fail to be true. At the same time, it has been given no interesting work to do. In trying to give it more interesting work, further dependencies of thought on language might come into play. For example, it has been claimed that the language of thought of a public-language user is the public language she uses: her neural sentences in something like her spoken sentences. For another example, it might be claimed that even if one’s language of thought is distinct from one’s public language, the language-of-thought relation makes presuppositions about the public-language relation in ways that make the content of one’s thoughts dependent on the meaning of one’s words in one’s public-language community.

Tyler Burge has in fact shown that there is as sense in which thought content is dependent on the meaning of words in one’s linguistic community (Burge, 1979). Alfred, for instance, uses ‘arthritis’ under the misconception that arthritis is not confined to the joints, he also applies the word to rheumatoid ailments not in the joints. Noticing an ailment in his thigh that is symptomatically like the disease in his hands and ankles, he mentions by saying to his doctor, ‘I have arthritis in the thigh’. Here Alfred is expressing his false belief that he has arthritis in the thigh. But now consider a counterfactual situation that differs in just one respect (and whatever it entails): Alfred would be expressing a true belief when he says ‘I have arthritis in the thigh’. Since the proposition he believes is true while the proposition that he has arthritis in the thigh is false, he believes some other proposition. This shows that standing in the belief relation to a proposition can be partly determined by the meaning of words in one’s public language. The Burge phenomenon seem real, but it would be nice to have a deep explanation of why thought content should be dependent on language in this way.

Finally, there is the old question of whether, or to what extent, a creature who does not understand a natural language can have thoughts. Now it seems pretty compelling that higher mammals and humans raised without language have their behaviour controlled by mental states that are sufficiently like our beliefs, desires and intentions to share those labels. It also seems easy to imagine non-communicating creatures who have sophisticated mental lives (they build weapons, dams, bridges, have clever hunting devices, etc.) At the same time, ascriptions of particular contents to non-language-using creatures typically seem exercises in loose speaking (does the dog really believe that there is a bone in the yard?) It is no accident that as a matter of fact, creatures who do not understand a natural language have at best, primitive mental lives. There is no accepted explanation of these facts. It is possible that the primitive mental lives of animals account for their failure to master natural language, but the better explanation may be Chomsky’s, that animals lack a special language faculty unique to our species. As regards the inevitable primitive mental life of an otherwise language, this might simply be due to the ignorance and lack of intellectual stimulation such a person would be doomed to. As such, it might require a neural language with a structure comparable to that of a natural language, and that such neural languages are somewhat acquire, as the child learns its native language. Finally, the ascription of content to the propositional attitudes states of language creatures is a difficult topic that needs more attention. It is possible that we as we learn more about the logic of our ascriptions of propositional content, we will realize that these ascriptions are egocentrically based on a similarity to the language in which we express our beliefs. We might then learn that we have no principled basis for ascribing propositional content to a creature who does not speak languages, or who does not have internal states with natural-language-like structure. It is somewhat surprising how little we know about thought’s dependence on language.

All of this suggests a specific ‘mental organ’, to use Chomsky’s phrase, that has evolved in the human cognitive system specifically in order to make language possible. The specific structure of this organ simultaneously constrains the range of possible human languages and guides the learning of the child’s target language, later, making rapid on-line language processing possible. The principles represented in this organ constitute the innate linguistic knowledge of the human being. Additional evidence for the early operation of such an innate language acquisition module is derived from the many infant studies that show that infants selectively attend to sound-streams that are prosodically appropriate that have pauses at clausal boundaries, and that contain linguistically permissible phonological sequences.

A particularly strong form of the innateness hypothesis in the psycholinguistic domain is Fodor’s (1975, 1987), ‘Language of Thought’ hypothesis. Fodor argues not only that the language learning and processing faculty is innate, but that the human representational system exploits an innate language of thought which has all of the expressive power of any learnable human language. Hence, he argues, all concepts are in fact innate, in virtue of the representational power of the language of thought. This remarkable doctrine is hence even stronger than classical rationalist doctrine of innate ideas: Whereas, Chomsky echoes Descartes in arguing that the most general concepts required for language learning are innate, while allowing that more specific concepts are acquired, Fodor echoes Plato in arguing that every concept we ever ‘learn’ is in fact innate.

Fodor defends this view by arguing that the process of language learning is a process of hypothesis formation and testing, where among the hypotheses that must be formulated are meaning postulates for each term in the language being acquired. But in order to formulate and test a hypothesis of the form ‘χ’ means ‘y’, where ‘χ’ denotes a term in the target language, prior to the acquisition of that language, the language learner. Fodor argues, must have the resources necessary to express ‘y’. Therefore, there must be, in the language of thought, a predicate available co-extensive with each predicate in any language that a human can learn. Fodor also argues for the language of thought thesis by noting that the language in which the human information cannot be a human spoken language, since that would, contrary to fact, privilege one of the world’s languages as the most easily acquired. Moreover, it cannot be, he argues, that each of us thinks in our own native language since that would (a) predict that we could not think prior to acquiring a language, contrary to the original argument, and (b) would mean that psychology would be radically different for speakers of different languages. Hence, Fodor argues, there must be a non-conventional language of thought, and the facts that the mind is ‘wired’ in mastery of its predicates together with its expressive completeness entail that all concepts are innate.

The dissertating disputation about whether there are innate qualities that infer on or upon the innate values whereby ideas are much older than previously imagined. Plato in the ‘Meno’ (the learning paradox), famously argues that all of our knowledge is innate. Descartes (1596-1650) and Leibniz (1646-1716) defended the view that the mind contains innate ideas: Berkeley (1685-1753), Hume (1711-76) and Locke (1632-1704) attacked it. In fact, as we now conceive the great debate between European Rationalism and British empiricism in the seventeenth and eighteenth centuries, the doctrine of innate ideas is a central effectuality of contention: Rationalists typically claim that knowledge is impossible without a significant stock of general innate ‘concepts’ or judgements, empiricists argued that all ideas are acquired from experience. This debate is replayed with more empirical content and with considerably greater conceptual complexities in contemporary cognitive science, most particularly within the domain of psycholinguistic theory and cognitive developmental theory. Although Chomsky is recognized as one of the main forces in the overthrow of behaviourism and in the initiation of the ‘cognitive era’. His relation between psycholinguistics and cognitive psychology has always been an uneasy one. The term ‘psycholinguistics’ is often taken to refer primarily to psychological work on language that is influenced by ideas from linguistic theory. Mainstream cognitive psychologists, for example when they write textbooks, oftentimes prefer the term ‘psychology of language’ the difference is not, however, merely in a name, least be of mention, that both Fodor and Chomsky, who argue that all concepts, or all of linguistic knowledge is innate, lend themselves to this interpretation, against empiricists who argue that there is no innate appeal in explaining the acquisition of language or the facts of cognitive development. But this debate would be a silly and a sterile for obvious reasons, something is innate. Brains are innate, and the structure of the brain must constrain the nature of cognitive and linguistic development to dome degree. Equally obviously, something is learned and is learned as opposed too merely grown as limbs or hair grow. For not all of the world’s citizens end up speaking English, or knowing the Special Theory of Relativity. The interesting questions then all concern exactly what is innate, to what degree it counts as knowledge, and what is learned, and what degree its content and structure are determined by innately specified cognitive structures. And that is a great deal to debate about.

Innatist argue that the very presence of linguistic universals argue for the innateness of linguistic knowledge, but more importantly and more compelling that the fact that these universals are, from the standpoint of communicative efficiency, or from the standpoint of any plausible simplicity criterion, adventitious. There are many conceivable grammars, and those determined by universal grammar are not ipso facto the most efficient or the simplest. Nonetheless, all human language satisfy the constraints of universal grammar. Since neither the communicative environment nor the commutative task can explain this phenomenon. It is reasonable to suppose that it is explained by the structure of the mind ~ and, therefore, by fact that the principles of universal grammar lie innate in the mind and constrain the language that a human can acquire.

Linguistic empiricists, answer that there are alternative possible explanations of the existence of such adventitious universal properties of human languages. For one thing, such universals could be explained, Putnam (1975, 1992) argues, by appeal to a common ancestral language, and the inheritance of features of that language by its descendants. Or it might turn out that despite the lack of direct evidence at present the features of universal grammar in fact do serve either the goals of communicative efficacy or simplicity according to a metric of psychological importance. Finally, empiricist point out, he very existence of universal grammar might be a trivial logical artefact (Quine, 1968): for one thing, any finite set of structures will have some feature s in common. Since there are a finite number of languages, it follows trivially that there are features they all share. Moreover, it is argued, many features of universal grammar are interdependent. So in fact the set of functional principles shared by the world’s languages may be rather small. Hence, even if these are innately determined, the amount of innate knowledge thereby required may be quite small as compared with the total corpus of general linguistic knowledge acquired by the first language learner.

These replies are rendered less plausible, innatists argue, when one considers the fact that the error’s language learners make in acquiring their first language seem to be driven far more by abstract features of grammar than by any available input data. So, despite receiving correct examples of irregular plurals or past tense forms for verbs, and despite having correctly formed the irregular forms for those words, children will often incorrectly regularize irregular verbs once acquiring mastery of the rule governing regulars in their language. And in general, not only the correct inductions of linguistic rules by young language learners, but what is more important, given the absence of confirmatory data and the presence of refuting data, children’s erroneous inductions are always consistent with universal grammar, often simply representing the incorrect setting of a parameter in the grammar. More generally, innatists argue, that all grammatical rules that have ever been observed satisfy the structure-dependence constraint. That is, many linguists and psycholinguists argue that all known grammatical rules of all the world’s languages, including the fragmentary languages of young children must be stated as rules governing hierarchical sentence structures, and not governing, say, sequence of words. Many of these, such as the constituent-command constraint governing anaphor, are highly abstract indeed, and appear to be respected by even very young children (Solan, 1983 & Crain, 1991). Such constraints may, innatists argue, be necessary conditions of learning natural language I the absence of specific instruction, modelling and correction conditions in which all first language learning acquire their native languages.

An important empiricist answer for these observations derives from recent studies of ‘connectionist’ models of the first language acquisition (Rummelhart & McClelland, 1986, 1987). Connectionist systems, not previously trained to represent any sunset of universal grammar that induce grammar which include a large set of regular forms and a few irregulars also tend to over-regularize, exhibiting the same U-shape learning curve seen in human language acquirers. It is also noteworthy that conceptionist learning systems that induce grammatical systems acquire ‘accidentally’ rules on which they are not explicitly trained, but which are consistent with those upon which they are trained, suggesting that s children acquire position of their grammar, they may accidentally ‘learn’ other consistent rules, which may be correct in other human language, but which then must be ‘unlearned’ in their home language. Yet, such ‘empiricist’ language acquisition systems have yet to demonstrate their ability to induce a sufficiently wide range of the rules hypothesized to be comprised by universal grammar to constitute a definite empirical argument for the possibility of natural language acquisition in the absence of a powerful set of innate constraints.

The poverty of the stimulus argument has been of enormous influence in innateness debates, though its soundness is hotly contested. Chomsky notes that (1) the examples of the target language to which the language learner is exposed are always jointly compatible with an infinite number of alternative grammars, and so vastly undermine the grammar, of the language, and (2) the corpus always contains many examples of ungrammatical sentences, which should in fact, serve as falsifiers of any empirically induced correct grammar of the language, also (3) there is, in general, no explicit reinforcement of correct utterances or correction of incorrect utterances, either by the learner or by those in the immediate training environment. Therefore, he argues, since it is impossible to explain the learning of the correct grammar ~ a task accomplished by all normal children within a very few years ~ on the basis of any available data or known learning algorithms, it must be that the grammar is innately specified, and is merely ‘triggered’ by relevant environmental cues.

Opponents of the linguistic innateness hypothesis, however, point out that the circumstance that Chomsky notes in this argument is hardly specific to language. As well known from arguments due to Hume (1978). Wittgenstein (1953), Goodman (1972) and Kripke (1982), in all cases of empirical abduction, and of training in the use of a word, data under-determine theories. This moral is emphasized by Quine (1954, 1960) as the principle of the undertermination of theory by data. But we, nonetheless, do abduce adequate theories in science, and we do lean the meaning of words. And it would be bizarre to suggest that all correct scientific theories or the facts of lexical semantics are innate.

But, innatists reply, that when the empiricist relies on the underdetermination of theory by data as a counterexample, a significant disanalogousness with language acquisition is ignored: The abduction of scientific theories is a difficult, labourious process, taking a sophisticated theorist a great deal of time and deliberate effort. First language acquisition, by contrast, is accomplished effortlessly and very quickly by a small child. The enormous relative ease with which such a complex and abstractive domain is mastered by such a naïve ‘theorist’ is evidence for the innateness of the knowledge achieved.

Empiricists such as Putnam (1926- ) have rejoined that innateness under-estimate the amount of time that language learning actually takes, focussing only on the number of years from the apparent onset of acquisition to the achievement of relative mastery over the grammar. Instead of noting how short this interval, they argue, one should count the total number of hours spent listening to language and speaking during this time. That number is in fact, quite large, and is comparable to the number of hours of study and practice required in the acquisition of skills that are not argued to derive from innate structures, such as chess playing or musical composition, hence, they argue once the correct temporal parameters are taken into consideration, language learning looks like one more case of human skill acquisition than like a special unfolding of innate knowledge.

Innatists, however, note that while the ease with which most such skills are acquired depends on general intelligence, language, is learned with roughly equal speed, and too roughly the same level of general syntactic mastery regardless of general intelligence. In fact, even significantly retarded individuals, assuming no special language deficit, acquire their native language on a time-scale and to a degree comparable to that of normally intelligent children. The language acquisition faculty hence, appears to allow access to a sophisticated body of knowledge independent of the sophistication of the general knowledge of the language learner. This is, language learning and utilization mechanisms are not outside of language processing. They are informationally encapsulated ~ only linguistic information is relevant to language acquisition and processing. They are mandatory ~ language learning and language processing are automatic. Moreover, language is subserved by specific dedicated neural structures, damage to which predictably and systematically impairs linguistic functioning, and not general cognitive functioning.

Again, the issues at stake in the debate concerning the innateness of such general concepts pertaining to the physical world cannot be s stark a dispute between an innate and one according to which all empirical knowledge is innate. Rather the important ~ and again, always empirical questions concern just what is innate, and just ‘what’ is acquired, and how innate equipment interacts with the world to produce experience. ‘There can be no doubt that all our knowledge begins with experience . . . experience it does not follow that all arises out of experience’.

Philosophically, the unconscious mind postulated by psychoanalysis is controversial, since it requires thinking in terms of a partitioned mind and applying a mental vocabulary (intentions, desires, repression) to a part to which we have no conscious access. The problem is whether this merely uses a harmless spatial metaphor of the mind, or whether it involves a philosophical misunderstanding of mental ascription. Other philosophical reservations about psychoanalysis concern the apparently arbitrary and unfalsifiable nature on the interpretative schemes employed. Basically, least of mention, the method of psychoanalysis or psychoanalytic therapy for psychological disorders was pioneered by Sigmund Freud (1856-1939), the method relies on or upon an interpretation of what a patient says while ‘freely associating’ or reporting what comes to mind in connection with topics suggested by the analyst. The interpretation proceeds according to the scheme favoured by the analyst, and reveals ideas dominating the unconscious, but previously inadmissible to the conscious mind of the subject. When these are confronted, improvement can be expected. The widespread practice of psychoanalysis is not matched by established data on such rate of improvement.

Nonetheless, the task of analysing psychoanalytic explanation is complicated is initially in several ways. One concerns the relation of theory to practice. There are various perspectives on the relation of psychoanalysis, the therapeutic practice, to the theoretical apparatus built around it, and these lead to different views of psychoanalysis’ claim to cognitive status. The second concerns psychoanalysis’ legitimation. The way that psychoanalytic explanation is understood has immediate implications for one’s view of its truth or acceptability, and this of course a notoriously controversial matter. The third is exegetical. Any philosophical; account of psychoanalysis must of course start with Freud himself, but it will inevitably privilege some strands of his thought at the expense of others, and in so doing favour particular post-Freudian developments over others.

Freud clearly regarded psychoanalysis as engaged principally in the task of explanation, and held fast to his claims for its truth in the course of alterations in his view of the efficacy of psychoanalysis’ advocates have, under pressure, retreated to the view that psychoanalytic theory has merely instrumental value, as facilitating psychoanalytic therapy: But this is not the natural view, which is that explanation is the autonomous goal of psychoanalysis, and that its propositions are truth-evaluable. Accordingly, it seems that preference should be given to whatever reconstruction of psychoanalytic theory does most to advance its claim to truth. Within, of course, exegetical constraints (what a reconstruction offers must be visibly present in Freud’s writings.)

Viewed in these terms, psychoanalytic explanation is an ‘extension’ of ordinary psychology, one that is warranted by demands for explanation generated from within ordinary psychology itself. This has several crucial ramifications. It eliminates, as ill-conceived, the question of psychoanalysis’ scientific status ~ an issue much discussed, as proponents of different philosophies of science have argued for and against psychoanalysis’ agreement with the canons of scientific method, and its degree or lack of correspondence. Demands that psychoanalytic explanation should be demonstrated to receive inductive support, commit itself to testable psychological laws, and contribute effectively to the prediction of action, have then no more pertinence than the same demands pressed on ordinary psychology ~ which is not very great. When the conditions for legitimacy are appropriately scaled down. It is extremely likely that psychoanalysis succeeds in meeting hem: For psychoanalysis does deepen our understanding of psychological laws, improve the predictability of action in principle, and receive inductive support on the special sense which is appropriate to interpretative practices.

Furthermore, to the extent that psychoanalysis may be seen as structured by and serving well-defined needs for explanation, there is proportionately diminished reason for thinking that its legitimation turns on the analysand’s assent to psychoanalytic interpretation, or the transformative power (whatever it may be) of these. Certainly it is true that psychoanalytic explanation has a reflective dimension lacked by explanations in the physical sciences: Psychoanalysis understands its object, the mind, in the very terms that the mind employs in its unconscious workings (such as its belief in its own omnipotence). But this point does not in any way count against the objectivity of psychoanalytic explanation. It does not imply that what it is for a psychoanalytic explanation to be true should be identified, pragmatically, with the fact that an interpretation may, for the analysand who gains self-knowledge, have the function of translating their directed-causes to set about unconscious mentality into a proper conceptual form. Nor does it imply that psychoanalysis’ attribution of unconscious content needs to be understood in anything less than full-bloodedly realistic terms. =truth in psychoanalysis may be taken to consist in correspondence with an independent mental reality, a reality that is both endorsed with ‘subjectivity’ and in many respects puzzling to its owner.

In the twentieth century, the last major, self-consciously naturalistic school of philosophy was American ‘pragmatism’, exemplified particularly in the works of John Dewey (1859-1952). The pragmatists replaced traditional metaphysics and epistemology with the theories and methods of the sciences, and grounded their view of human life in Darwin’s biology. Following the Second World War, pragmatism was eclipsed by logical positivism and what might be called ‘scientific’ positivism, which took the methods of science as the defining characteristic of all genuine knowledge. Ernst Mach, frequently regarded as a forerunner of logical positivism, argued in his book The Conservation of Energy that only the objects of sense experience have any role in science: The task of physics is ‘the discovery of the laws of the connection of sensations (perceptions)’, and ‘the intuition of space is bound up with the organization of the senses . . . (so that) we are not justified in ascribing spatial properties to things which are not perceived by the senses’. Thus, for Mach, our knowledge of the physical world is derived entirely from sense experience, and the content of science is entirely characterized by the relationships among the data of our experience.

Nevertheless, pragmatism is a going concern in the philosophy of science. It is often aligned with the view that scientific theories are not true or false, but are better or worse instruments for prediction and control. Charles Peirce (1839-1914) identifies truth itself with a kind of instrumentality: A true belief is the very best we could do by way of accounting for the experiences we have, predicting the future course of experience, and so on.

Peirce called the sort of inference which concludes that all A’s are B’s because there are no known instances to the contrary ‘crude induction’. It assumes that future experience will not be ‘utterly at variance’ with past experience. This is, Peirce says, the only kind of induction in which we are able to infer the truth of a universal generalization. Its flaw is that ‘it is liable at any moment to be utterly shattered by a single experience’, which is to say that warranted belief is possible only at the observational level. Induction tells us what theories are empirically successful, and thereby what explanations are successful. But the success of an explanation cannot, for historical reasons, be taken as an indicator of its truth.

The thesis that the goal of inquiry is permanently settled belief, and the thesis that the scientific attitude is a disinterested desire for truth, are united by Peirce’s definition of ‘true’. He does not think it false to say that truth is correspondence to reality, but shallow ~ a merely nominal definition, giving no insight into the concept. His pragmatic definition identifies truth with the hypothetical ideal which would be the final outcome of scientific inquiry were it to continue indefinitely: ‘Truth is that concordance of . . . [a] statement . . . [with] belief’; ‘any truth more perfect than this destined conclusion, any reality more absolute than what is thought in it, is a fiction of metaphysics’. These remarks reveal something both of the subtlety and of the potential for tension within Peirce’s philosophy. His account of reality aims at a delicate compromise between the undesirable extremes of transcendentalism and idealism, his account of truth at a delicate compromise between the twin desiderata of objectivity and (in-principle) accessibility.

The question of what is and what is not philosophy is not simply a question of classification. In philosophy, the concepts with which we approach the world themselves become the topic of enquiry. A philosophy of a discipline such as history, physics, or law seeks not so much to solve historical, physical, or legal questions as to study the concepts that structure such thinking, and to lay bare their foundations and presuppositions. In this sense philosophy is what happens when a practice becomes self-conscious. The borderline between such ‘second-order’ reflection and ways of practising the first-order discipline itself is not always clear: Philosophical problems may be tamed by the advance of a discipline, and the conduct of a discipline may be swayed by philosophical reflection. But such a description neglects the fact that self-consciousness and reflection co-exist with activity. At different times there has been more or less optimism about the possibility of a pure or ‘first’ philosophy, a stand-point from which other intellectual practices can be impartially assessed and subjected to logical evaluation and correction. The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much ‘positivist’ philosophy of science, few philosophers now subscribe to it. The contemporary spirit of the subject is hostile to any such possibility, and prefers to see philosophical reflection as continuous with the best practice of the first-order disciplines themselves.

Nonetheless, the last two decades have been a period of extraordinary change in psychology. Cognitive psychology, which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level visual processing, has become a ~ perhaps the ~ dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour. Largely as a result of this paradigm shift, the level of interaction between the disciplines of philosophy and psychology has increased dramatically.

One of the central goals of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as that of biological function.

Typically, a functional explanation in biology says that an organ ‘χ’ is present in an animal because ‘χ’ has function ‘F’. What does that mean?

Some philosophers maintain that an activity of an organ counts as a function only if the ancestors of the organ’s owner were naturally selected partly because they had similar organs that performed the same activity. Thus, the historical-causal property of having conferred a selective advantage is not just evidence that ‘F’ is a function; it is constitutive of F’s being a function.

If this reductive analysis is right, a functional explanation turns out to be a sketchy causal explanation of the origin of ‘χ’. This makes the explanation scientifically respectable. The ‘because’ indicates a weak relation of partial causal contribution.

However, this construal is not intuitively satisfying. To say that ‘χ’ is present because it has a function is normally taken to mean, roughly, that ‘χ’ is present because it is supposed to do something useful. Yet this normal interpretation immediately makes the explanation scientifically problematic, because the claim that ‘χ’ is supposed to do something useful appears to be normative and non-objective.

The philosophy of physics is another area in which studies of this sort have been actively pursued. In undertaking this work, philosophers need not and do not assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using ~ accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

In most accounts of intentionality the paradigm cases discussed are beliefs, or sometimes beliefs and desires. Intentional states represent, in a special sense of that word, their conditions of satisfaction: Each has both a propositional content and a psychological mode, and the psychological mode determines the direction of fit with which the intentional state represents its conditions of satisfaction. These considerations apply even to those intentional states with propositional content which do not themselves have a mind-to-world or world-to-mind direction of fit: All of these contain beliefs and desires, and the component beliefs and desires do have a direction of fit.

Once again, in discussions of intentionality the paradigm cases are usually beliefs, or sometimes beliefs and desires. However, the biologically most basic forms of intentionality are in perception and intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perception. Suppose I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there is a hand in front of my face. Thus far the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: In order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as ‘causally self-referential’. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face cause the very experience of whose conditions of satisfaction it forms a part. We can represent this in our canonical form as:

Visual experience (that there is a hand in front of my face, and the fact that there is a hand in front of my face is causing this very experience).

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep, but one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because visual experiences are forms of consciousness.

Event memory is a kind of halfway house between the perceptual experience and the belief. Memory, like perceptual experience, has the causally self-referential feature: Unless the memory is caused by the event of which it is the memory, it is not a case of satisfied memory. But unlike the visual experience, it need not be conscious; one can be said to remember something while sound asleep. Beliefs, memory and perception all have the mind-to-world direction of fit, and memory and perception have the world-to-mind direction of causation.

Increasingly, proponents of the intentional theory of perception argue that perceptual experience is to be differentiated from belief not only in terms of attitude, but also in terms of the kind of content the experience is an attitude toward. To ascribe contents of a certain class of content-involving states is for the attribution of these states to make the subject as rationally intelligible as possible in the circumstances. In one form or another, this idea is found in the writings of Davidson (1917-2003), who introduced the position known as ‘anomalous monism’ in the philosophy of mind, instigating a vigorous debate over the relation between mental and physical descriptions of persons, and the possibility of genuine explanation of events in terms of psychological properties. Although Davidson is a defender of the doctrines of the ‘indeterminacy of radical translation’ and the ‘inscrutability of reference’, his approach has seemed to many to offer some hope of identifying meaning as a respectable notion, even within a broadly ‘extensional’ approach to language. Davidson is also known for his rejection of the idea of a ‘conceptual scheme’, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops so does the coherence of the idea that there is anything to translate.

Intentional action has interesting symmetries and asymmetries with perception. Like perceptual experiences, the experiential component of intentional action is causally self-referential. If, for example, I am now walking to my car, then the condition of satisfaction of the present experience of acting is that there be certain bodily movements, and that this very experience of acting cause those bodily movements. What is more, like perceptual experience, the experience of acting is typically a conscious mental event. However, unlike perception and memory, the direction of fit of the experience of acting is world-to-mind: My intention will only be fully carried out if the world changes so as to match the content of the intention (hence the world-to-mind direction of fit), and the intention will only be fully satisfied if the intention itself causes the rest of its conditions of satisfaction (hence the mind-to-world direction of causation).

Again, proponents of the intentional theory of perception argue that perceptual experience is to be differentiated from belief not only in terms of attitude, but also in terms of the kind of content the experience is an attitude toward. Ascribing such contents helps us better understand a person’s reasons, the array of emotions and sensations to which he is subject, what he remembers and what he forgets, and how he reasons beyond the confines of minimal rationality. Content-involving perceptual states play a fundamental role in individuating content, and this role cannot be understood purely in terms relating to minimal rationality: A perception of the world as being a certain way is not, and could not be, under a subject’s rational control. Though it is true that perceptions give reasons for forming beliefs, the beliefs for which they fundamentally provide reasons ~ observational beliefs about the environment ~ have contents which can only be elucidated by referring back to perceptual experience itself. In this respect (as in others), perceptual states differ from those beliefs and desires that are individuated by mentioning what they provide reasons for judging or doing: For frequently these latter judgements and actions can be individuated without reference back to the states that provide reasons for them.

We are acutely aware of the effects of our own memory, its successes and its failures, so that we have the impression that we know something about how it operates. But with memory, as with most mental functions, what we are aware of is the outcome of its operation and not the operation itself. To our introspections, the essence of memory is language-based and intentional. When we appear as a witness in court, the truth as we are seen to report it is what we say about what we intentionally retrieve. This is, however, a very restricted view of memory, albeit one with a distinguished history. William James (1842-1910) was an American psychologist and philosopher whose own emotional needs gave him an abiding interest in problems of religion, freedom, and ethics; the popularity of these themes and his lucid and accessible style made James the most influential American philosopher of the beginning of the 20th century. James said that ‘Memory proper is the knowledge of a former state of mind after it has already once dropped from consciousness, or rather it is the knowledge of an event, or fact, of which meantime we have not been thinking, with the additional consciousness that we have thought or experienced it before’.

One clue to the underlying structure of our memory system might be its evolutionary history. We have no reason to suppose that a special memory system evolved recently, or to consider linguistic aspects of memory and intentional recall as primary. Instead, we might assume that such features are later additions to a much more primitive filing system. From this perspective one would view memory as having the primary function of enabling us (the organism as a whole, that is, not the conscious self) to interpret the perceptual world and helping us to organize our responses to changes that take place in the world.

Central to the content of memory is the capacity to remember: to (1) recall past experiences, and (2) retain knowledge that was acquired in the past. It would be a mistake to omit (1), for not every instance of remembering something is an instance of retaining knowledge. Suppose that as a young child you saw the Sky Dome in Toronto, but you did not know at the time which building it was. Later you learn what the Sky Dome is, and you remember having seen it when you were a child. This is an example of obtaining knowledge of a past fact by recalling a past experience, but not an example of retaining knowledge, because at the time you saw it you did not know what you were seeing, since you did not know what the Sky Dome was. Furthermore, it would be a mistake to omit (2), for not every instance of remembering something is an instance of recalling the past, let alone a past experience. For example, by remembering my telephone number I retain knowledge without recalling any past experience, and by remembering the date of the next election I retain knowledge of a future fact.

According to Aristotle (De Memoria), memory cannot exist without imagery: We remember past experiences by recalling images that represent them. This theory ~ the representative theory of memory ~ was also held by David Hume and Bertrand Russell (1921). It is subject to three objections, the first of which was recognized by Aristotle himself: If what I remember belongs to the past, how can what I remember be an image now present to my mind? According to the second objection, we cannot tell the difference between images that represent actual memories and those that are mere figments of the imagination. Hume suggested two criteria to distinguish between these two kinds of images, vivacity and orderliness, and Russell a third, an accompanying feeling of familiarity. Critics of the representative theory would argue that these criteria are not good enough: They do not allow us to distinguish reliably between true memories and mere imagination. This objection is not decisive, as it only calls for a refinement of the proposed criteria. Nevertheless, the representative theory succumbs to the third objection, which is fatal: Remembering something does not require an image. In remembering their dates of birth or telephone numbers, people do not, at least not normally, have an image of anything. In developing an account of memory, we must therefore proceed without making images an essential ingredient. One way of accomplishing this is to take the thing that is remembered to be a proposition, the content of which may be about the past, present, or future. Doing so provides us with an answer to the problem pointed out by Aristotle: If the proposition we remember is a truth about the past, then we remember the past by virtue of having a cognition of something present ~ the proposition that is remembered.

What, then, are the necessary and sufficient conditions of remembering a proposition, of remembering that ‘p’? To begin with, believing that ‘p’ is not a necessary condition, for at a given moment ‘t’ I may not be aware of the fact that I still remember that ‘p’ and thus do not believe that ‘p’ at ‘t’. It is also possible that I remember that ‘p’ but, perhaps because I gullibly trust another person’s judgement, unreasonably disbelieve that ‘p’. It will, however, be helpful to focus on the narrower question: Under which conditions is S’s belief that ‘p’ an instance of remembering that ‘p’? It is such an instance only if ‘S’ either (1) previously came to know that ‘p’, or (2) had an experience that put ‘S’ in a position subsequently to come to know that ‘p’. Call this the ‘original input condition’. Suppose that, having learned in the past that 12 x 12 = 144 but subsequently having forgotten it, I now come to know again that 12 x 12 = 144 by using a pocket calculator. Here the original input condition is fulfilled, but obviously this is not an example of remembering that 12 x 12 = 144. Thus, a further condition is necessary: For S’s belief that ‘p’ to be a case of remembering that ‘p’, the belief must be connected in the right way with the original input. Call this the ‘connection condition’. According to Carl Ginet (1988), the connection must be ‘epistemic’: At any time since the original input at which ‘S’ acquired evidence sufficient for knowing that ‘p’, ‘S’ already knew that ‘p’. Critics would dispute that a purely epistemic account of the connection condition will suffice. They would insist that the connection be causal: For ‘S’ to remember that ‘p’, there must be an uninterrupted causal chain connecting the original input with the present belief.

Not every case of remembering that ‘p’ is one of knowing that ‘p’: Although I remember that ‘p’, I might not believe that ‘p’, and I might not be justified in believing that ‘p’, for I might have information that undermines or casts doubt on ‘p’. When, however, do we know something by remembering it? What are the necessary and sufficient conditions of knowing that ‘p’ on the basis of memory? Applying the traditional conception of knowledge, we may say that ‘S’ knows that ‘p’ on the basis of memory just in case (1) ‘S’ clearly and distinctly remembers that ‘p’, (2) ‘S’ believes that ‘p’, and (3) ‘S’ is justified in believing that ‘p’. (Since (1) entails that ‘p’ is true, adding a condition requiring p’s truth is not necessary.) Whether this account of memory knowledge is correct, and how it is to be fleshed out in detail, are questions which concern the nature of knowledge and epistemic justification in general, and thus will give rise to much controversy.

Memory knowledge is possible only if memory is a source of justification. Common sense assumes it is: We naturally take it that, unless there are specific reasons for doubt, we do remember what we seem to remember, unless this is undermined or even contradicted by our background beliefs. Thus, we trust that we have knowledge of the past. Sceptics, however, would argue that this trust is ill-founded. According to a famous argument by Bertrand Russell (1921), it is logically possible that the world sprang into existence five minutes ago, complete with our memories and with evidence, such as fossils and petrified trees, suggesting a past of millions of years. If so, then there is no logical guarantee that we actually do remember what we seem to remember. Consequently, so the sceptics would argue, there is no reason to trust memory. Some philosophers have replied to this line of reasoning by trying to establish that memory is necessarily reliable: That it is logically impossible for the majority of our memory beliefs to be false. Alternatively, our commonsense view may be defended by pointing out that the sceptical conclusion ~ that it is unreasonable to trust memory ~ does not follow from the premise that memory fails to provide us with a guarantee that what we seem to remember is true. For the argument to be valid, it would have to be supplemented with a further premise: For a belief to be justified, its justifying reason must guarantee its truth. Many contemporary epistemologists would dismiss this premise as unreasonably strict. One of the chief reasons for resisting it is that accepting it is hardly more reasonable than our trust in particular, clear and vivid deliverances of memory; on the contrary, accepting these as true appears less error-prone than accepting an abstract philosophical principle which implies that our acceptance of such deliverances is unjustified.

This distinction between forms of memory is a crude one, since terms such as ‘conscious’ and ‘explicit’ admit of degrees and remain somewhat cloudy in their implications. According to Schacter, McAndrews and Moscovitch (1988), amnesia is an inability to remember recent experiences (even from the very recent past) and to learn various, but limited, types of new information, resulting from selective brain damage that leaves perceptual, linguistic and intellectual skills intact. Memory deficits have traditionally been studied using techniques designed to elicit explicit memories: Amnesic people, for example, might be asked to think back to a learning episode and either recall information from that interval of their lives or say whether a presented item had previously been encountered in the learning episode. Yet the very same people who perform poorly on such explicit tasks can nonetheless show preserved learning. The acquisition of skills is a case in point, and there is considerable experimental evidence of amnesic improvement over a series of learning episodes. A striking example is a densely amnesic patient who learned how to use a personal computer over numerous sessions, despite declaring at the beginning of each session that he had never used a computer before. In addition to this sort of capacity to learn over a succession of episodes, amnesics have performed well after single, short-lived episodes (such as completing previously shown words when given three-letter cues). Just as amnesic people reveal the difference between conscious and non-conscious memory, similar dissociations can be observed in normal subjects, as when performance on indirect tasks reveals the effects of prior events that are not remembered.

Basically, memory has the function of enabling us to interpret the perceptual world and of helping us to organize our responses to the challenges of change that take place in the world. For both functions we have to accumulate experiences in a memory system in such a way as to enable productive use of that experience at the appropriate times. Memory, then, can be seen as the repository of experience. Of course, beyond a certain age, we are able to use our memories in different ways, both to store information and to retrieve it. Language is vital in this respect, and it might be argued that much of socialization and the whole of schooling are devoted to just such an extension of an evolutionarily (relatively) straightforward system. It will follow that most of the operation of our memory system is preconscious. That is to say, consciousness only has access to the products of the memory processes and not to the processes themselves. The aspects of memory that we are conscious of can be seen as the final stage in a complex and hidden set of operations.

How should we think about the structure of memory? The dominant metaphor is that of association. Words, ideas and emotions are seen as being linked together in an endless, shapeless entanglement. That is the way our memory can appear to us if we attempt to reflect on it directly. However, it would be a mistake to dwell too much on the products of consciousness and imagine that they represent the inner structure. For a cognitive psychologist interested in natural memory phenomena there were a number of reasons for being deeply dissatisfied with theories based on associative networks. One ubiquitous class of memory failure seemed particularly troublesome: The experience of being able to recall a great deal about a person other than their name. ‘I know the face, but I just can’t place the name’; yet if someone else produced the name, we would perhaps then be able to retrieve whatever further information was needed.

How might various theories of memory account for this phenomenon? First we can take an associative network approach. In an idealized associative network, concepts, such as the concept of a person, are represented as nodes, with associated nodes being connected through links. Generally speaking, the links define the nature of the relationship between nodes, e.g., the subject-predicate distinction. Suppose that the name of the person we are trying to recall is Bill Smith. We would have a Bill Smith node (or a node corresponding to Bill Smith), with all the available information concerning Bill Smith being linked to it, including some propositional representation of Bill Smith’s name. Now, failure to retrieve Bill Smith’s name, while at the same time recalling other facts about Bill Smith, would have to be due to an inability to traverse the links to the node for the name. However, this seems to contradict content addressability. That is to say, given that any one constituent of a propositional representation can be accessed, the propositional node, and consequently all the other nodes linked to it, should also be accessible. Thus, if we are able to recall where Bill Smith lives, where he works, and whom he is married to, then we should, in principle, be able to access the node representing his name. To account for the inability to do so, some sort of temporary ‘blocking’ of content addressability would seem to be needed. Alternatively, directionality of links would have to be specified, though this would have to be done on some principled basis.
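To make the content-addressability worry concrete, here is a minimal sketch, in Python, of a toy associative network of the kind just described. It is purely illustrative: the node labels, the block method used to model the temporary ‘blocking’ of a link, and the breadth-first traversal are assumptions introduced for the example, not part of any published associative-network theory.

```python
# Toy associative network: nodes joined by bidirectional links.
# Illustrative assumptions only: node labels, the 'block' device and the
# breadth-first traversal are not drawn from any published model.
from collections import defaultdict, deque

class AssociativeNetwork:
    def __init__(self):
        self.links = defaultdict(set)   # node -> set of neighbouring nodes
        self.blocked = set()            # temporarily inaccessible links (ordered pairs)

    def associate(self, a, b):
        """Create a bidirectional link between two nodes."""
        self.links[a].add(b)
        self.links[b].add(a)

    def block(self, a, b):
        """Temporarily block traversal of the link between a and b."""
        self.blocked.update({(a, b), (b, a)})

    def reachable(self, start):
        """All nodes reachable from 'start' via unblocked links
        (content addressability: any constituent gives access to the rest)."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in self.links[node]:
                if (node, nxt) not in self.blocked and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

net = AssociativeNetwork()
for fact in ["name: Bill Smith", "lives in Leeds", "works at the bank", "married to Mary"]:
    net.associate("person-node", fact)

# Without blocking, accessing any fact about the person gives access to the name.
assert "name: Bill Smith" in net.reachable("lives in Leeds")

# A temporary block on the name link models 'I know the face but not the name'.
net.block("person-node", "name: Bill Smith")
assert "name: Bill Smith" not in net.reachable("lives in Leeds")
```

Run as written, the final assertion only succeeds because the link to the name node has been explicitly blocked; without some such ad hoc device, the network predicts that recalling any fact about the person should give access to the name as well, which is precisely the difficulty noted above.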

Next, we can consider schema approaches. Schema models stipulate that there are abstract representations, i.e., schemata, in which all the invariant information concerning any particular thing is represented. So we would have a person-schema for Bill Smith that would contain all the invariant information about him: His name, personality traits, attitudes, where he lived, whether he had a family, and so on. It is not clear how one would deal with our example, since someone’s name is a quintessentially invariant property and so, given that it is known, it would have to be represented in the schema for that person. And, from our example, we know that other invariant information, as well as variant, non-schematic information, e.g., the last talk he had given, was available for recall. This must be taken as evidence that the schema for Bill Smith was accessed. Why, then, were we unable to recall one particular piece of information that would have to be represented in a schema we clearly had access to? We would have to assume that within the person-schema for Bill Smith are sub-schemata, one of which contained Bill Smith’s name, another the name of his wife, and so forth. We would further have to assume that access to the sub-schemata was independent and that, at the time in question, the one containing Bill Smith’s name was temporarily inaccessible. Unfortunately, the concept of temporary inaccessibility is without precedent in schema theory and does not seem to be independently motivated.
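The same puzzle can be posed for the schema view. The following sketch, again purely illustrative, represents a person-schema as a collection of sub-schemata, each of which can independently be accessible or temporarily inaccessible; the field names and the accessible flag are assumptions made for the example, and the sketch simply makes visible how unmotivated a temporarily inaccessible sub-schema is.

```python
# Toy schema model: a person-schema holds information in sub-schemata, each of
# which may independently be accessible or (temporarily) inaccessible.
# Field names and the 'accessible' flag are illustrative assumptions only.

class SubSchema:
    def __init__(self, content, accessible=True):
        self.content = content
        self.accessible = accessible

    def recall(self):
        # A blocked sub-schema yields nothing, even though the schema is accessed.
        return self.content if self.accessible else None

person_schema = {
    "name": SubSchema("Bill Smith", accessible=False),   # temporarily blocked
    "wife": SubSchema("Mary Smith"),
    "workplace": SubSchema("the bank"),
    "last_talk": SubSchema("a talk on memory models"),    # variant, non-schematic info
}

recalled = {field: sub.recall() for field, sub in person_schema.items()}
print(recalled)   # everything is recalled except 'name', which comes back as None
```

The design choice the sketch exposes is exactly the one criticized above: nothing in schema theory itself motivates the accessible flag; it is bolted on solely to reproduce the observed failure.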

Nonetheless, there are two other classes of memory problem that do not fit comfortably into the conventional frameworks. One is that of not being able to recall an event in spite of the most detailed cues; this is commonly found when one partner is attempting to remind the other of a shared experience. The other is the experience, which we all have, of a memory being triggered spontaneously by something that was just an irrelevant part of the background of an event. Common triggers of such experiences are specific locales in town or country, scents, and certain pieces of music.

What we learn from these kinds of events is that we need a model which readily allows for the following properties:

(1) Not all knowledge is directly retrievable;
(2) The central parts of an episode do not necessarily cue recall of that episode;
(3) Peripheral cues, which are non-essential parts of the context, can cue recall.

In response to these requirements, the framework within which the model is couched is that of information processing. In trying to solve the problem, we first suppose that memory consists of discrete units, or ‘records’, each containing information relevant to an ‘event’, an event being, for example, a person or a personal experience. Information contained in a record could take any number of forms, with no restrictions being placed on the way information is represented, on the amount represented, or on the number of records that could contain the same nominal information. Attached to each of these records would be some kind of access key. The function of this access key is singular: It enables the retrieval of the record and nothing more. Only when the particular access key is used can the record, and the information contained therein, be retrieved. As with the record, we suppose that any type of information could be contained in the access key. However, two features would distinguish it from the record. First, the contents of the access key would be in a different form from that of the record, e.g., represented in a phonological or other central code. Second, the contents of the access key would not themselves be retrievable.

The nature of the match required between the ‘description’ and a heading will be a function of the type of information in the description. If the task is to find the definition of a word, or information on a named individual, then a precise match may be required, at least for the verbal part of the description. We assume that the headings are searched in parallel. On many occasions there will be more than one heading that matches the description; however, we require that only one record be retrieved at a time. Evidence in support of this assumption is summarized in Morton, Hammersley and Bekerian (1985): The data indicate that the more recent of two candidate records is retrieved. We conclude, first, that once a match is made the search process terminates and, secondly, that the matching process is biased in favour of the more recent headings. There is, of course, no guarantee that the retrieved record will contain the information that is sought; the record may be incomplete or wrong. In such cases, or in the case that no record has been retrieved, there are two options: Either the search is continued or it is abandoned. If the search is to be continued then a new description will have to be formed, since searching again with the same description would result in the same outcome as before. Thus, there has to be a set of criteria upon which a new description can be based.

Retrieval thus depends upon a match between the description and a heading. The relationship between the given cue and the description is open; it is clear that there needs to be a process of description formation which will pick out the most likely descriptors from the given cue. Clearly, for the search process to be rational, the set of possible descriptions and the set of headings should overlap. The only reasonable state of affairs would be that the creation of headings and the creation of descriptions are the responsibility of the same mechanism.
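A minimal sketch of this retrieval cycle, assuming a toy representation of records, headings and descriptions, might look as follows; the particular matching rule (the description must be a subset of the heading), the recency bias implemented with a timestamp, and the example cues are all illustrative assumptions rather than the actual formulation of Morton, Hammersley and Bekerian (1985).

```python
# Sketch of a headed-records style retrieval cycle: records are retrieved only
# via a match between a 'description' and a record's heading; when several
# headings match, the most recent record wins; on failure, a new description
# must be formed. Matching rule, contents and cues are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    heading: frozenset    # access key: the only route to the record
    content: dict         # the record itself; not directly searchable
    timestamp: int        # used to bias retrieval toward more recent records

memory = [
    Record(frozenset({"bill", "face", "party"}), {"event": "met Bill at a party"}, 1),
    Record(frozenset({"bill", "face", "office"}), {"event": "Bill's talk at the office"}, 2),
]

def retrieve(description: set):
    """Match the description against all headings 'in parallel';
    return the most recent matching record, or None."""
    matches = [r for r in memory if description <= r.heading]
    return max(matches, key=lambda r: r.timestamp) if matches else None

description = {"bill", "name"}      # description formed from what we want: his name
record = retrieve(description)      # fails: no heading contains 'name'
if record is None:
    # Searching again with the same description would give the same outcome,
    # so a new description must be formed from other available cues.
    record = retrieve({"bill", "face"})
print(record.content if record else "search abandoned")
# The more recent matching record (the office episode) is returned;
# note that it may still lack the sought name, in which case the search
# is either continued with yet another description or abandoned.
```

The point of the sketch is only to show the division of labour the model requires: headings are the sole route to records, several headings can match a description but only one record is returned (biased toward the more recent), and a failed or unsatisfactory retrieval can be remedied only by forming a new description.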

There are various ways of classifying mental activities and states. One useful distinction is that between the propositional attitudes and everything else. A propositional attitude is one whose description takes a sentence as the complement of the verb. Belief is a propositional attitude: One believes (truly or falsely, as the case may be) that there are cookies in the jar. That there are cookies in the jar is the proposition expressed by the sentence following the verb. Knowing, judging, inferring, concluding and doubting are also propositional attitudes: One knows, judges, infers, concludes, or doubts that a certain proposition (the one expressed by the sentential complement) is true.

Though the propositions are not always explicit, hope, fear, expectation, intention, and a great many other terms are also (usually) taken to describe propositional attitudes: One hopes that (is afraid that, etc.) there are cookies in the jar. Wanting a cookie is, or can be construed as, a propositional attitude: Wanting that one have (or eat, or whatever) a cookie; intending to eat a cookie is intending that one will eat a cookie.

Propositional attitudes involve the possession and use of concepts and are, in this sense, representational. One must have some knowledge or understanding of what χ’s are in order to think, believe or hope that something is ‘χ’. In order to want a cookie, or intend to eat one, one must, in some way, know or understand what a cookie is. One must have this concept. There is a sense in which one can want to eat a cookie without knowing what a cookie is ~ if, for example, one mistakenly thinks there are muffins in the jar and, as a result, wants to eat what is in the jar (= cookies). But this sense is hardly relevant, for in this sense one can want to eat the cookies in the jar without wanting to eat any cookies. For this reason (and in this sense) the propositional attitudes are cognitive: They require or presuppose a level of understanding and knowledge, the kind of understanding and knowledge required to possess the concepts involved in occupying the propositional state.

Though there is sometimes disagreement about their proper analysis, non-propositional mental states do not, at least on the surface, take propositions as their object. Being in pain, being thirsty, smelling the flowers and feeling sad are introspectively prominent mental states that do not, like the propositional attitudes, require the application or use of concepts. One does not have to understand what pain or thirst is to experience pain or thirst. Assuming that pain and thirst are conscious phenomena, one must, of course, be conscious or aware of the pain or thirst to experience them, but awareness of must be carefully distinguished from awareness that. One can be aware of ‘χ’ ~ a thirst or a toothache ~ without being aware that it is, e.g., a thirst or a toothache. Unlike belief that and knowledge that, awareness of is not a propositional attitude.

As the examples ~ pain, thirst, tickles, itches, hunger ~ are meant to suggest, the non-propositional states have a felt or experienced (‘phenomenal’) quality to them that is absent in the case of the propositional attitudes. Aside from whom it is we believe to be playing the tuba, believing that John is playing the tuba is much the same as believing that Joan is playing the tuba. These are different propositional states, different beliefs, yet they are distinguished entirely in terms of their propositional content ~ in terms of what they are beliefs about. Contrast this with the difference between hearing John play the tuba and seeing him play the tuba. Hearing John play the tuba and seeing John play the tuba differ, not just (as do beliefs) in what they are of or about (for these experiences are, in fact, of the same thing: John playing the tuba), but in their qualitative character: The one involves an auditory, the other a visual, experience. The difference between seeing John play the tuba and hearing John play the tuba is, then, a sensory, not a cognitive, difference.

Some mental states are a combination of sensory and cognitive elements; fear and terror, sadness and anger, joy and depression are ordinarily thought of in this way. Sensations are identified not in terms of what propositions (if any) they represent, but (like visual and auditory experiences) in terms of their intrinsic character, as they are felt by the one experiencing them. But when we describe a person as being afraid that, sad that, or upset that (as opposed to merely thinking or knowing that) so-and-so happened, we typically mean to be describing the kind of sensory (feeling or emotional) quality accompanying the cognitive state. Being afraid that the dog is going to bite me is both to think that he might bite me ~ a cognitive state ~ and to feel fear or apprehension (sensory) at the prospect.

The perceptual verbs exhibit this kind of mixture, this duality between the sensory and the cognitive. Verbs like ‘to see’, ‘to hear’, and ‘to feel’ are often used to describe propositional (cognitive) states, but they describe these states in terms of the (sensory) way one comes to be in them. Seeing that there are two cookies left is coming to know this by seeing; feeling that there are two cookies left is coming to know this in a different way, by having tactile experiences (sensations).

On this model of the sensory-cognitive distinction (at least as it is realized in perceptual phenomena), sensations are a pre-conceptual, pre-cognitive vehicle of sensory information. The terms ‘sensation’ and ‘sense-data’ (or simply ‘experience’) were (and, in some circles, still are) used to describe this early phase of perceptual processing. It is currently more fashionable to speak of this sensory component in perception as the percept or the sensory information store, but the idea is generally the same: An acknowledgement of a stage in perceptual processing in which the incoming information is embodied in ‘raw’ sensory (pre-categorical, pre-recognitional) form. This early phase of the process is comparatively modular ~ relatively immune to, and insulated from, cognitive influence. The emergence of a propositional (cognitive) state ~ seeing that an object is red ~ depends, then, on the earlier occurrence of a conscious, but nonetheless non-propositional, condition: Seeing (under the right conditions, of course) the red object. The sensory phase of this process constitutes the delivery of information (about the red object) in a particular form (visual); cognitive mechanisms are then responsible for extracting and using this information ~ for generating the belief (knowledge) that the object is red. (The phenomenon of blindsight suggests that this information can be delivered, perhaps in degraded form, at a non-conscious level.)

To speak of sensations of red objects, tubas and so forth is to say that these sensations carry information about an object’s colour, its shape, orientation, and position, and (in the case of audition) information about acoustic qualities such as pitch, timbre and volume. It is not to say that the sensations share the properties of the objects they are sensations of, or that they have the properties they carry information about. Auditory sensations are not loud and visual sensations are not coloured. Sensations are bearers of nonconceptualized information, and the bearer of the information that something is red need not itself be red. It need not even be the sort of thing that could be red: It might be a certain pattern of neuronal events in the brain. Nonetheless, the sensation, though not itself red, will (being the normal bearer of the information) typically produce in the subject who undergoes the experience a belief, or tendency to believe, that something red is being experienced. Hence the possibility of hallucination.

Just as there are theories of the mind that would deny the existence of any state of mind whose essence was purely qualitative (i.e., did not consist of the state’s extrinsic, causal properties), there are theories of perception and knowledge ~ cognitive theories ~ that deny a sensory component to ordinary sense perception. The sensory dimension (the look, feel, smell, taste of things) is (if it is not altogether denied) identified with some cognitive condition (knowledge or belief) of the experiencer. All seeing (not to mention hearing, smelling and feeling) becomes a form of believing or knowing. As a result, organisms that cannot know cannot have experiences. Often, to avoid these strikingly counterintuitive results, such theories appeal to implicit or otherwise unobtrusive (and, typically, undetectable) forms of believing or knowing.

Aside, though, from introspective evidence (closing and opening one’s eyes, if it changes beliefs at all, doesn’t just change beliefs; it eliminates and restores a distinctive kind of conscious experience), there is a variety of empirical evidence for the existence of a stage in perceptual processing that is conscious without being cognitive (in any recognizable sense). For example, experiments with brief visual displays reveal that when subjects are exposed for very brief (50 msec.) intervals to information-rich stimuli, there is persistence (at the conscious level) of what is called an image or visual icon that embodies more information about the stimulus than the subject can cognitively process or report on. Subjects can exploit the information in this persisting icon by reporting on any part of the now-absent array of numbers (they can, for instance, report the top three numbers, the middle three or the bottom three). They cannot, however, identify all nine numbers: They report seeing all nine, and they can identify any one of the nine, but they cannot identify all nine. Knowledge and belief, recognition and identification ~ these cognitive states, though present for any two or three numbers in the array, are absent for all nine numbers in the array. Yet the image carries information about all nine numbers (how else account for subjects’ ability to identify any number in the absent array?). Obviously, then, the information is there, in the experience itself, whether or not it is, or even can be, cognitively processed. As psychologists conclude, there is a limit on the information-processing capacities of the later (cognitive) mechanisms that is not shared by the sensory stages themselves.

Perceptual knowledge is knowledge acquired by or through the senses. This includes most of what we know; some would say it includes everything we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something ~ that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up. Seeing that the light has turned green is coming to know something by use of the eyes; feeling that the melon is overripe is coming to know a fact ~ that the melon is overripe ~ by one’s sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Seeing a rotten kumquat is not at all like the experience of smelling, tasting or feeling a rotten kumquat. Yet all these experiences can result in the same knowledge ~ knowledge that the kumquat is rotten. Although the experiences are much different, they must, if they are to yield knowledge, embody information about the kumquat: The information that it is rotten. Seeing that the fruit is rotten differs from smelling that it is rotten, not in what is known, but in how it is known. In each case, the information has the same source ~ the rotten kumquat ~ but it is, so to speak, delivered via different channels and coded in different experiential forms.

It is important to avoid confusing perceptual knowledge of facts, e.g., that the kumquat is rotten, with the perception of objects, e.g., rotten kumquats. It is one thing to see (taste, smell, feel) a rotten kumquat, and quite another to know (by seeing or tasting) that it is a rotten kumquat. Some people, after all, do not know what kumquats look like. They see a kumquat but do not realize (do not see that) it is a kumquat. Again, some people do not know what kumquats smell like. They smell a rotten kumquat and ~ thinking, perhaps, that this is the way this strange fruit is supposed to smell ~ do not realize from the smell, i.e., do not smell that, it is a rotten kumquat. In such cases people see and smell rotten kumquats ~ and in this sense perceive rotten kumquats ~ without ever knowing that they are kumquats, let alone rotten kumquats. They cannot know, at least not by seeing and smelling, and not until they have learned something about (rotten) kumquats. Since our topic is perceptual knowledge ~ knowing, by sensory means, that something is ‘F’ ~ we will be primarily concerned with the question of what more, beyond the perception of F’s, is needed to see that (and thereby know that) they are ‘F’. The question is not how we see kumquats (for even the ignorant can do this) but how we know (if indeed we do) that that is what we see.

Much of our perceptual knowledge is indirect, dependent or derived. By this is meant that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas, see, by the newspaper, that our team has lost again, or see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees ~ hence, comes to know ~ something about the gauge (that it reads ‘empty’), the newspaper (what it says) or the person’s expression, one would not see (hence, know) what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot ~ not, at least, in this way ~ hear that one’s visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a’ is ‘F’, coming to know thereby that ‘a’ is ‘F’, by seeing (hearing, etc.) that some other condition, b’s being ‘G’, obtains. When this occurs, the knowledge (that ‘a’ is ‘F’) is derived from, or dependent on, the more basic perceptual knowledge that ‘b’ is ‘G’.

Though perceptual knowledge about objects is often, in this way, dependent on knowledge of facts about different objects, the derived knowledge is sometimes about the same object. That is, we see that ‘a’ is ‘F’ by seeing, not that some other object is ‘G’, but that ‘a’ itself is ‘G’. We see, by her expression, that she is nervous. She tells that the fabric is silk (not polyester) by the characteristic ‘greasy’ feel of the fabric itself (not, as I do, by what is printed on the label). We tell whether it is an oak tree, a Porsche, a geranium, an igneous rock or a misprint by its shape, colour, texture, size, behaviour and distinctive markings. Perceptual knowledge of this sort is also derived ~ derived from the more basic facts (about ‘a’) we use to make the identification. In this case the perceptual knowledge is still indirect because, although the same object is involved, the facts we come to know about it are different from the facts that enable us to know it.

Derived knowledge is sometimes described as inferential, but this is misleading: At the conscious level there is no passage of the mind from premise to conclusion, no reasoning, no problem-solving. The observer, the one who sees that ‘a’ is ‘F’ by seeing that ‘b’ (or ‘a’ itself) is ‘G’, need not be (and typically is not) aware of any process of inference, any passage of the mind from one belief to another. The resulting knowledge, though logically derivative, is psychologically immediate. I could see that she was getting angry, so I moved my hand. I did not ~ at least not at any conscious level ~ infer (from her expression and behaviour) that she was getting angry. I could (or so it seemed to me) simply see that she was getting angry. It is this psychological immediacy that makes indirect perceptual knowledge a species of perceptual knowledge.

The psychological immediacy that characterizes so much of our perceptual knowledge ~ even (sometimes) the most indirect and derived forms of it ~ does not mean that learning is not required to know in this way. One is not born with (and may, in fact, never develop) the ability to recognize daffodils, muskrats and angry companions. It is only after long experience that one is able visually to identify such things. Beginners may do something corresponding to inference: They recognize relevant features of trees, birds, and flowers, features they already know how to perceptually identify, and then infer (conclude), on the basis of what they see, and under the guidance of more expert observers, that it is an oak, a finch or a geranium. But experts (and we are all experts on many aspects of our familiar surroundings) do not typically go through such a process. The expert just sees that it is an oak, a finch or a geranium. The perceptual knowledge of the expert is still dependent, of course, since even an expert cannot see what kind of flower it is if she cannot first see its colour and shape; but the expert has developed identificatory skills that no longer require the sort of conscious inferential process that characterizes a beginner’s efforts.

Coming to know that ‘a’ is ‘F’ by seeing that ‘b’ is ‘G’ obviously requires some background assumption on the part of the observer, an assumption to the effect that ‘a’ is ‘F’ (or is probably ‘F’) when ‘b’ is ‘G’. If one does not assume (take for granted) that the gauge is properly connected, and thereby that it would not register ‘empty’ unless the tank was nearly empty, then even if one could see that it registered ‘empty’, one would not learn (hence, would not see) that one needed gas. At least, one would not see it by consulting the gauge. Likewise, in trying to identify birds, it is no use being able to see their markings if one does not know something about which birds have which marks ~ something of the form: A bird with these markings is (probably) a finch.

It would seem, moreover, that these background assumptions, if they are to yield knowledge that ‘a’ is ‘F’, as they must if the observer is to see (by b’s being ‘G’) that ‘a’ is ‘F’, must themselves qualify as knowledge. For if this background fact is not known, if it is not known whether ‘a’ is ‘F’ when ‘b’ is ‘G’, then the knowledge of b’s being ‘G’ is, taken by itself, powerless to generate the knowledge that ‘a’ is ‘F’. If the conclusion is to be known to be true, both of the premises used to reach that conclusion must be known to be true. Or so it would seem.

The most generally accepted account of the externalism/internalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least part of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. Epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication of it.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification. It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought content.

The internalist requirement of cognitive accessibility can be interpreted in at least two ways: A strong version of internalism requires that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focussing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible’ suggests the weak version, the idea underlying internalism is that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true.

It should be carefully noticed that, however internalism is construed ~ whether as requiring actual awareness of the justifying factors or only the capacity to become aware of them ~ it is neither necessary nor sufficient that the justifying factors literally be internal mental states of the person. A coherentist view, for example, could also be internalist, if both the beliefs and other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible. The requirement of internal mental states is not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; and it is not sufficient, because there are views according to which at least some mental states need not be actual (on strong versions) or even possible (on weak versions) objects of cognitive awareness.

Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of 'reliabilism', whose main requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process, and perhaps further conditions as well. This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of justification is thereby seriously diminished. Such an externalist account of knowledge can accommodate the common-sense conviction that animals, young children and unsophisticated adults possess knowledge, though not the weaker conviction that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples, since the intuitions involved there pertain more clearly to justification than to knowledge. As with justification and knowledge, the traditional view of content has been strongly internalist in character. An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. Moreover, the adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification.

To understand the way this is supposed to work, consider an ordinary example. 'S' identifies a banana (learns that it is a banana) by noting its shape and colour ~ perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S's perceptual knowledge of its shape, colour, smell, and taste. 'S' learns that it is a banana by seeing that it is yellow, banana-shaped, etc. Nonetheless, S's perception of the banana's colour and shape is direct. 'S' does not see that the object is yellow, for example, by seeing, knowing or believing anything more basic ~ either about the banana or about anything else, e.g., his own sensations of the banana. 'S' has learned to identify such features, of course, but what 'S' learned to do is not an inference, not even an unconscious inference, from other things he believes. What 'S' acquired was a cognitive skill, a disposition to believe of the yellow objects he sees that they are yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S's identificatory successes will depend on his operating in certain special conditions, of course. 'S' will not, perhaps, be able to identify yellow objects visually in drastically reduced lighting, at odd viewing angles, or when afflicted with certain nervous disorders. But the fact that 'S' can see that something is yellow only in such conditions does not show that his perceptual knowledge (that 'a' is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. It merely shows that direct perceptual knowledge is the result of exercising a skill, an identificatory skill, that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do. So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do. They need normal conditions to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions to do what being in these conditions enables them to do.

This means, of course, that for a direct realist direct perceptual knowledge is fallible and corrigible. Whether 'S' sees that 'a' is 'F' depends on his being caused to believe that 'a' is 'F' in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then 'S' sees (hence, knows) that 'a' is 'F'. If they are not, he does not. Whether or not 'S' knows depends, then, not on what else, if anything, 'S' believes, but on the circumstances in which 'S' comes to believe. This being so, this type of direct realism is a form of externalism: direct perception of objective facts, perceptual knowledge of external events, is made possible because what is needed by way of justification for such knowledge has been reduced. Background knowledge ~ and, in particular, the knowledge that the experience does suffice for knowing ~ is not needed.

This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived. That is what makes them foundations. Even if they are brittle, as foundations sometimes are, everything else rests upon them.

The theory of representative realism holds that (1) there is a world whose existence and nature are independent of us and of our perceptual experience of it; (2) perceiving an object located in that external world necessarily involves causally interacting with that object; and (3) the information acquired in perceiving an object is indirect: it is information most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself.

Clause (1) makes representative realism a species of realism; clause (2) makes it a species of causal theory of perception; and clause (3) makes it a species of representative, as opposed to direct, realism.

Traditionally, representative realism has been allied with an act/object analysis of sensory experience, and that analysis is traditionally a major plank in arguments for representative realism. According to the act/object analysis, an experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, Meinongian objects (which may not exist or have any form of being), and, more commonly, private mental entities with sensory qualities. (The term 'sense-data' is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For representative realism, objects of perception (of which we are only 'indirectly aware') are distinct from objects of experience (with which we are directly acquainted); Meinongians, however, may simply treat objects of perception as existing objects of experience.

Realism in any area of thought is the doctrine that certain entities allegedly associated with that area are indeed real. Common sense realism ~ sometimes called ‘realism’, without qualification ~ says that ordinary things like chairs and trees and people are real. Scientific realism says that theoretical posits like electrons and fields of force and quarks are equally real. And psychological realism says mental states like pain and beliefs are real. Realism can be upheld ~ and opposed ~ in all such areas, as it can with differently or more finely drawn provinces of discourse: For example, with discourse about colours, about the past, about possibility and necessity, or about matters of moral right and wrong. The realist in any such area insists on the reality of the entities in question in the discourse.

If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition, for they abound. Some opponents deny that there are any distinctive posits associated with the area of discourse under dispute: a good example is the emotivist doctrine that moral discourse does not posit values but serves only, like applause and exclamation, to express feelings. Other opponents deny that the entities posited by the relevant discourse exist, or at least that they exist independently of our thinking about them: here the standard example is 'idealism'. And others again insist that the entities associated with the discourse in question are tailored to our human capacities and interests and, to that extent, are as much a product of invention as a matter of discovery.

Nevertheless, one use of terms such as 'looks', 'seems', and 'feels' is to express opinion. 'It looks as if the Conservative Party will win the next election' expresses an opinion about the party's chances and does not describe a particular kind of perceptual experience. We can, however, use such terms to describe perceptual experience divorced from any opinion to which the experience may incline us. A straight stick half-immersed in water looks bent, and does so even to people completely familiar with this illusion, who therefore have no inclination to hold that the stick is in fact bent. Such uses of 'looks', 'seems', 'tastes', etc. are commonly called 'phenomenological'.

The act/object theory holds that the sensory experience recorded by sentences employing these phenomenological senses is a matter of being directly acquainted with something which actually bears the relevant property: when something looks red to me, I am acquainted with a red expanse (in my visual field); when something tastes bitter to me, I am directly acquainted with a sensation with the property of being bitter, and so on. (If you do not understand the term 'directly acquainted', stick a pin into your finger. The relation you will then bear to your pain, as opposed to the relation of concern you might bear to another's pain when told about it, is an instance of direct acquaintance in the intended sense.)

The act/object account of sensory experience combines with various considerations traditionally grouped under the heading of the argument from illusion to provide arguments for representative realism, or more precisely for the clause in it that contends that our sensorily derived information about the world comes indirectly: that what we are most directly acquainted with is not an aspect of the world but an aspect of our mental sensory response to it. Consider, for instance, the aforementioned refractive illusion, that of a straight stick in water looking bent. The act/object account holds that in this case we are directly acquainted with a bent shape. This shape, so the argument runs, cannot be the stick, as the stick is straight, and thus must be a mental item, commonly called a sense-datum. And, in general, sense-data ~ visual, tactual, etc. ~ are held to be the objects of direct acquaintance. Perhaps the most striking use of the act/object analysis to bolster representative realism turns on what modern science tells us about the fundamental nature of the physical world. Modern science tells us that the objects of the physical world around us are literally made up of enormously many, widely separated, tiny particles whose nature can be given in terms of a small number of properties like mass, charge, spin and so on. (These properties are commonly called the primary qualities. The distinction between primary and secondary qualities is a metaphysical distinction between qualities which really belong to objects in the world and qualities which only appear to belong to them, or which human beings only believe to belong to them because of the effects those objects produce in human beings, typically through the sense organs ~ that is to say, qualities that do not belong to things by nature, but are produced in, or contributed by, human beings in their interaction with a world which really contains only atoms of certain kinds in a void.) To think that some objects in the world are coloured, or sweet, or bitter, is on this view to attribute to objects qualities which they do not actually possess; qualities such as colour, sweetness and bitterness are merely imputed to objects which do not possess them. But, of course, that is not how the objects look to us, not how they present themselves to our senses. They look continuous and coloured. What, then, can these coloured expanses with which we are directly acquainted be, other than mental sense-data?

Two objections dominate the literature on representative realism: one goes back to Berkeley (1685-1753) and is that representative realism leads straight to scepticism about the external world; the other is that the act/object account of sensory awareness should be rejected in favour of an adverbial account.

Traditional representative realism is a 'veil of perception' doctrine, in Bennett's (1971) phrase. Locke's (1632-1704) idea was that the physical world is revealed by science to be in essence colourless, odourless, tasteless and silent, and that we perceive it by, to put it metaphorically, throwing a veil over it by means of our senses. It is the veil we see, in the strictest sense of 'see'. This does not mean that we do not really see the objects around us. It means that we see an object in virtue of seeing the veil, the sense-data, causally related in the right way to that object. An obvious question to ask, therefore, is what justifies us in believing that there is anything behind the veil, and, if we are somehow justified in believing that there is something behind the veil, how we can be confident of what it is like.

One intuition that lies at the heart of the realist's account of objectivity is that, in the last analysis, the objectivity of a belief is to be explained by appeal to the independent existence of the entities it concerns: epistemological objectivity, that is, is to be analysed in terms of ontological objectivity. A judgement or belief is objective in the epistemological sense if and only if it stands in some specified relation to an independently existing, determinate reality. Frege (1848-1925), for example, believed that arithmetic could comprise objective knowledge only if the numbers it refers to, the propositions it consists of, the functions it employs, and the truth-values it aims at, are all mind-independent entities. And conversely, within a realist framework, to show that the members of a given class of judgements are merely subjective, it is sufficient to show that there exists no independent reality that those judgements characterize or refer to.

Thus, it is commonly argued that if values are not part of the fabric of the world, then moral subjectivism is inescapable. For the realist, then, the epistemological notion of objectivity is to be elucidated by appeal to the existence of determinate facts, objects, properties, events and the like, which exist or obtain independently of any cognitive access we may have to them. And one of the strongest impulses toward Platonic realism ~ the theoretical commitment to the existence of abstract objects like sets, numbers, and propositions ~ stems from the widespread belief that only if such things exist in their own right can we allow that logic, arithmetic and science are indeed objective. Though 'Platonist' realism in a sense accounts for mathematical knowledge, it postulates such a gulf between both the ontology and the epistemology of science and that of mathematics that realism is often said to make the applicability of mathematics in natural science into an inexplicable mystery.

This picture is rejected by anti-realists. The possibility that our beliefs and theories are objectively true is not, according to them, capable of being rendered intelligible by invoking the nature and existence of reality as it is in and of itself. If our conception of epistemological objectivity is minimal, requiring only 'presumptive universality', then alternative, non-realist analyses of it can seem possible ~ and even attractive. Such analyses have construed the objectivity of an arbitrary judgement as a function of its coherence with other judgements, of its possession of grounds that warrant it, of its conformity to the a priori rules that constitute understanding, of its verifiability (or falsifiability), or of its permanent presence in the mind of God. One intuition common to a variety of different anti-realist theories is that for our assertions to be objective, for our beliefs to comprise genuine knowledge, those assertions and beliefs must be, among other things, rational, justifiable, coherent, communicable and intelligible. But it is hard, the anti-realist claims, to see how such properties as these can be explained by appeal to entities as they are in and of themselves. On the contrary, according to most forms of anti-realism, it is only on the basis of subjective notions like 'the way reality seems to us', 'the evidence that is available to us', 'the criteria we apply', 'the experience we undergo' or 'the concepts we have acquired' that the epistemological objectivity of our beliefs can possibly be explained.

Internalists hold that the reasons by which a belief is justified must be accessible in principle to the subject holding that belief; externalists deny this requirement, arguing that it makes knowledge too difficult to achieve in most normal contexts. The internalist-externalist debate is sometimes also viewed as a debate between those who think that knowledge can be naturalized (externalists) and those who do not (internalists). Naturalists hold that the evaluative notions used in epistemology can be explained in terms of non-evaluative concepts ~ for example, that justification can be explained in terms of something like reliability. They deny a special normative realm of language that is theoretically different from the kinds of concepts used in factual scientific discourse. Non-naturalists deny this and hold to an essential difference between the normative and the factual: the former can never be derived from or constituted by the latter. So internalists tend to think of reason and rationality as non-explicable in natural, descriptive terms, whereas externalists think such an explanation is possible.

Although there may be alternatives to what we take to be the truth, the sceptic uses an argumentative strategy to show that we do not genuinely have knowledge and that we should therefore suspend judgement. But, unlike the sceptic, many other philosophers maintain that more than one of the alternatives is acceptable and can constitute genuine knowledge. Some philosophers have invoked hypothetical sceptics in their work to explore the nature of knowledge: they did not doubt that we have knowledge, but thought that by testing knowledge as severely as one can, one gets clearer about what counts as knowledge, and greater insight results. Hence there are underlying differences in what counts as knowledge for the sceptic and for other philosophers. Traditional epistemology has been occupied with this kind of debate, and it led to a dogmatism. Various types of beliefs were proposed as candidates for sceptic-proof knowledge, for example, those beliefs regarded by many as immune to doubt. What they all had in common was the view that empirical knowledge begins with the data of the senses, that this basis is safe from scepticism, and that a further superstructure of knowledge is to be built upon it.

It might well be observed that this reply to scepticism fares better as a justification for believing in the existence of external objects than as a justification of the views we have about their nature. It is incredible that nothing independent of us is responsible for the manifest patterns displayed by our sense-data, but granting this leaves open many possibilities about the nature of the hypothesized external reality. Direct realists often make much of the apparent advantage that their view has on the question of the nature of the external world. The fact of the matter is, though, that it is much harder to arrive at tenable views about the nature of external reality than it is to defend the view that there is an external reality of some kind or other. The history of human thought about the nature of the external world is littered with what are now seen (with the benefit of hindsight) to be egregious errors ~ the four-element theory, phlogiston, the crystal spheres, vitalism, and so on. It can hardly be an objection to a theory that it makes the question of the nature of external reality much harder than the question of its existence.

Contemporary philosophy of mind, following cognitive science, uses the term 'representation' to mean just about anything that can be semantically evaluated. Thus, representations may be said to be true, to refer, to be accurate, and so forth. Representation thus conceived comes in many varieties. The most familiar are pictures, three-dimensional models (e.g., statues, scale models), linguistic text (including mathematical formulas) and various hybrids of these such as diagrams, maps, graphs and tables. It is an open question in cognitive science whether mental representation, which is our real topic, falls within any of these or any other familiar varieties.

The representational theory of cognition and thought holds ~ and it is uncontroversial in contemporary cognitive science ~ that cognitive processes are processes that manipulate representations. This idea seems nearly inevitable. What makes the difference between processes that are cognitive ~ solving a problem, say ~ and those that are not ~ a patellar reflex, for example ~ is just that cognitive processes are epistemically assessable: a solution procedure can be justified or correct, as a reflex cannot. Since only things with content can be epistemically assessed, processes appear to count as cognitive only in so far as they implicate representations.

It is tempting to think that thoughts are the mind's representations: are not thoughts just those mental states that have semantic content? This is, no doubt, harmless enough provided we keep in mind that cognitive science may attribute to thoughts properties and contents that are foreign to common sense. First, most of the representations hypothesized by cognitive science do not correspond to anything common sense would recognize as thoughts. Standard psycholinguistic theory, for instance, hypothesizes the construction of representations of the syntactic structures of the utterances one hears and understands. Yet we are not aware of, and non-specialists do not even understand, the structures represented. Thus, cognitive science may attribute thoughts where common sense would not. Second, cognitive science may find it useful to individuate thoughts in ways foreign to common sense.

Concepts, however, are constituents of mental states having content: a belief may have the content that I will catch the train, or a hope may have the content that the prime minister will resign. A concept is something which is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something ~ a particular object, or property, or relation, or some other entity.

Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Julie Smith, or as the person located in a certain room now. More generally, a concept 'c' is distinct from a concept 'd' if a thinker can rationally believe that something is 'c' without believing that it is 'd'. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by 'that . . .' clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.

A fundamental question for philosophy is: what individuates a given concept ~ that is, what makes it the one it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question (Schiffer, 1987). An alternative approach, favoured by most, addresses the question by starting from the idea that a concept is individuated by the condition which must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other propositional attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept 'and' is individuated by this condition: it is the unique concept 'C' to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses 'A' and 'B', 'A C B' can be inferred, and from any premiss 'A C B', each of 'A' and 'B' can be inferred. Again, a relatively observational concept such as 'round' can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those which are. A statement which individuates a concept by saying what is required for a thinker to possess it can be described as giving the 'possession condition' for the concept.
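
As a minimal illustration (not part of the original text), the two inference patterns just described are the standard natural-deduction rules for conjunction, writing '∧' for the concept that plays the role of 'C':

\[
\frac{A \qquad B}{A \wedge B}\ (\wedge\text{-introduction})
\qquad\qquad
\frac{A \wedge B}{A} \quad \frac{A \wedge B}{B}\ (\wedge\text{-elimination})
\]

On the proposal sketched above, possessing the concept 'and' is a matter of finding exactly these transitions compelling without basing them on any further inference or information.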

A possession condition for a particular concept may actually make use of that concept; the possession condition for 'and', though, does not. We can also expect to use relatively observational concepts in specifying the kinds of experience which have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account which was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two families which plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers (there are 0 so-and-so's, there is 1 so-and-so, . . .); and the family consisting of the concepts 'belief' and 'desire'. Such families have come to be known as 'local holisms'. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
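
As an aside not found in the original text, the numerical quantifiers mentioned above can be written out in standard first-order notation, which makes plain how each is tied to the corresponding number concept:

\[
\text{There are 0 } F\text{s}:\quad \neg\exists x\, Fx
\]
\[
\text{There is exactly 1 } F:\quad \exists x\,\bigl(Fx \wedge \forall y\,(Fy \rightarrow y = x)\bigr)
\]

Mastery of the concepts 0 and 1 and mastery of these quantifiers plausibly come together, which is what the talk of a local holism is meant to capture.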

A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker's non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker's social environment is varied. A possession condition which properly individuates such a concept must take into account the thinker's linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a 'correctness condition' for that judgement, a condition which is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging 'That man is bald', even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object, or property, or function, . . . which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property which in fact makes the practices of judgement in the possession condition yield true judgements, or truth-preserving inferences.

Innate ideas have been variously defined by philosophers, either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).

Understood in either way, they were invoked to account for our recognition of certain truths ~ such as those of mathematics ~ without recourse to experiential verification, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include 'murder is wrong' or 'God exists'.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our idea of God, for example, is taken as a source for the meaning of the word God. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. However, this clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky's influential account of the mind's linguistic capabilities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize the truth of propositions which cannot be justified solely on the basis of an appeal to sense experience. Plato, for example, argued that recognition of mathematical truths could only be explained on the assumption of some form of recollection. Since there was no plausible post-natal source, the recollection must refer back to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas involved the thought that there were important truths innate in human beings and that the senses hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke's philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists such as Henry More and Ralph Cudworth added considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account was powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant's refinement of the classification of propositions with the fourfold distinction analytic/synthetic and a priori/a posteriori did nothing to encourage a return to the doctrine of innate ideas, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and justifying the claim that some propositions are necessarily true.

Nevertheless, according to Kant, our knowledge arises from two fundamentally different faculties of the mind, sensibility and understanding. He criticized his predecessors for running these faculties together: Leibniz for treating sensing as a confused mode of understanding, and Locke for treating understanding as an abstracted mode of sensing. Kant held that each of the faculties operates with its own distinctive type of mental representation. Concepts, the instruments of the understanding, are mental representations that apply potentially to many things in virtue of their possession of a common feature. Intuitions, the instruments of sensibility, are representations that refer to just one thing and to that thing directly (a similar role is played in Russell's philosophy by 'acquaintance'). Through intuitions objects are given to us, Kant said; through concepts they are thought.

Nonetheless, it is a famous Kantian thesis that knowledge is yielded neither by intuitions nor by concepts alone, but only by the two in conjunction. 'Thoughts without content are empty', he says in an often-quoted remark, and 'intuitions without concepts are blind'. Exactly what Kant means by the remark is a debated question, answered in different ways by scholars who bring different elements of Kant's text to bear on it. A minimal reading is that it is only propositionally structured knowledge that requires the collaboration of intuition and concept: this view allows that intuitions without concepts constitute some kind of non-judgemental awareness. A stronger reading is that it is reference or intentionality that depends on intuition and concept together, so that the blindness of intuition without concept is its failure to refer to an object. A more radical view yet is that intuitions without concepts are indeterminate, a mere blur, perhaps nothing at all. This last interpretation, though admittedly suggested by some things Kant says, is at odds with his official view about the separation of the faculties.

'Content' has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content, while a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. 'Content' is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation's content is just whatever it is that underwrites its semantic evaluation.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude; for instance, several philosophers would prefer to say that knowledge entails psychological certainty. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or a facsimile, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato in view of his claim that knowledge is infallible while belief or opinion is fallible (in the ‘Republic’). Nonetheless this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones cites linguistic evidence to back up the incompatibility thesis. He notes that people often say 'I don't just believe she is guilty, I know she is', where 'just' makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: 'You didn't hurt him, you killed him'.

H.A. Prichard (1966) offers a defence of the incompatibility thesis which hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we merely believe in the truth of a claim we are not certain about its truth. Given that knowledge never involves such uncertainty, believing something rules out the possibility of knowing it. Unfortunately, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that they can never involve complete confidence is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief, holds that knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is 'what I can do, where what I can do may include answering questions'. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, 'I am unsure whether my answer is true; still, I know it is correct'. This tension Woozley explains using a distinction between conditions under which we are justified in making a claim, such as a claim to know something, and conditions under which the claim we make is true. 'I know such and such' might be true even if I am unsure whether such and such holds; nonetheless, it would be inappropriate for me to claim to know it unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example: Walter has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as 'When did the Battle of Hastings occur?' Since he has forgotten that he studied history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066. A fortiori he would deny being sure, or having the right to be sure, that 1066 was the correct date. Radford would nonetheless insist that Walter knows when the Battle occurred, since clearly he remembered the correct date. Radford admits that it would be inappropriate for Walter to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is appropriate to claim knowledge: when we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is 'intentionally misleading'.

Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Walter lacks beliefs about English history is plausible on this Cartesian picture, since Walter does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting the Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Walter gives the correct response when queried ~ a form of verbal behaviour ~ a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford the point that Walter knows that the Battle of Hastings took place in 1066; however, he suggests that Walter believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. What is more, Armstrong insists, Walter also believes that the Battle did occur in 1066. After all, had Walter been mistaught that the Battle occurred in 1066, and had he forgotten being 'taught' this and subsequently 'guessed' that it took place in 1066, we would surely describe the situation as one in which Walter's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Walter's true belief became unconscious but persisted long enough to cause his guess. Thus, while Walter consciously believes that the Battle did not occur in 1066, he unconsciously believes that it did. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him. If Armstrong is correct in suggesting that Walter believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Walter knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to liken the examinee case to examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is little different. Even if Walter has the belief which Radford denies him, Radford does not have an example of knowledge attended by justified belief. Suppose that Walter's memory had been sufficiently powerful to produce the relevant belief. As Radford says, Walter has every reason to suppose that his response is merely guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and therefore one about whose truth Walter would be ignorant.

The externalism/internalism distinction has been applied mainly to theories of epistemic justification: a theory is internalist if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any explicit explication. The distinction has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought content.

Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.

Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not be and in general will not be, would count as an externalist view. Noticeably, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of reliabilism, whose main requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, and so forth, that motivate the views which have come to be known as 'direct reference' theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment ~ e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, and so forth ~ not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts 'from the inside', simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors ~ which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else: but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

If the world is radically different from the way it appears, to the point that apparent epistemic vices are actually truth-conducive, presumably this should not make us retrospectively term such vices 'virtues', even if they are and have always been truth-conducive. It seems better simply to make the epistemic virtues those qualities which a truth-desiring person would want to have. For even if, unbeknown to us, some wild sceptical possibility is realized, this would not affect our desires (it being, again, unknown). Such a characterization, moreover, would seem to fit the virtues in our catalogue. Almost by definition, the truth-desiring person would want to be epistemically conscientious. And, given what seem to be the conditions pertaining to human life and knowledge, the truth-desiring person will also want to have the previously cited virtues of impartiality and intellectual courage.

Are, though, truth and the avoidance of error rich enough desires for the epistemically virtuous? Arguably not. For one thing, the virtuous inquirer aims not so much at having true beliefs as at discovering truths ~ a very different notion. Perpetual reading of a good encyclopaedia will expand my bank of true beliefs without markedly increasing humankind's basic stock of truths. For Aristotle, too, one notes, true belief is not, as such, even a concern: the concern is the discovery of scientific or philosophical truth. But, of course, the mere expansion of our bank of truths ~ even of scientific and philosophical truths ~ is not itself the whole goal either. Rather, one looks for new truths of an appropriate kind ~ rich, deep, explanatorily fertile, say. By this reckoning, then, the epistemically virtuous person seeks at least three related but separate ends: to discover new truths, to increase one's explanatory understanding, and to have true rather than false beliefs.

Another important area of concern for epistemologists is the relation between epistemic virtue and epistemic justification. Obviously, a belief formed by an epistemically virtuous person must itself, I take it, be virtuously formed. But is a virtuously formed belief automatically a justified one? I would hold that if a belief is virtuously formed, this fully justifies that person in having it: however, the belief itself may lack adequate justification, as the evidence for it may, through no fault of the person, still be inadequate. Different philosophers, however, appear to have different intuitions on this point.

Hegel’s theory of justification contains both ‘externalist’ and ‘coherentist’ elements. He recognizes that some justification is provided by percepts and beliefs being generated reliably by our interaction with the environment. Hegel contends that full justification additionally requires a self-conscious, reflective comprehension of one’s beliefs and experiences which integrates them into a systematic conceptual scheme which provides an account of them that is both coherent and reflexively self-consistent.

Hegel contends that the corrigibility of conceptual categories is a social phenomenon. Our partial ignorance about the world can be revealed and corrected because one and the same claim or principle can be applied, asserted and assessed by different people in the same context or by the same person in different contexts. Hegel’s theory of justification requires that an account be shown to be adequate to its domain and to be superior to its alternatives. In this regard, Hegel is a fallibilist, according to whom justification is provisional and ineluctably historical, since it occurs against the background of less adequate alternative views.

Meanwhile, one important difference between the naturalistic approach and more traditional ones becomes plain when the two are applied to sceptical questions. On the classical view, if we are to explain how knowledge is possible, it is illegitimate to make use of the resources of science; this would simply beg the question against the sceptic by making use of the very knowledge which he calls into question. Thus, Descartes’ attempt to answer the sceptic begins by rejecting all those beliefs about which any doubt is possible. Descartes must respond to the sceptic from a starting place which includes no beliefs at all. Naturalistic epistemologists, however, understand the demand to explain the possibility of knowledge differently. As Quine argues, sceptical questions arise from within science. It is precisely our success in understanding the world, and thus in seeing that appearance and reality may differ, that raises the sceptical question in the first place. We may thus legitimately use the resources of science to answer the question which science itself has raised. The question about how knowledge is possible should thus be construed as an empirical question: it is a question about how creatures such as we are (given what our best current scientific theories tell us we are like) may come to have knowledge of the world as our best current scientific theories tell us it is. Quine suggests that the Darwinian account of the origin of species gives a very general explanation of why it is that we should be well adapted to getting true beliefs about our environment, while an examination of human psychology will fill in the details of such an account. Although Quine himself does not suggest it, investigations in the sociology of knowledge are obviously relevant as well.

This approach to sceptical questions clearly makes them quite tractable, and its proponents see this, understandably, as an important advantage of the naturalistic approach. It is in part for this reason that current work in psychology and sociology is under such close scrutiny by many epistemologists. By the same token, the detractors of the naturalistic approach argue that this way of dealing with sceptical questions simply bypasses the very questions which philosophers have long dealt with. Far from answering the traditional sceptical question, it is argued, the naturalistic approach merely changes the topic. Debates between naturalistic epistemologists and their critics thus frequently focus on whether this new way of doing epistemology adequately answers, transforms or simply ignores the questions which others see as central to epistemological inquiry. Some see the naturalistic approach as an attempt to abandon the philosophical study of knowledge entirely.

In thinking about the mind, we should bear in mind that our conscious states, according to Franz Brentano (1838-1917), are all objects of ‘inner perception’. Every such state is such that, for the person who is in that state, it is evident to that person that he or she is in that state. Yet each of our conscious states is not thereby the object of a separate act of perception, wherefore the doctrine does not lead to an infinite regress.

Brentano holds that there are two types of conscious state: those that are ‘physical’ and those that are ‘intentional’. A ‘physical’, or sensory, state is a sensation or sense-impression, a qualitative individual composed of parts that are spatially related to each other. ‘Intentional’ states, e.g., believing, considering, hoping, desiring, are characterized by the facts that (1) they are ‘directed upon objects’, (2) the objects they are ‘directed upon’ need not exist, e.g., we may fear things that do not exist, and (3) such states are not sensory. There is no sensation, no sensory individual, that can be identified with any particular intentional attitude.

Following Leibniz, Brentano distinguishes two types of certainty: the certainty we can have with respect to the existence of our conscious states, and the a priori certainty that may be directed upon necessary truths. These two types of certainty may be combined in a significant way. At a given moment, I may be certain, on the basis of inner perception, that there is believing, desiring, hoping and fearing, and I may also be certain a priori that there cannot be believing, desiring, hoping, and fearing unless there is a ‘substance’ that believes, desires, hopes and fears. In such a case, it will be certain for me [as I will perceive] that there is a substance that believes, desires, hopes and fears. It is also axiomatic, Brentano says, that if one is certain that a substance of a certain sort exists, then one is identical with that substance.

Brentano makes use of only two purely epistemic concepts: that of being certain, or ‘evident’, and that of being ‘probable’. If a given hypothesis is probable, in the epistemic sense, for a particular person, then that person can be certain that the hypothesis is probable for him. Making use of the principles of probability, one may calculate the probability that a given hypothesis has on one’s evidence base.
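
Brentano himself supplies no formal machinery for this, but the kind of calculation gestured at here can be illustrated, as a minimal sketch in modern probabilistic terms rather than in Brentano’s own, by Bayes’ theorem, which expresses the probability that a hypothesis h has on an evidence base e:

P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)}

On Brentano’s account, once such a value is fixed, the person for whom the hypothesis is probable to that degree can, in turn, be certain that it is so probable for him.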

Nonetheless, if our evidence-base is composed only of necessary truths and the facts of inner perception, then it is difficult to see how it could provide justification for any contingent truths other than those that pertain to states of consciousness. How could such an evidence-base even lend ‘probability’ to the hypothesis that there is a world of external physical things?

What, then, is the problem of the external world? Certainly it is not whether there is an external world, as this is taken for granted. Instead, the problem is an epistemological one which, in a rough approximation, can be formulated by asking whether, and if so how, a person gains knowledge of the external world. However, the problem seems to admit of an easy solution: there is knowledge of the external world, which persons acquire primarily by perceiving the objects and events which make up the external world.

The main reason for doubting this easy solution is that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. It is claimed that I would not have perceptual knowledge that there is a brown and rectangular table before me unless I knew that something then appeared brown and rectangular. Hence, knowledge of the table is dependent upon knowledge of how it appears. Alternatively expressed, if there is knowledge of the table at all, it is indirect knowledge, secured only if the proposition about the table may be inferred from a proposition about appearances. If so, epistemological direct realism is false.

The significance of this emerges when one asks what evidence or consideration really supports it. The crucial question is whether any part of the argument from illusion really forces us to abandon perceptual direct realism. The answer, it may be argued, is ‘no’. A key premise in the relativity argument links how something appears with direct perception: the fact that an object appears a certain way is supposed to entail that one directly perceives something which is that way. But certainly we do not think that the proposition expressed by ‘The book appears worn and dusty and more than two hundred years old’ entails that the observer directly perceives something which is worn and dusty and more than two hundred years old (Chisholm, 1964). And there are countless other examples like this one.

Proponents of the argument from illusion might complain that the inference they favour works only for certain adjectives, specifically for adjectives referring to non-relational sensible qualities such as colour, taste, shape, and the like. Such a move, however, requires an argument which shows why the inference works in these restricted cases and fails in all others. No such argument has ever been provided, and it is difficult to see what it might possibly be.

If the argument from illusion is defused, the major threat facing perceptual direct realism will have been removed, and there will no longer be any real motivation for the problem of the external world. Of course, even if perceptual direct realism is reinstated, this does not settle whether the argument from illusion suffices to refute other forms of perceptual realism. Epistemological worries might also arise even for one who accepts perceptual direct realism; the claim would be that knowledge of seeing something blue depends on knowledge of how things appear. However, there is reason to be suspicious here, for it is not clear whether the dependence is ‘epistemic’ or ‘semantic’. The dependence is semantic if, in order to understand what it is to see something blue, one must also understand what it is for something to look blue. But this may be true even when the belief that one is seeing something blue is not epistemically dependent on or based upon the belief that something looks blue. Merely claiming that there is a dependence relation does not discriminate between epistemic and semantic dependence. Moreover, there is reason to think the dependence is not epistemic: in general, observers rarely have beliefs about how objects appear, but this fact does not impugn their knowledge that they are seeing, e.g., blue objects.

This criticism of the case for the problem of the external world is narrow, in the sense that it focuses only on individual elements within the argument; a broader response questions the assumptions on which the argument seems to rest. Those assumptions are foundationalist in character: knowledge and justified belief are divided into the basic, immediate and non-inferential cases, and the non-basic, inferential knowledge and justified belief which is supported by the basic. However, though foundationalism was widely assumed when the problem of the external world was given currency by Descartes and the classical empiricists, it has since been challenged, and there are now well-worked alternative accounts of knowledge and justified belief, some of which seem as plausible as the most tenable version of foundationalism. So we have some good reason to suspect, contrary to what one might initially have thought, that the problem of the external world just does not arise, at least not in the forms in which it has usually been presented.

The problem of the external world, and the case for or against direct realism, is closely bound up with how perception presents objects from a point of reference. Here an object is a unified and coherent segment of the perceived array that can be perceived as having certain properties and as standing in certain relations to other objects (such as the property of having a determinate shape). One way of putting the relevant distinction, derived ultimately from Alexius Meinong, is that the intentional attitudes we ordinarily call ‘perceiving’ and ‘remembering’ provide ‘presumptive evidence’, that is to say, prima facie evidence, for their intentional objects. For example, believing that one is looking at a group of people tends to justify the belief that there is a group of people that one is looking at. How, then, are we to distinguish merely prima facie justification from the real thing? This type of solution would seem to call for principles that specify, by reference to further facts of inner perception, the conditions under which merely prima facie justification may become real justification.

Those who speak of prima facie reasons may do so in either of two ways: (1) we have a prima facie duty to keep our promises if every action of promise-keeping is to that extent right, if all actions of promise-keeping are the better for it, and (2) an action may be a prima facie duty in virtue of some property it has, in this sense, even though it is wrong overall, and so not a ‘duty proper’.

However, what is required is an account of how such developmental progress in articulating one’s thoughts can be gained. Developmental considerations do circumscribe the form that such an account will take, but they cannot be conclusive until we have looked more closely at the bases on which the relevant contents are distinguished. What they strongly suggest is that the move from implicit to explicit understanding involves a developing, rather than purely reactive, manifestation of the relevant representational abilities.

Logical positivism was ‘positivist’ in its adherence to the doctrine that science is the only form of knowledge and that there is nothing in the universe beyond what can in principle be scientifically known. It was ‘logical’ in its dependence on developments in logic and mathematics in the early years of the twentieth century which were taken to reveal how a priori knowledge of necessary truths is compatible with a thoroughgoing empiricism.

The exclusiveness of a scientific world-view was to be secured by showing that everything beyond the reach of science is strictly or ‘cognitively’ meaningless, in the sense of being incapable of truth or falsity, and so not a possible object of cognition. This required a criterion of meaningfulness, and it was found in the idea of empirical verification. A sentence is said to be cognitively meaningful if and only if it can be verified or falsified in experience. This is not meant to require that the sentence be conclusively verified or falsified, since universal scientific laws or hypotheses (which are supposed to pass the test) are not logically deducible from any amount of actually observed evidence. The criterion is accordingly to be understood to require only verifiability or falsifiability, in the sense of empirical evidence which would count either for or against the truth of the sentence in question, without having to imply it logically. Verification or confirmation is not necessarily something that can be carried out by the person who entertains the sentence or hypothesis in question, or even by anyone at all at the stage of intellectual and technological development achieved at the time it is entertained. A sentence is cognitively meaningful if and only if it is in principle empirically verifiable or falsifiable.

Anything which does not fulfil this criterion is declared literally meaningless. There is no significant ‘cognitive’ question as to its truth or falsity: It is not an appropriate object of enquiry. Moral and aesthetic and other ‘evaluative’ sentences are held to be neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless. They are, at best, expressions of feeling or preference which are neither true nor false. Whatever is cognitively meaningful and therefore factual is value-free. The positivists claimed that many of the sentences of traditional philosophy, especially those in what they called ‘metaphysics’, also lack cognitive meaning and say nothing that could be true or false. But they did not spend much time trying to show this in detail about the philosophy of the past. They were more concerned with developing a theory of meaning and of knowledge adequate to the understanding and perhaps even the improvement of science.

Nevertheless, we believe not only in bodies, but also in persons, or selves, which continue to exist through time, and this belief too can be explained only by the operation of certain ‘principles of the imagination’. We never directly perceive anything we can call ourselves: the most we can be aware of in ourselves are our constantly changing momentary perceptions, not the mind or self which has them. For Hume (1711-76), there is nothing that really binds the different perceptions together; we are led into the ‘fiction’ that they form a unity only because of the way in which the thought of such series of perceptions works upon the mind. ‘The mind is a kind of theatre, where several perceptions successively make their appearance . . . There is properly no simplicity in it at one time, nor identity in different, whatever natural propensity we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind.’

Leibniz held, in opposition to Descartes, that adult humans can have experiences of which they are unaware: experiences which affect what they do, but which are not brought to self-consciousness. Yet there are creatures, such as animals and infants, which completely lack the ability to reflect on their experiences, and to become aware of them as experiences of theirs. The unity of a subject’s experience, which stems from his capacity to recognize all his experience as his, was dubbed by Kant the transcendental unity of apperception, ‘apperception’ being Leibniz’s term for inner awareness or self-consciousness, in contrast with ‘perception’ or outer awareness. This unity is transcendental, rather than empirical: it is presupposed in experience and cannot be derived from it. Kant used the need for this unity as the basis of his attempted refutation of scepticism about the external world. He argued that my experiences could only be united in one self-consciousness if at least some of them were experiences of a law-governed world of objects in space. Outer experience is thus a necessary condition of inner awareness.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a ‘correctness condition’ for that judgement, a condition which is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements: a thinker’s visual perception, for instance, can give him good reason for judging that a given content holds. Take the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object, or property, or function, which makes the practices of judgement and inference in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue: it would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property which in fact makes the judgemental practices in the possession condition yield true judgements, or truth-preserving inferences.

What is more, innate ideas have been variously defined by philosophers either as ideas consciously present to the mind prior to sense experience (the occurrent sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).

Understood in either way, they were invoked to account for our recognition of certain truths without recourse to experiential verification, such as those of mathematics, or to justify certain moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.

One difficulty with the doctrine is that it is sometimes formulated as one about concepts or ideas which are held to be innate and at other times as one about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning: our idea of God, for example, is taken as a source for the meaning of the word God. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. However, this clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God, but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept has in recent decades been revitalized through its employment in Noam Chomsky’s influential account of the mind’s linguistic capabilities.

The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize propositions that cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. Since there was no plausible post-natal source, the recollection must refer back to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were important truths innate in human beings and that it was the senses which hindered their proper apprehension.

The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists such as Henry More and Ralph Cudworth added considerable support.

Locke’s rejection of innate ideas and his alternative empiricist account was powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.

The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions with the fourfold distinction, analytic/synthetic and a priori/a posteriori, did nothing to encourage a return to the innate ideas doctrine, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and providing a basis for regarding some propositions as necessarily true.

Nevertheless, according to Kant, our knowledge arises from two fundamentally different faculties of the mind, sensibility and understanding. He criticized his predecessors for running these faculties together: Leibniz for treating sensing as a confused mode of understanding, and Locke for treating understanding as an abstracted mode of sensing. Kant held that each of the faculties operates with its own distinctive type of mental representation. Concepts, the instruments of the understanding, are mental representations that apply potentially to many things in virtue of their possession of a common feature. Intuitions, the instruments of sensibility, are representations that refer to just one thing and to that thing directly (a role played in Russell’s philosophy by ‘acquaintance’). Through intuitions, objects are given to us, Kant said; through concepts they are thought.

‘Thoughts without content are empty’, he says in an often quoted remark, and ‘intuitions without concepts are blind’. Exactly what Kant means by the remark is a debated question, answered in different ways by scholars who bring different elements of Kant’s text to bear on it. A minimal reading is that it is only propositionally structured knowledge that requires the collaboration of intuition and concept: this view allows that intuitions without concepts constitute some kind of non-judgemental awareness. A stronger reading is that it is reference or intentionality that depends on intuition and concept together, so that the blindness of intuition without concept is its failure to refer to an object. A more radical view yet is that intuitions without concepts are indeterminate, a mere blur, perhaps nothing at all. This last interpretation, though admittedly suggested by some things Kant says, is at odds with his official view about the separation of the faculties.

‘Content’ has become a technical term in philosophy for whatever it is a representation has that makes it semantically evaluable. Thus, a statement is sometimes said to have a proposition or truth condition as its content, and a term is sometimes said to have a concept as its content. Much less is known about how to characterize the contents of non-linguistic representations than is known about characterizing linguistic representations. ‘Content’ is a useful term precisely because it allows one to abstract away from questions about what semantic properties representations have: a representation’s content is just whatever it is that underwrites its semantic evaluation.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude; for instance, several philosophers would prefer to say that knowledge entails psychological certainty, or acceptance. Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief, or a facsimile, are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, nor vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato in view of his claim that knowledge is infallible while belief or opinion is fallible (Republic). Nonetheless this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones cites linguistic evidence to back up the incompatibility thesis: he notes that people oftentimes say ‘I don’t believe she is guilty. I know she is’. However, this evidence is not decisive, for the speaker may simply mean ‘I don’t just believe she is guilty, I know she is’, where ‘just’ makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: ‘You didn’t hurt him, you killed him’.

H.A. Prichard (1966) offers a defence of the incompatibility thesis which hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that this confidence can never be complete is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version deals with psychological certainty rather than belief: knowledge can exist in the absence of confidence about the item known, although knowledge might also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. Woozley explains this tension using a distinction between conditions under which we are justified in making a claim, such as a claim to know something, and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, it would nonetheless be inappropriate for me to claim that I know such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example: Walter has forgotten that he learned some English history years prior, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the belief that the Battle of Hastings took place in 1066. A fortiori he would deny being sure, or having the right to be sure, that 1066 was the correct date. Radford would nonetheless insist that Walter knows when the Battle occurred, since clearly he remembered the correct date. Radford admits that it would be inappropriate for Walter to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is appropriate to claim knowledge: when we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.

Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Walter lacks beliefs about English history is plausible on this Cartesian picture, since Walter does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting the Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and hasn’t Radford already adopted a behaviourist conception of knowledge?). Since Walter gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Walter does know that the Battle of Hastings took place in 1066; Armstrong will grant Radford that point. However, Armstrong suggests that Walter believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. What is more, Armstrong insists, Walter also believes that the Battle did occur in 1066. After all, had Walter been mistaught that the Battle occurred in 1066, and had he forgotten being ‘taught’ this and subsequently ‘guessed’ that it took place in 1066, we would surely describe the situation as one in which Walter’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford’s original case as one in which Walter’s true belief became unconscious but persisted long enough to cause his guess. Wherefore, although Walter consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

The externalism/internalism distinction has been mainly applied to theories of epistemic justification: a theory is internalist if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, and externalist if it allows that some of them need not be. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any explicit explication. Also, it has been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought content.

Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.

Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist position. Obviously too, a view that is externalist relative to a stronger version of internalism, by not requiring that the believer actually be aware of all justifying factors, could still be internalist relative to a weaker version, by requiring that he at least be capable of becoming aware of them.

The most prominent recent externalist views have been versions of reliabilism, whose main requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

The awareness generated by an introspective act can have varying degrees of complexity. It might be a simple knowledge of (mental) things, such as a particular perception-episode, or it might be the more complex knowledge of truths about one’s own mind. In this latter, full-blown judgemental form, introspection is usually the self-ascription of psychological properties and, when linguistically expressed, results in statements like ‘I am watching the spider’ or ‘I am repulsed’.

In psychology this deliberate inward look becomes a scientific method when it is ‘directed toward answering questions of theoretical importance for the advancement of our systematic knowledge of the laws and conditions of mental processes’. In philosophy, introspection (sometimes also called ‘reflection’) remains simply that notice which mind takes of its own operations and has been used to serve the following important functions:

(1) Methodological: Thought experiments are a powerful tool in philosophical investigation. The Ontological Argument, for example, asks us to try to think of the most perfect being as lacking existence, and Berkeley’s Master Argument challenges us to conceive of an unseen tree; conceptual results are then drawn from our failure or success. For such experiments to work, we must not only have (or fail to have) the relevant conceptions but also know that we have (or fail to have) them ~ presumably by introspection.

(2) Metaphysical: A philosophy of mind needs to take cognizance of introspection. One can argue for ‘ghostly’ mental entities, for ‘qualia’, for ‘sense-data’, by claiming introspective awareness of them. First-person psychological reports can have special consequences for the nature of persons and personal identity: Hume, for example, was content to reject the notion of a soul-substance because he failed to find such a thing by ‘looking within’. Moreover, some philosophers argue for the existence of additional perspectival facts ~ the fact of ‘what it is like’ to be the person I am or to have an experience of such-and-such a kind. Introspection, as our access to such facts, becomes important when we consider whether any account of the world could be complete.

(3) Epistemological: Surprisingly, the most important use made of introspection has been in accounting for our knowledge of the outside world. According to a foundationalist theory of justification, an empirical belief is either basic and ‘self-justifying’ or justified in relation to basic beliefs. Basic beliefs, therefore, constitute the rock-bottom of all justification and knowledge. Now introspective awareness is said to have a unique epistemological status: in it, we are said to achieve the best possible epistemological position, and consequently introspective beliefs are basic and thereby constitute the foundation of all justification.

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than some other belief? The same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs, the role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than from any other belief, just as I infer that belief from different things than I infer other beliefs from.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is these systematic relations that give the belief the specific content it has; they are the fundamental source of the content of beliefs. That is how coherence comes in: a belief has the content that it does because of the way in which it coheres within a system of beliefs. Weak coherence theories affirm that coherence is one determinant of the content of belief, whereas strong coherence theories affirm that coherence is the sole determinant of the content of belief.

Nonetheless, the concept of the given refers to the immediate apprehension of the contents of sense experience, as expressed in first-person, present-tense reports of appearances. Apprehension of the given is seen as immediate both in a causal sense, since it lacks the usual causal chain involved in perceiving real qualities of physical objects, and in an epistemic sense, since judgements expressing it are justified independently of all other beliefs and evidence. Some proponents of the idea of the ‘given’ maintain that its apprehension is absolutely certain: infallible, incorrigible and indubitable. It has been claimed also that a subject is omniscient with regard to the given ~ if a property appears, then the subject knows this.

Without some independent indication that some of the beliefs within a coherent system are true, coherence in itself is no indication of truth. Fairy stories can cohere, but our criteria for justification must indicate to us the probable truth of our beliefs. Hence, within any system of beliefs there must be some privileged class with which others must cohere to be justified. In the case of empirical knowledge, such privileged beliefs must represent the point of contact between subject and world: they must originate in our perception of the world. When challenged, however, we justify our ordinary perceptual beliefs about physical properties by appeal to beliefs about appearances. The latter seem more suitable as foundational, since there is no class of more certain perceptual beliefs to which we appeal for their justification.

The argument that foundations must be certain was offered by Lewis (1946). He held that no proposition can be probable unless some are certain. If the probability of all propositions or beliefs were relative to evidence expressed in others, and if these relations were linear, then any regress would apparently have to terminate in propositions or beliefs that are certain. But Lewis shows neither that such relations must be linear nor that regresses cannot terminate in beliefs that are merely probable or justified in themselves without being certain or infallible.
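
Lewis does not set the argument out formally, but its shape can be illustrated with a minimal probabilistic sketch (the chain p_1, p_2, \ldots, p_n is an illustration, not Lewis’s own notation). If each proposition’s probability is fixed only relative to the next member of a linear chain of evidence, we have at best the lower bounds

P(p_1) \ge P(p_1 \mid p_2)\, P(p_2) \ge P(p_1 \mid p_2)\, P(p_2 \mid p_3)\, P(p_3) \ge \cdots

and no determinate probability is ever assigned to p_1 unless the chain terminates in some p_n whose probability is fixed non-relatively (for Lewis, with certainty, so that P(p_n) = 1). The objection above is precisely that nothing shows the chain must be linear, or that it could not terminate in a p_n that is merely probable or justified in itself.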

Arguments against the idea of the given originate with Kant (1724-1804), who argues that percepts without concepts do not yet constitute any form of knowing. Being non-epistemic, they presumably cannot serve as epistemic foundations. Once we recognize that we must apply concepts of properties to appearances and formulate beliefs utilizing those concepts before the appearances can play any epistemic role, it becomes more plausible that such beliefs are fallible. The argument was developed by Wilfrid Sellars (1963), according to whom the idea of the given involves a confusion between sensing particulars (having sense impressions), which is non-epistemic, and having non-inferential knowledge of propositions referring to appearances. The former may be necessary for acquiring perceptual knowledge, but it is not itself a primitive kind of knowing. Its being non-epistemic renders it immune from error, but also unsuitable for epistemological foundations. The latter, non-inferential perceptual knowledge, is fallible, requiring concepts acquired through trained responses to public physical objects.

Contemporary foundationalists deny the coherentist’s claim while eschewing the claim that foundations, in the form of reports about appearances, are infallible. They seek alternatives to the given as foundations. Although arguments against infallibility are sound, other objections to the idea of foundations are not. That concepts of objective properties are learned prior to concepts of appearances, for example, implies neither that claims about appearances are less certain than claims about objective properties, nor that the latter are prior in chains of justification. That there can be no knowledge prior to the acquisition and consistent application of concepts allows for propositions whose truth requires only consistent application of concepts, and this may be so for some claims about appearances; coherentists would add, however, that such beliefs stand in need of justification themselves and so cannot be foundations.

Until very recently it could have been said that most approaches to the philosophy of science were 'cognitivist'. This includes 'logical positivism', for nearly all of those who wrote about the nature of science would have agreed that science ought to be 'value-free'. This had been a particular emphasis of the first positivists, as it would be of their twentieth-century successors. Science, so it is said, deals with 'facts', and facts and values are irreducibly distinct. Facts are objective: they are what we seek in our knowledge of the world. Values are subjective: they bear the mark of human interest, they are the radically individual products of feeling and desire. Value cannot, therefore, be inferred from fact, nor can fact be influenced by value. There were philosophers, notably some in the Kantian tradition, who viewed the relation between fact and value rather differently. But the dominant view reflected the legacy of three centuries of largely empiricist reflection on the 'new' sciences ushered in by Galileo Galilei (1564-1642), the Italian scientist whose achievements belong to the history of physics and astronomy rather than to natural philosophy.

The philosophical importance of Galileo's science rests largely upon the following closely related achievements: (1) his stunningly successful arguments against Aristotelean science, (2) his proofs that mathematics is applicable to the real world, (3) his conceptually powerful use of experiments, both actual and imagined, (4) his treatment of causality, replacing appeal to hypothesized natural ends with a quest for efficient causes, and (5) his unwavering confidence in the new style of theorizing that would come to be known as 'mechanical explanation'.

A century later, the maxim that scientific knowledge is 'value-laden' seems almost as entrenched as its opposite was earlier. It is supposed that the wall between fact and value has been breached, and philosophers of science seem quite at home with the thought that science and value may be closely intertwined after all. What has happened to bring about such an apparently radical change? What are its implications for the objectivity of science, the prized characteristic that, from Plato's time onwards, has been assumed to set off real knowledge (epistēmē) from mere opinion (doxa)? To answer these questions adequately, one would first have to know something of the reasons behind the decline of logical positivism, as well as of the diversity of the philosophies of science that have succeeded it.

More generally, the interdisciplinary field of cognitive science is burgeoning on several fronts. Contemporary philosophical reflection about the mind ~ which has been quite intensive ~ has been influenced by this empirical inquiry, to the extent that the boundary lines between them are blurred in places.

Nonetheless, the philosophy of mind at its core remains a branch of metaphysics, traditionally conceived. Philosophers continue to debate foundational issues in terms not radically different from those in vogue in previous eras. Many issues in the metaphysics of science hinge on the notion of 'causation'. This notion is as important in science as it is in everyday thinking, and much scientific theorizing is concerned specifically to identify the 'causes' of various phenomena. However, there is little philosophical agreement on what it is to say that one event is the cause of another.

Modern discussion of causation starts with the Scottish philosopher, historian, and essayist David Hume (1711-76). Hume denies that we have innate ideas, that the causal relation is observably anything other than 'constant conjunction', that there are observable necessary connections anywhere, and that there is either an empirical or a demonstrative proof for the assumptions that the future will resemble the past and that every event has a cause. He denies, likewise, that there is an irresolvable dispute between advocates of free will and determinism, that extreme scepticism is coherent, and that we can find the experiential source of our ideas of self, substance, or God.

According to Hume (1978), one event causes another if and only if events of the type to which the first event belongs regularly occur in conjunction with events of the type to which the second event belongs. This formulation, however, leaves a number of questions open. Firstly, there is the problem of distinguishing genuine 'causal laws' from 'accidental regularities'. Not all regularities are sufficiently law-like to underpin causal relationships. Being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that the screws are made of copper because they are in my desk. Secondly, the idea of constant conjunction does not give a 'direction' to causation. Causes need to be distinguished from effects. But knowing that A-type events are constantly conjoined with B-type events does not tell us which of 'A' and 'B' is the cause and which the effect, since constant conjunction is itself a symmetric relation. Thirdly, there is a problem about 'probabilistic causation'. When we say that causes and effects are constantly conjoined, do we mean that the effects are always found with the causes, or is it enough that the causes make the effect probable?
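
One standard way of making the probabilistic reading precise ~ a sketch in modern notation, not Hume's own ~ is to require that the cause raise the probability of its effect:

$$P(E \mid C) \;>\; P(E \mid \neg C)$$

where 'C' and 'E' stand for the occurrence of events of the cause-type and the effect-type respectively; the strict constant-conjunction reading is then the limiting case in which $P(E \mid C) = 1$.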

Many philosophers of science during the past century have preferred to talk about 'explanation' rather than causation. According to the covering-law model of explanation, something is explained if it can be deduced from premises which include one or more laws. As applied to the explanation of particular events, this implies that one particular event can be explained if it is linked by a law to some other particular event. However, while they are often treated as separate theories, the covering-law account of explanation is at bottom little more than a variant of Hume's constant conjunction account of causation. This affinity shows up in the fact that the covering-law account faces essentially the same difficulties as Hume's: (1) in appealing to deduction from 'laws', it needs to explain the difference between genuine laws and accidentally true regularities; (2) it licenses the explanation of causes by effects, as well as of effects by causes ~ after all, it is as easy to deduce the height of the flag-pole from the length of its shadow and the laws of optics as the reverse; (3) are the laws invoked in explanation required to be exceptionless and deterministic, or is it acceptable, say, to appeal to the merely probabilistic fact that smoking makes cancer more likely, in explaining why some particular person develops cancer?

Nevertheless, one of the central aims of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies exploited in the sciences. Another common goal is to construct philosophically illuminating analyses or explications of central theoretical concepts invoked in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanations, and there has been a great deal of work on the structure of evolutionary theory and on such crucial concepts as fitness and biological function. By introducing 'teleological considerations', this account views beliefs as states with a biological purpose and analyses their truth conditions specifically as those conditions that they are biologically supposed to covary with.

A teleological theory of representation needs to be supplemented with a philosophical account of biological purpose, generally a selectionist account, according to which an item 'F' has purpose 'G' if and only if it is now present as a result of past selection by some process which favoured items with 'G'. So a given belief type will have the purpose of covarying with 'P', say, if and only if some mechanism has selected it because it has covaried with 'P' in the past.
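
Put schematically ~ the notation below is mine, introduced only to display the structure of the selectionist proposal, not drawn from the text itself:

$$\mathrm{Purpose}(F, G) \;\leftrightarrow\; F \text{ is present now as a result of past selection by a process that favoured items because they did } G$$

$$\mathrm{Content}(B, P) \;\leftrightarrow\; \text{some mechanism selected belief type } B \text{ because its tokens covaried with } P \text{ in the past}$$

On this reading, the content-fixing clause is simply the purpose-fixing clause applied to the special case in which the item is a belief type and the favoured effect is covariation with a worldly condition.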

Along the same lines, a teleological theory holds that 'r' represents 'x' if it is r's function to indicate (i.e., covary with) 'x'. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical theories of functions and a-historical theories. Historical theories individuate functional states (hence, contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was 'learned' or the way it evolved. A historical theory might hold that the function of 'r' is to indicate 'x' only if the capacity to token 'r' was developed (selected, learned) because it indicates 'x'. Thus, a state physically indistinguishable from 'r' (physical states being a-historical) but lacking r's historical origins would not represent 'x' according to historical theories.

The American philosopher of mind Jerry Alan Fodor (1935-) is known for his resolute 'realism' about the nature of mental functioning, taking the analogy between thought and computation seriously. Fodor believes that mental representations should be conceived as individual states with their own identities and structures, like formulae transformed by processes of computation or thought. His views are frequently contrasted with those of 'holists' such as the American philosopher Donald Herbert Davidson (1917-2003), or with 'instrumentalists' about mental ascription, such as the British philosopher of logic and language Michael Anthony Eardley Dummett (1925-). In recent years he has become a vocal critic of some of the aspirations of cognitive science.

Nonetheless, the attempt to resolve the problems of 'causation' and 'content' by appeal to teleology can be put as follows. We suppose that there is a causal path from As to 'A's and a causal path from Bs to 'A's, and our problem is to find some difference between B-caused 'A's and A-caused 'A's in virtue of which the former but not the latter misrepresent. Perhaps the two paths differ in their counterfactual properties. In particular, although both As and Bs do in fact cause 'A's, perhaps only As would cause 'A's in ~ as one can say ~ 'optimal circumstances'. We could then hold that a symbol expresses its 'optimal property', viz., the property that would causally control its tokening in optimal circumstances. Correspondingly, when the tokening of a symbol is causally controlled by properties other than its optimal property, the tokens that eventuate are ipso facto wild.

Suppose, then, that this story about 'optimal circumstances' is proposed as part of a naturalized semantics for mental representations. In that case it is, of course, essential that it be possible to say what the optimal circumstances for tokening a mental representation are in terms that are not themselves either semantical or intentional. (It would not do, for example, to identify the optimal circumstances for tokening a symbol as those in which the tokens are true; that would be to assume precisely the sort of semantical notion that the theory is supposed to naturalize.) Accordingly, the suggestion ~ to put it in a nutshell ~ is that appeals to 'optimality' should be buttressed by appeals to 'teleology': optimal circumstances are the ones in which the mechanisms that mediate symbol tokening are functioning 'as they are supposed to'. In the case of mental representations, these would paradigmatically be circumstances where the mechanisms of belief fixation are functioning as they are supposed to.

So, then: the teleology of the cognitive mechanisms determines the optimal conditions for belief fixation, and the optimal conditions for belief fixation determine the content of beliefs. So the story goes.

To put this objection in slightly other words: the teleology story perhaps strikes one as plausible in that it understands one normative notion ~ truth ~ in terms of another normative notion ~ optimality. But this appearance is spurious: there is no guarantee that the kind of optimality that teleology reconstructs has much to do with the kind of optimality that the explication of 'truth' requires. When mechanisms of repression are working 'optimally' ~ when they're working 'as they're supposed to' ~ what they deliver are likely to be 'falsehoods'.

Once again, there's no obvious reason why conditions that are optimal for the tokening of one sort of mental symbol need be optimal for the tokening of other sorts. Perhaps the optimal conditions for fixing beliefs about very large objects are different from the optimal conditions for fixing beliefs about very small ones, and different again from the optimal conditions for fixing beliefs about sights. But this raises the possibility that if we're to say which conditions are optimal for the fixation of a belief, we'll have to know what the content of the belief is ~ what it's a belief about. Our explication of content would then require a notion of optimality, whose explication in turn requires a notion of content, and the resulting pile would clearly be unstable.

Functional role theories, by contrast, hold that r's representing 'x' is grounded in the functional role 'r' has in the representing system, i.e., in the relations imposed by specified cognitive processes between 'r' and other representations in the system's repertoire. Functional role theories take their cue from such common-sense ideas as that people cannot believe that cats are furry if they do not know that cats are animals or that fur is like hair.

That being said, nowhere is the new period of collaboration between philosophy and other disciplines more evident than in the new subject of cognitive science. Cognitive science has been 'interdisciplinary' in character from its very beginning, and is in effect the joint property of psychology, linguistics, philosophy, computer science and anthropology. There is, therefore, a great variety of different research projects within cognitive science, but its central area, its core ideology, rests on the assumption that the mind is best viewed as analogous to a digital computer. The basic idea behind cognitive science is that recent developments in computer science and artificial intelligence have enormous importance for our conception of human beings. The basic inspiration for cognitive science went something like this: human beings do information processing; computers are designed precisely to do information processing; therefore, one way to study human cognition ~ perhaps the best way to study it ~ is to study it as a matter of computational information processing. Some cognitive scientists think that the computer is just a metaphor for the human mind: others think that the mind is literally a computer program. But it is fair to say that without the computational model there would not have been a cognitive science as we now understand it.

The 'Essay Concerning Human Understanding' is the first modern systematic presentation of empiricist epistemology, and as such had important implications for the natural sciences and for the philosophy of science generally. Like his predecessor Descartes, the English philosopher John Locke (1632-1704) began his account of knowledge from the conscious mind aware of ideas. Unlike Descartes, however, he was concerned not to build a system based on certainty, but to identify the mind's scope and limits. The premise upon which Locke built his account, including his account of the natural sciences, is that the ideas which furnish the mind are all derived from experience. He thus totally rejected any kind of innate knowledge. In this he was consciously opposing Descartes, who had argued that it is possible to come to knowledge of fundamental truths about the natural world through reason alone. Descartes (1596-1650) had argued that we can come to know the essential nature of both 'mind' and 'matter' by pure reason. Locke accepted Descartes's criterion of clear and distinct ideas as the basis for knowledge, but denied any source for them other than experience. Information that came in via the five senses (ideas of sensation) and ideas engendered from inner experience (ideas of reflection) were the building blocks of the understanding.

Locke combined his commitment to 'the new way of ideas' with his espousal of the 'corpuscular philosophy' of the Irish scientist Robert Boyle (1627-92). This, in essence, was an acceptance of a revised, more sophisticated version of the account of matter and its properties that had been advocated by the ancient atomists and recently supported by Galileo (1564-1642) and Pierre Gassendi (1592-1655). Boyle argued from theory and experiment that there were powerful reasons to justify some kind of corpuscular account of matter and its properties. He called the latter 'qualities', which he distinguished as primary and secondary. The distinction between primary and secondary qualities may be reached by two rather different routes: either from the nature or essence of matter, or from the nature and essence of experience, though in practice these have tended to run together. The former considerations make the distinction seem like an a priori, or necessary, truth about the nature of matter, while the latter make it appear to be an empirical hypothesis. Locke, too, accepted this account, arguing that the ideas we have of the primary qualities of bodies resemble those qualities as they are in the object, whereas the ideas of secondary qualities, such as colour, taste, and smell, do not resemble their causes in the object.

There is no strong connection between acceptance of the primary-secondary quality distinction and Locke's empiricism: Descartes had also argued strongly for it, it won wide acceptance among natural philosophers, and Locke embraced it within his more comprehensive empiricist philosophy. But Locke's empiricism did have major implications for the natural sciences, as he well realized. His account begins with an analysis of experience. All ideas, he argues, are either simple or complex. Simple ideas are those like the red of a particular rose or the roundness of a snowball. Complex ideas, such as our ideas of the rose or the snowball, are combinations of simple ideas. We may create new complex ideas in our imagination ~ a parallelogram, for example. But simple ideas can never be created by us: we just have them or not, and characteristically they are caused in us, for example, by the impact on our senses of rays of light or vibrations of sound in the air coming from a particular physical object. Since we cannot create simple ideas, and they are determined by our experience, our knowledge is in a very strict and uncompromising way limited. Besides, our experiences are always of the particular, never of the general. It is this particular simple idea or that particular complex idea that we apprehend. We never in that sense apprehend a universal truth about the natural world, but only particular instances. It follows from this that all claims to generality about that world ~ for example, all claims to identify what were then beginning to be called the laws of nature ~ must to that extent go beyond our experience and thus be less than certain.

The Scottish philosopher, historian, and essayist David Hume (1711-76) gives his famous discussion in both his major philosophical works, the 'Treatise' (1739) and the 'Enquiry' (1777). The discussion is couched in terms of the concept of causality, so that where we are accustomed to talk of laws, Hume contends that causation involves three ideas:

1. That there should be a regular concomitance between events of the type of the cause and those of the type of the effect.

2. That the cause event should be contiguous with the effect event.

3. That the cause event should necessitate the effect event.

Tenets (1) and (2) occasion no difficulty for Hume, since he believes that there are patterns of sensory impressions non-problematically related to the ideas of regular concomitance and of contiguity. But the third requirement is deeply problematic, in that the idea of necessity that figures in it seems to have no sensory impression correlated with it. However carefully and attentively we scrutinize a causal process, we do not seem to observe anything that might be the observed correlate of the idea of necessity. We do not observe any kind of activity, power, or necessitation. All we ever observe is one event following another, which is logically independent of it. Nor is the necessity logical necessity, since, as Hume observes, one can jointly assert the existence of the cause and a denial of the existence of the effect, as specified in the causal statement or the law of nature, without contradiction. What, then, are we to make of the seemingly central notion of necessity that is deeply embedded in the very idea of causation, or lawfulness? To this query, Hume gives an ingenious and telling answer. There is an impression corresponding to the idea of causal necessity, but it is a psychological phenomenon: our expectation that an event similar to those we have already observed to be correlated with the cause-type of event will occur in this case too. Where does that impression come from? It is created as a kind of mental habit by the repeated experience of regular concomitance between events of the type of the cause and events of the type of the effect. The idea of necessity thus corresponds to nothing more than the impression produced by regular concomitance ~ and the law of nature then asserts nothing but the existence of the regular concomitance.

At this point in our narrative, the question at once arises as to whether this factor of life in nature, thus interpreted, corresponds to anything that we observe in nature. All philosophy is an endeavour to obtain a self-consistent understanding of things observed. Thus, its development is guided in two ways: one is the demand for coherent self-consistency, and the other is the elucidation of things observed. How, then, are we to conduct such comparisons with our direct observations? Should we turn to science? No. There is no way in which the scientific endeavour can detect the aliveness of things: its methodology rules out the possibility of such a finding. On this point, the English mathematician and philosopher Alfred North Whitehead (1861-1947) comments that science can find no individual enjoyment in nature, science can find no creativity in nature; it finds mere rules of succession. These negations are true of natural science; they are inherent in its methodology. The reason for this blindness of physical science lies in the fact that such science deals with only half the evidence provided by human experience. It divides the seamless coat ~ or, to change the metaphor into a happier form, it examines the coat, which is superficial, and neglects the body, which is fundamental.

Whitehead claims that the methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience: it neglects half of the evidence. Working within Descartes's dualistic frame of reference, with matter and mind as separate and incommensurate, science limits itself to the study of objectivized phenomena, neglecting the subject and the mental events that are his or her experience.

Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect 'blindness', but there is no need to rely on suspicions. This blindness is clearly evident. Scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton's law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity ~ gravity. According to this law, the gravitational attraction between two objects is inversely proportional to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? Whitehead notes, 'None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail'.
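
In modern notation, the regularity Whitehead has in mind is the inverse-square law:

$$F \;=\; G\,\frac{m_1 m_2}{r^2}$$

where $F$ is the gravitational attraction between two bodies of masses $m_1$ and $m_2$ separated by a distance $r$, and $G$ is the gravitational constant. The formula states the regularity exactly, but, as Whitehead stresses, it does not say why the exponent is two, or why such a law should hold at all.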

This analysis reveals that the capacity of science to fathom the depths of reality is limited. For example, if reality is in fact made up of discrete units, and these units have the fundamental character of being 'pulsing throbs of experience', then science may be in a position to discover the discreteness, but it has no access to the subjective side of nature since, as the Austrian physicist Erwin Schrödinger (1887-1961) points out, we 'exclude the subject of cognizance from the domain of nature that we endeavour to understand'. It follows that in order to find 'the elucidation of things observed' in relation to the experiential or aliveness aspect, we cannot rely on science; we need to look elsewhere.

If, instead of relying on science, we rely on our immediate observation of nature and of ourselves, we find, first, that this [i.e., Descartes's] stark division between mentality and nature has no ground in our fundamental observation: we find ourselves living within nature. Secondly, we find that we should conceive mental operations as among the factors which make up the constitution of nature, and thirdly, that we should reject the notion of idle wheels in the process of nature: every factor makes a difference, and that difference can only be expressed in terms of the individual character of that factor.

Whitehead proceeds to analyse our experiences in general, and our observations of nature in particular, and ends up with 'mutual immanence' as a central theme. This mutual immanence is obvious in the case of my own experience: I am a part of the universe, and, since I experience the universe, the experienced universe is part of me. Whitehead gives an example: 'I am in the room, and the room is an item in my present experience. But my present experience is what I am now'. A generalization of this relationship to the case of any actual occasion yields the conclusion that 'the world is included within the occasion in one sense, and the occasion is included in the world in another sense'. The idea that each actual occasion appropriates its universe follows naturally from such considerations.

The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: the very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each and every actual entity is a process of prehending, or appropriating, all the other actual entities and creating one new entity out of them all, namely, itself.

There are two general strategies for distinguishing laws from accidentally true generalizations. The first stands by Hume's idea that causal connections are mere constant conjunctions, and then seeks to explain why some constant conjunctions are better than others. That is, this first strategy accepts the principle that causation involves nothing more than certain events always happening together with certain others, and then seeks to explain why some such patterns ~ the 'laws' ~ matter more than others ~ the 'accidents'. The second strategy, by contrast, rejects the Humean presupposition that causation involves nothing more than regular co-occurrence, and instead postulates a relationship of 'necessitation', a kind of 'cement' which links events that are connected by law, but not those events (like having a screw in my desk and being made of copper) that are only accidentally conjoined.

There are a number of versions of the first, Humean strategy. The most successful, originally proposed by the Cambridge mathematician and philosopher F.P. Ramsey (1903-30) and later revived by the American philosopher David Lewis (1941-2002), holds that laws are those true generalizations that can be fitted into an ideal system of knowledge. The thought is that the laws are those patterns that are explicated by basic science, either as fundamental principles themselves or as consequences of those principles, while accidents, although true, have no such explanation. Thus, 'All water at standard pressure boils at 100°C' is a consequence of the laws governing molecular bonding, but the fact that 'All the screws in my desk are copper' is not part of the deductive structure of any satisfactory science. Ramsey neatly encapsulated this idea by saying that laws are 'consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system'.

Advocates of the alternative, non-Humean strategy object that the difference between laws and accidents is not a 'linguistic' matter of deductive systematization, but rather a 'metaphysical' contrast between the kinds of links they report. They argue that there is a link in nature between being at 100°C and boiling, but not between being 'in my desk' and being 'made of copper', and that this has nothing to do with how the description of this link may fit into theories. According to the Australian philosopher D.M. Armstrong (1983), the most prominent defender of this view, the real difference between laws and accidents is simply that laws report relationships of natural 'necessitation', while accidents only report that two types of events happen to occur together.

Armstrong's view may seem intuitively plausible, but it is arguable that the notion of necessitation simply restates the problem rather than solving it. Armstrong says that necessitation involves something more than constant conjunction: if two events are related by necessitation, then it follows that they are constantly conjoined, but two events can be constantly conjoined without being related by necessitation, as when the constant conjunction is just a matter of accident. So necessitation is a stronger relationship than constant conjunction. However, Armstrong and other defenders of this view say very little about what this extra strength amounts to, except that it distinguishes laws from accidents. Armstrong's critics argue that a satisfactory account of laws ought to cast more light than this on the nature of laws.

Hume said that the earlier of two causally related events is always the cause, and the later the effect. However, there are a number of objections to using the earlier-later 'arrow of time' to analyse the directional 'arrow of causation'. For a start, it seems possible in principle that some causes and effects could be simultaneous. What is more, the idea that time is directed from 'earlier' to 'later' itself stands in need of philosophical explanation ~ and one of the most popular explanations is that the direction of time itself depends on the direction of causation: 'earlier' is explained as the direction in which causes lie, and 'later' as the direction of effects. If anything along these lines is right, we will clearly need to find some account of the direction of causation which does not itself assume the direction of time.

A number of such accounts have been proposed. David Lewis (1979) has argued that the asymmetry of causation derives from an 'asymmetry of over-determination'. The over-determination of present events by past events ~ consider a person who dies after simultaneously being shot and struck by lightning ~ is a very rare occurrence. By contrast, the multiple 'over-determination' of present events by future events is absolutely normal. This is because the future, unlike the past, will always contain multiple traces of any present event. To use Lewis's example, when the president presses the red button in the White House, the future effects do not only include the dispatch of nuclear missiles, but also the fingerprint on the button, his trembling, the further depletion of his gin bottle, the recording of the button's click on tape, the emission of light waves bearing the image of his action through the window, the warming of the wire from the passage of the signal current, and so on, and so on.

The American philosopher David Lewis (1941-2002) relates this asymmetry of over-determination to the asymmetry of causation as follows. If we suppose the cause of a given effect to have been absent, then this implies the effect would have been absent too, since (apart from freak occurrences like the lightning-shooting case) there will not be any other causes left to 'fix' the effect. By contrast, if we suppose a given effect of some cause to have been absent, this does not imply the cause would have been absent, for there are still all the other traces left to 'fix' the cause. Lewis argues that these counterfactual considerations suffice to show why causes are different from effects.
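
The underlying counterfactual analysis can be stated compactly, using the standard notation for counterfactual conditionals (the symbols are conventional in this literature, not taken from the passage above):

$$e \text{ causally depends on } c \;\iff\; \neg O(c) \;\Box\!\!\rightarrow\; \neg O(e)$$

read: had c not occurred, e would not have occurred. Lewis's claim is that, because of the asymmetry of over-determination, effects typically depend counterfactually on their causes, while causes do not depend counterfactually on their effects.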

Other philosophers appeal to a probabilistic variant of Lewis's asymmetry. Following the philosopher of science and probability theorist Hans Reichenbach (1891-1953), they note that the different causes of any given type of effect are normally probabilistically independent of each other; by contrast, the different effects of any given type of cause are normally probabilistically correlated. For example, both obesity and high excitement can cause heart attacks, but this does not imply that fat people are more likely to get excited than thin ones; yet the fact that both lung cancer and nicotine-stained fingers can result from smoking does imply that lung cancer is more likely among people with nicotine-stained fingers. So this account distinguishes effects from causes by the fact that the former, but not the latter, are probabilistically dependent on each other.
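
In probabilistic terms ~ again a standard rendering of Reichenbach's point rather than anything stated in the passage itself ~ two effects A and B of a common cause C are correlated, and the correlation is 'screened off' by the cause, whereas two independent causes of a common effect show no such correlation:

$$P(A \wedge B) \;>\; P(A)\,P(B), \qquad P(A \wedge B \mid C) \;=\; P(A \mid C)\,P(B \mid C)$$

It is this asymmetry between the joint behaviour of effects and the joint behaviour of causes that the probabilistic account uses to distinguish the direction of causation.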

However, there is another course of thought in the philosophy of science: the tradition of 'negative' or 'eliminative' induction. From the English statesman and philosopher Francis Bacon (1561-1626), and in modern times the philosopher of science Karl Raimund Popper (1902-1994), we have the idea of using logic to bring falsifying evidence to bear on hypotheses about what must universally be the case. Many thinkers accept in essence Popper's solution to the problem of demarcating proper science from its imitators, namely that the former results in genuinely falsifiable theories whereas the latter do not. Falsifiability, rightly or wrongly, has underwritten many people's objections to such ideologies as psychoanalysis and Marxism.

Hume was interested in the processes by which we acquire knowledge: the processes of perceiving and thinking, of feeling and reasoning. He recognized that much of what we claim to know derives from other people secondhand, thirdhand or worse; moreover, our perceptions and judgements can be distorted by many factors ~ by what we are studying, as well as by the very act of study itself. The main reason, however, behind his emphasis on 'probabilities and those other measures of evidence on which life and action entirely depend' is this: it is apparent that, by and large, all reasonings concerning 'matters of fact' are founded on the relation of cause and effect, and that we can never infer the existence of one object from another unless they are connected together, either mediately or immediately.

When we apparently observe a whole sequence, say of one ball hitting another, what exactly do we observe? And in the much commoner cases, when we wonder about the unobserved causes or effects of the events we observe, what precisely are we doing?

Hume recognized that a notion of 'must' or necessity is a peculiar feature of causal relations, inferences and principles, and he challenges us to explain and justify the notion. He argued that there is no observable feature of events, nothing like a physical bond, which can be properly labelled the 'necessary connection' between a given cause and its effect: events simply are, they merely occur, and there is no 'must' or 'ought' about them. However, repeated experience of pairs of events sets up a habit of expectation in us, such that when one of the pair occurs we inescapably expect the other. This expectation makes us infer the unobserved cause or unobserved effect of the observed event, and we mistakenly project this mental inference onto the events themselves. There is no necessity observable in causal relations; all that can be observed is regular sequence. There is necessity in causal inferences, but only in the mind. Once we realize that causation is a relation between pairs of events, we also realize that often we are not present for the whole sequence which we want to divide into 'cause' and 'effect'. Our understanding of the causal relation is thus intimately linked with the role of causal inference, for only causal inferences entitle us to 'go beyond what is immediately present to the senses'. But now two very important assumptions emerge behind the causal inference: the assumption that like causes, in like circumstances, will always produce like effects, and the assumption that 'the course of nature will continue uniformly the same' ~ or, briefly, that the future will resemble the past. Unfortunately, this last assumption lacks both empirical and a priori proof; that is, it can be conclusively established neither by experience nor by thought alone.

Hume frequently endorsed a standard seventeenth-century view that all our ideas are ultimately traceable, by analysis, to sensory impressions of an internal or external kind. Accordingly, he claimed that all his theses are based on 'experience', understood as sensory awareness together with memory, since only experience establishes matters of fact. But is our belief that the future will resemble the past properly construed as a belief concerning only a matter of fact? As the English philosopher Bertrand Russell (1872-1970) remarked, the real problem that Hume raises is whether future futures will resemble future pasts, in the way that past futures really did resemble past pasts. Hume declares that 'if . . . the past may be no rule for the future, all experience becomes useless and can give rise to no inference or conclusion'. And yet, he held, the supposition cannot stem from innate ideas, since there are no innate ideas on his view, nor can it stem from any abstract formal reasoning. For one thing, the future can surprise us, and no formal reasoning seems able to embrace such contingencies: for another, even animals and unthinking people conduct their lives as if they assume the future resembles the past: dogs return for buried bones, children avoid a painful fire, and so forth. Hume is not deploring the fact that we have to conduct our lives on the basis of probabilities, and he is not saying that inductive reasoning could or should be avoided or rejected. Rather, he accepted inductive reasoning but tried to show that whereas formal reasoning of the kind associated with mathematics cannot establish or prove matters of fact, factual or inductive reasoning lacks the 'necessity' and 'certainty' associated with mathematics. His position is therefore clear: because 'every effect is a distinct event from its cause', only investigation can settle whether any two particular events are causally related. Causal inferences cannot be drawn with the force of logical necessity familiar to us from deduction, but, although they lack such force, they should not be discarded. In the context of causation, inductive inferences are inescapable and invaluable. What, then, makes 'past experience' the standard of our future judgement? The answer is 'custom': it is a brute psychological fact, without which even animal life of a simple kind would be more or less impossible. 'We are determined by custom to suppose the future conformable to the past' (Hume, 1978); nevertheless, whenever we need to calculate likely events we must supplement and correct such custom by self-conscious reasoning.

Nonetheless, the causal theory of reference will fail once it is recognized that all representation must occur under some aspect, or that the extensionality of causal relations is inadequate to capture the aspectual character of reference. The only kind of causation that could be adequate to the task of reference is intentional, or mental, causation; but the causal theory of reference cannot concede that ultimately reference is achieved by some mental device, since the whole approach behind the causal theory was to try to eliminate the traditional mentalism of theories of reference and meaning in favour of objective causal relations in the world. The causal theory, though it is at present by far the most influential theory of reference, will prove to be a failure for these reasons.

If mental states are identical with physical states, presumably the relevant physical states are various sorts of neural states. Our concepts of mental states such as thinking, sensing, and feeling are, of course, different from our concepts of neural states, of whatever sort. But that is no problem for the identity theory. As J.J.C. Smart (1962), who first argued for the identity theory, emphasized, the requisite identities do not depend on our understanding of the concepts of mental states or the meanings of mental terms. For 'a' to be identical with 'b', 'a' and 'b' must have exactly the same properties, but the terms 'a' and 'b' need not mean the same. The principle at work here is the indiscernibility of identicals: if 'A' is identical with 'B', then every property that 'A' has, 'B' has, and vice versa. This is sometimes known as Leibniz's Law.
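
Stated formally, in standard notation (the symbols are not specific to the identity theory):

$$a = b \;\rightarrow\; \forall F\,\bigl(F(a) \leftrightarrow F(b)\bigr)$$

The identity theorist's claim is thus that a mental state and the neural state with which it is identical, being one and the same state, share all their properties, even though the terms that pick the state out differ in meaning.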

But a problem does seem to arise about the properties of mental states. Suppose pain is identical with a certain firing of c-fibres. Although a particular pain is the very same state as a neural firing, we identify that state in two different ways: as a pain and as a neural firing. The state will therefore have certain properties in virtue of which we identify it as a pain and others in virtue of which we identify it as a neural firing. The properties in virtue of which we identify it as a pain will be mental properties, whereas those in virtue of which we identify it as a neural firing will be physical properties. This has seemed to many to lead to a kind of dualism at the level of the properties of mental states. Even if we reject dualism of substances and take people simply to be physical organisms, those organisms still have both mental and physical states. Similarly, even if we identify those mental states with certain physical states, those states will nonetheless have both mental and physical properties. So disallowing dualism with respect to substances and their states simply invites its reappearance at the level of the properties of those states.

There are two broad categories of mental property. Mental states such as thoughts and desires, often called 'propositional attitudes', have 'content' that can be described by 'that' clauses. For example, one can have a thought, or desire, that it will rain. These states are said to have intentional properties, or 'intentionality'. Sensations, such as pains and sense impressions, lack intentional content and have instead qualitative properties of various sorts.

The problem about mental properties is widely thought to be most pressing for sensations, since the painful quality of pains and the red quality of visual sensations seem to be irretrievably non-physical. And if mental states do actually have non-physical properties, the identity of mental states with physical states would not sustain a thoroughgoing mind-body materialism.

The Cartesian doctrine that the mental is in some way non-physical is so pervasive that even advocates of the identity theory have sometimes accepted it, for the idea that the mental is non-physical underlies, for example, the insistence by some identity theorists that mental properties are really neutral as between being mental and physical. To be neutral in this way, a property would have to be neutral as to whether it is mental at all. Only if one thought that being mental meant being non-physical would one hold that defending materialism required showing that ostensibly mental properties are neutral as regards whether or not they are mental.

But holding that mental properties are non-physical has a cost that is usually not noticed. A phenomenon is mental only if it has some distinctively mental property. So, strictly speaking, a materialist who holds that mental properties are non-physical must deny that any mental phenomena actually exist. This is the eliminative materialist position advanced by the American philosopher and critic Richard Rorty (1979).

According to Rorty (1931-2007), 'mental' and 'physical' are incompatible terms. Nothing can be both mental and physical, so mental states cannot be identical with bodily states. Rorty traces this incompatibility to our views about incorrigibility: reports of one's own mental states are taken to be incorrigible, but reports of physical occurrences are not. But he also argues that we can imagine a people who describe themselves and each other using terms just like our mental vocabulary, except that those people do not take the reports made with that vocabulary to be incorrigible. Since Rorty takes a state to be a mental state only if one's reports about it are taken to be incorrigible, his imaginary people do not ascribe mental states to themselves or each other. Nonetheless, the only difference between their language and ours is that we take as incorrigible certain reports which they do not. So their language has no less descriptive or explanatory power than ours. Rorty concludes that our mental vocabulary is idle, and that there are no distinctively mental phenomena.

This argument hinges on building incorrigibility into the meaning of the term 'mental'. If we do not, the way is open to interpret Rorty's imaginary people as simply having a different theory of mind from ours, on which reports of one's own mental states are corrigible. Their reports would thus be about mental states, as construed by their theory. Rorty's thought experiment would then show not that our mental terminology is idle, but only that this alternative theory of mental phenomena is correct. His thought experiment would thus sustain the non-eliminativist view that mental states are bodily states. Whether Rorty's argument supports his eliminativist conclusion or the standard identity theory, therefore, depends solely on whether or not one holds that the mental is in some way non-physical.

Paul M. Churchland (1981) advances a different argument for eliminative materialism. According to Churchland, the common-sense concepts of mental states contained in our present folk psychology are, from a scientific point of view, radically defective. But we can expect that eventually a more sophisticated theoretical account will replace those folk-psychological concepts, showing that mental phenomena, as described by current folk psychology, do not exist. Since that account would be integrated into the rest of science, we would have a thoroughgoing materialist treatment of all phenomena. This argument, unlike Rorty's, does not rely on assuming that the mental is non-physical.

But even if current folk psychology is mistaken, that does not show that mental phenomena do not exist, but only that they are not the way folk psychology describes them. We could conclude that they do not exist only if the folk-psychological claims that turn out to be mistaken actually define what it is for some phenomenon to be mental. Otherwise, the new theory would be about mental phenomena, and would help show that they are identical with physical phenomena. Churchland's argument, like Rorty's, depends on a special way of defining the mental, which we need not adopt. It is likely that any argument for eliminative materialism will require some such definition, without which the argument would instead support the identity theory.

Despite initial appearances, it has been claimed that the distinctive properties of sensations are neutral as between being mental and physical: in a term borrowed from the English philosopher and classicist Gilbert Ryle (1900-76), they are 'topic neutral'. My having a sensation of red consists in my being in a state that is similar, in respects that we need not specify, to something that occurs in me when I am in the presence of certain stimuli. Because the respect of similarity is not specified, the property is neither distinctively mental nor distinctively physical. But everything is similar to everything else in some respect or other. So leaving the respect of similarity unspecified makes this account too weak to capture the distinguishing properties of sensations.

A more sophisticated reply to the difficulty about mental properties is due independently to the Australian philosopher David Malet Armstrong (1926-) and the American philosopher David Lewis (1941-2002), who argued that for a state to be a particular sort of intentional state or sensation is for that state to bear characteristic causal relations to other particular occurrences. The properties in virtue of which we identify states as thoughts or sensations will still be neutral as between being mental and physical, since anything can bear a causal relation to anything else. But causal connections have a better chance than similarity in some unspecified respect of capturing the distinguishing properties of sensations and thoughts.

This causal theory is appealing, but it is misguided to attempt to construe the distinctive properties of mental states as neutral between being mental and physical. To be neutral as regards being mental or physical is to be neither distinctively mental nor distinctively physical. But since thoughts and sensations are distinctively mental states, for a state to be a thought or a sensation is perforce for it to have some characteristically mental property. We inevitably lose the distinctively mental if we construe these properties as being neither mental nor physical.

Not only is the topic-neutral construal misguided: the problem it was designed to solve is equally so. That problem stemmed from the idea that the mental must have some non-physical aspect, if not at the level of people or their mental states, then at the level of the distinctively mental properties of those states. However, it should be mentioned that properties can be more or less complicated. In the sentence 'Walter is married to Julie', for example, we are attributing to Walter the relational property of being married to Julie, unlike the simple property attributed in the sentence 'Walter is bald'. Consider that simpler sentence. The word 'Walter' is a bit of language ~ a name of some individual human being ~ and no one would be tempted to confuse the word with what it names. The expression 'is bald' is also a bit of language ~ philosophers call it a 'predicate' ~ and it brings to our attention some property or feature which, if the sentence is true, is possessed by Walter. Understood in this way, a property is not itself linguistic, though it is expressed or conveyed by something that is, namely a predicate. It might be said that a property is a real feature of the world, and that it should be contrasted just as sharply with any predicate we use to express it as the name 'Walter' is contrasted with the person himself. Just what sort of ontological status should be accorded to properties is controversial, and the issue is sharpened by the 'anomalous monism' of the American philosopher Donald Herbert Davidson (1917-2003), who adopts a position that explicitly repudiates reductive physicalism yet purports to be a version of materialism. Davidson holds that although token mental events and states are identical with token physical events and states, mental 'types' ~ i.e., kinds and/or properties ~ are neither identical with, nor nomically co-extensive with, physical types. His argument for this position relies largely on the contention that the correct assignment of mental and actional properties to a person is always a holistic matter, involving a global, temporally diachronic 'intentional interpretation' of the person. But as many philosophers have in effect pointed out, accommodating the claims of materialism evidently requires more than just token mental/physical identities. Mentalistic explanation presupposes not merely that mental events are causes, but also that they have causal/explanatory relevance as mental ~ i.e., relevance insofar as they fall under mental kinds or types. It is questionable whether Davidson's position, which denies that there are strict psychological or psychophysical laws, can accommodate the causal/explanatory relevance of the mental qua mental without collapsing into 'epiphenomenalism' with respect to mental properties.

But the idea that the mental is in some respect non-physical cannot be assumed without argument. Plainly, the distinctively mental properties of mental states are unlike any other properties we know about. Only mental states have properties that are at all like the qualitative properties of sensations, or anything like the intentional properties of thoughts and desires. However, this does not show that mental properties are not physical properties; not all physical properties are like the standard cases, so mental properties might still be special kinds of physical properties. It is question-begging to assume otherwise. The doctrine that mental properties are non-physical is simply an expression of the Cartesian doctrine that the mental is automatically non-physical.

It is sometimes held that properties should count as physical properties only if they can be defined using the terms of physics. This is far too restrictive. Nobody would hold that to reduce biology to physics, for example, we must define all biological properties using only terms that occur in physics. And even putting 'reduction' aside, if certain biological properties could not be so defined, that would not mean that those properties were in any way non-physical. The sense of 'physical' that is relevant here must be broad enough to include not only biological properties, but also most common-sense, macroscopic properties. Bodily states are uncontroversially physical in the relevant way. So we can recast the identity theory as asserting that mental states are identical with bodily states.

In the course of reaching conclusions about the origin and limits of knowledge, Locke had occasion to concern himself with topics which are of philosophical interest in themselves. One of these is the question of identity, which includes, more specifically, the question of personal identity: what are the criteria by which a person at one time is numerically the same person as a person encountered at another time? Locke points out that in asking whether 'this' is what was here before, it matters what kind of thing 'this' is meant to be. If 'this' is meant as a mass of matter, then it is what was here before so long as it consists of the same material particles; but if it is meant as a living body, then its consisting of the same particles does not matter and the case is different. 'A colt grown up to a horse, sometimes fat, sometimes lean, is all the while the same horse, though . . . there may be a manifest change of the parts.' So, when we think about personal identity, we need to be clear about a distinction between two things which 'the ordinary way of speaking runs together': the idea of 'man' and the idea of 'person'. As with any other animal, the identity of a man consists 'in nothing but a participation of the same continued life, by constantly fleeting particles of matter, in succession vitally united to the same organized body'. However, the idea of a person is not that of a living body of a certain kind. A person is a 'thinking intelligent being, that has reason and reflection', and such a being 'will be the same self as far as the same consciousness can extend to actions past or to come'. Locke is at pains to argue that this continuity of self-consciousness does not necessarily involve the continuity of some immaterial substance, in the way that Descartes had held. For all we know, says Locke, consciousness and thought may be powers which can be possessed by 'systems of matter fitly disposed', and even if this is not so, the question of the identity of a person is not the same as the question of the identity of an immaterial substance. For just as the identity of a horse can be preserved through changes of matter, depending not on the identity of a continued material substance but on its unity of one continued life, so the identity of a person does not depend on the continuity of an immaterial substance. The unity of one continued consciousness does not depend on its being 'annexed' only to one individual substance, '[and not] . . . continued in a succession of several substances'. For Locke, then, personal identity consists in an identity of consciousness, and not in the identity of some substance whose essence it is to be conscious.

To approach causal mechanisms and connections of meaning, it will help to take a historical route and focus on the terms in which analytical philosophers of mind first began to discuss psychoanalytic explanation seriously. These terms were provided by the long-standing and still unconcluded debate over cause and meaning in psychoanalysis.

It is not hard to see why psychoanalysis should be viewed in terms of cause and meaning. On the one hand, Freud's theories introduce a panoply of concepts which appear to characterize mental processes as mechanical and non-meaningful. These include Freud's neurological model of the mind, as outlined in his 'Project for a Scientific Psychology'; more broadly, his 'economic' description of the mental as having properties of force or energy, e.g., as 'cathecting' objects; and his account of the mechanism of repression. So it would seem that psychoanalytic explanation employs terms logically at variance with those of ordinary, common-sense psychology, where mechanisms do not play a central role. But on the other hand, and equally striking, there is the fact that psychoanalysis proceeds through interpretation and engages in a relentless search for meaningful connections in mental life ~ something that even a superficial examination of The Interpretation of Dreams, or The Psychopathology of Everyday Life, cannot fail to impress upon one. Psychoanalytic interpretation adduces meaningful connections between disparate and often apparently dissociated mental and behavioural phenomena, directed by the goal of 'thematic coherence': that of giving mental life the sort of unity that we find in a work of art or a cogent narrative. In this respect, psychoanalysis would seem to adopt as its central plank the most salient feature of ordinary psychology, its insistence on relating actions to reasons for them through contentful characterizations of each that make their connection seem rational, or intelligible: a goal that seems remote from anything found in the physical sciences.

The application to psychoanalysis of the perspective afforded by the cause-meaning debate can also be seen as a natural consequence of another factor, namely the semi-paradoxical nature of psychoanalysis' explananda. With respect to all irrational phenomena, something like a paradox arises. Irrationality involves a failure of rational connectedness and hence of meaningfulness, and so, if it is to have an explanation of any kind, relations that are non-meaningful and causal appear to be needed. And yet, as observed above, it would seem that, in offering explanations for irrationality ~ plugging the 'gaps' in consciousness ~ what psychoanalytic explanation hinges on is precisely the postulation of further, albeit non-apparent, connections of meaning.

For these two reasons, then ~ the logical heterogeneity of its explanations and the ambiguous status of its explananda ~ it may seem that an examination in terms of the concepts of cause and meaning will provide the key to a philosophical elucidation of psychoanalysis. The possible views of psychoanalytic explanation that may result from such an examination can be arranged along two dimensions: (1) psychoanalytic explanation may be viewed, after reconstruction, as either causal and non-meaningful, or meaningful and non-causal, or as comprising both meaningful and causal elements in various combinations; and (2) psychoanalytic explanation may then be viewed, on each of these reconstructions, as either licensed or invalidated depending on one's view of the logical nature of psychology.

So, for instance, some philosophical discussions infer that psychoanalytic explanation is void, simply on the grounds that it is committed to causality in psychology. On another, opposed view, it is the virtue of psychoanalytic explanation that it imputes causal relations, since only causal relations can be relevant to explaining the failures of meaningful psychological connections. On yet another view, it is psychoanalysis' commitment to meaning which is its great fault: it is held that the stories that psychoanalysis tries to tell do not really, on examination, explain successfully. And so on.

It is fair to say that the debates between these various positions fail to establish anything definite about psychoanalytic explanation. There are two reasons for this. First, there are several different strands in Freud's writings, each of which may be drawn on, apparently conclusively, in support of each alternative reconstruction. Secondly, preoccupation with a wholly general problem in the philosophy of mind, that of cause and meaning, distracts attention from the distinguishing features of psychoanalytic explanation. At this point, and in order to prepare the way for a plausible reconstruction of psychoanalytic explanation, it is appropriate to take a step back and take a fresh look at the cause-meaning issue in the philosophy of psychoanalysis.

Suppose, first, that some sort of cause-meaning compatibilism ~ such as that of the American philosopher Donald Davidson (1917-2003) ~ holds for ordinary psychology. On this view, psychological explanation requires some sort of parallelism of causal and meaningful connections, grounded in the idea that psychological properties play causal roles determined by their content. Nothing in psychoanalytic explanation is inconsistent with this picture: after his abandonment of the early 'Project', Freud consistently viewed psychology as autonomous relative to neurophysiology, and at the same time as congruent with a broadly naturalistic world-view. 'Naturalism' is often used interchangeably with 'physicalism' and 'materialism', though each of these hints at more specific doctrines. Thus 'physicalism' suggests that, among the natural sciences, there is something especially fundamental about physics, and 'materialism' has connotations going back to eighteenth- and nineteenth-century views of the world as essentially made of material particles whose behaviour is fundamental for explaining everything else. Moreover, 'naturalism' with respect to some realm is the view that everything that exists in that realm, and all the events that take place in it, are empirically accessible features of the world. Sometimes naturalism is taken to mean that some realm can in principle be understood by appeal to the laws and theories of the natural sciences, but one must be careful, since naturalism does not by itself imply anything about reduction. Historically, 'natural' contrasts with 'supernatural', but in the context of contemporary philosophy of mind, where debate centres on the possibility of explaining mental phenomena as part of the natural order, it is the non-natural rather than the supernatural that is the contrasting notion. The naturalist holds that mental phenomena can be so explained, while the opponent of naturalism thinks otherwise, though it is not intended that opposition to naturalism commits one to anything supernatural. Nonetheless, one should not take naturalism about a realm as committing one to any sort of reductive explanation of that realm, though there are such commitments in the use of 'physicalism' and 'materialism'.

If psychoanalytic explanation gives the impression that it imputes bare, meaning-free causality, this results from attending to only half the story, and from misunderstanding what psychoanalysis means when it talks of psychological mechanisms. The economic descriptions of mental processes that psychoanalysis provides are never replacements for, but themselves always presuppose, characterizations of mental processes in terms of meaning. Mechanisms in psychoanalytic contexts are simply processes whose operation cannot be reconstructed as instances of rational functioning (they are what we might by preference call mental activities, by contrast with actions). Psychoanalytic explanation's postulation of mechanisms should not, therefore, be regarded as a regrettable and expungable incursion of scientism into Freud's thought, as is often claimed.

Suppose, alternatively, that hermeneuticists such as Habermas ~ who follow Dilthey in regarding psychology as an interpretative practice to which the concepts of the physical sciences are alien ~ are correct in thinking that connections of meaning are misrepresented through being described as causal. Again, this does not impact negatively on psychoanalytic explanation since, as just argued, psychoanalytic explanations nowhere impute meaning-free causation. Nothing is lost for psychoanalytic explanation if causation is excised from the psychological picture.

The conclusion must be that psychoanalytic explanation is at bottom indifferent to the general meaning-cause issue. The core of psychoanalysis consists in its tracing of meaningful connections, with no greater or lesser commitment to causality than is involved in ordinary psychology. (This helps to set the stage ~ pending appropriate clinical validation ~ for psychoanalysis to claim as much truth for its explanations as ordinary psychology claims for its own.) The true key to psychoanalytic explanation is, rather, its attribution of special kinds of mental states, not recognized in ordinary psychology, whose relations to one another do not have the form of patterns of inference or practical reasoning.

In the light of this, it is easy to understand why some compatibilists and hermeneuticists assert that their own view of psychology is uniquely consistent with psychoanalytic explanation. Compatibilists are right to think that, in order to provide for psychoanalytic explanation, it is necessary to allow mental connections that are unlike the connections of reasons to the actions that they rationalize, or to the beliefs that they support; and that, in outlining such connections, psychoanalytic explanation must outstrip the resources of ordinary psychology, which does attempt to force as much as possible into the mould of practical reasoning. Hermeneuticists, for their part, are right to think that it would be futile to postulate connections which were nominally psychological but not characterized in terms of meaning, and that psychoanalytic explanation does not respond to the 'paradox' of irrationality by abandoning the search for meaningful connections.

Compatibilists are, however, wrong to think that non-rational but meaningful connections require the psychological order to be conceived as a causal order. The hermeneuticist is free to postulate psychological connections that are determined by meaning but not by rationality: it is coherent to suppose that there are connections of meaning that are not bona fide rational connections, without these being causal. Meaningfulness is a broader concept than rationality. (Sometimes this thought has been expressed, though not helpfully, by saying that Freud discovered the existence of 'neurotic rationality'.) Although an assumption of rationality is doubtless necessary to make sense of behaviour in general, it does not need to be brought into play in making sense of each instance of behaviour. Hermeneuticists, in turn, are wrong to think that the compatibilist view of psychology as causal signals a confusion of meaning with causality, or that it must lead the compatibilist to deny that there is any qualitative difference between rational and irrational psychological connections.

All the same, the last two decades have seen extraordinary changes in the psychology of the sciences. 'Cognitive psychology', which focuses on higher mental processes like reasoning, decision making, problem solving, language processing and higher-level processing generally, has become ~ perhaps the ~ dominant paradigm among experimental psychologists, while behaviouristically oriented approaches have gradually fallen into disfavour.

The relationship between physical behaviour and agential behaviour is controversial. On some views, all 'actions' are identical to physical changes in the subject's body, although some kinds of physical behaviour, such as reflexes, are uncontroversially not kinds of agential behaviour. On other views, a subject's action must involve some physical change, but is not identical to it.

Both physical and agential behaviours could be understood in the widest sense. Anything a person can do ~ even calculating in his head, for instance ~ could be regarded as agential behaviour. Likewise, any physical change in a person’s body ~ even the firing of a certain neuron, for instance ~ could be regarded as physical behaviour.

Of course, to claim that the mind is ‘nothing over and above’ such-and-such kinds of behaviour, construed as either physical or agential behaviour in the widest sense, is not necessarily to be a behaviourist. The theory that the mind is a series of volitional acts ~ a view close to the idealist position of George Berkeley (1685-1753) ~ and the theory that the mind is a certain configuration of neuronal events, while both controversial, are not forms of behaviourism.

Standing alongside such accounts is anomalous monism, which rests on monism: the view that there is only one kind of substance underlying all objects, changes and processes. 'Monism' is generally used in contrast to 'dualism', though one can also think of it as denying what might be called 'pluralism' ~ a view, often associated with Aristotle, which claims that there are a number of substances. Against the background of modern science, monism is usually understood to be a form of 'materialism' or 'physicalism': that is, the fundamental properties of matter and energy as described by physics are counted as the only properties there are.

The position in the philosophy of mind known as 'anomalous monism' has its historical origins in the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804), but it is universally identified with the American philosopher Donald Davidson (1917-2003), and it was he who coined the term. Davidson maintained that one can be a monist ~ indeed, a physicalist ~ about the fundamental nature of things and events, while also asserting that there can be no full 'reduction' of the mental to the physical. (This is sometimes expressed by saying that there can be an ontological, though not a conceptual, reduction.) Davidson thinks that complete knowledge of the brain and of any related neurophysiological systems that support the mind's activities would not itself be knowledge of such things as belief, desire, experience and the rest of our mentalistic notions. This is not because he thinks that the mind is somehow a separate kind of existence: anomalous monism is, after all, monism. Rather, it is because the nature of mental phenomena rules out a priori that there will be law-like regularities connecting mental phenomena and physical events in the brain, and, without such laws, there is no real hope of explaining the mental via the physical structure of the brain.

All in all, one central goal of the philosophy of science is to provide explicit and systematic accounts of the theories and explanatory strategies explored in the sciences. Another common goal is to construct philosophically illuminating analyses or explanations of central theoretical concepts involved in one or another science. In the philosophy of biology, for example, there is a rich literature aimed at understanding teleological explanation, and there has been a great deal of work on the structure of evolutionary theory and on its crucial concepts. If concepts of the simple (observational) sort were internal physical structures that had, in this sense, an information-carrying function ~ a function they acquired during learning ~ then instances of these structure types would have a content that (like a belief) could be either true or false. Yet any information-carrying structure carries all kinds of information: if, for example, it carries the information 'A', it must also carry the information 'A or B'. Conceivably, the process of learning is a process in which a single piece of this information is selected for special treatment, thereby becoming the semantic content ~ the meaning ~ of subsequent tokens of that structure type. Just as we conventionally give artefacts and instruments information-providing functions, thereby making their flashing lights, pointer readings, and so forth representations of the conditions in the world in which we are interested, so learning converts neural states that carry information ~ 'pointer readings' in the head, so to speak ~ into structures that have the function of providing some vital piece of the information they carry. When this process occurs in the ordinary course of learning, the functions in question develop naturally. They do not, as do the functions of instruments and artefacts, depend on the intentions, beliefs, and attitudes of users. We do not give brain structures these functions; they get them by themselves, in some natural way, either (in the case of the senses) from their selectional history or (in the case of thought) from individual learning. The result is a network of internal representations that have (in different ways) the power to represent experience and belief.

It is important to understand that this approach to 'thought' and 'belief', the approach that conceives of them as forms of internal representation, is not a version of 'functionalism' ~ at least, not if this widely held theory is understood, as it often is, as a theory that identifies mental properties with functional properties. For functional properties have to do with the way something does, in fact, behave, with its syndrome of typical causes and effects. An informational model of belief, in order to account for misrepresentation, needs something more than a structure that provides information. It needs something that has that as its function ~ something that is supposed to provide information. As Sober (1985) comments, for an account of the mind we need functionalism with the function, the 'teleological', put back into it.

Philosophers need not (and typically do not) assume that there is anything wrong with the science they are studying. Their goal is simply to provide accounts of the theories, concepts and explanatory strategies that scientists are using ~ accounts that are more explicit, systematic and philosophically sophisticated than the often rather rough-and-ready accounts offered by the scientists themselves.

Cognitive psychology is in many ways a curious and puzzling science. Many of the theories put forward by cognitive psychologists make use of a family of 'intentional' concepts ~ like believing that 'p', desiring that 'q', and representing 'r' ~ which do not appear in the physical or biological sciences, and these intentional concepts play a crucial role in many of the explanations offered by these theories.

In discussions of intentionality, the paradigm cases discussed are usually beliefs, or sometimes beliefs and desires; however, the biologically most basic forms of intentionality are in perception and in intentional action. These also have certain formal features which are not common to beliefs and desires. Consider a case of perceptual experience. Suppose that I see my hand in front of my face. What are the conditions of satisfaction? First, the perceptual experience of the hand in front of my face has as its condition of satisfaction that there be a hand in front of my face. Thus far, the condition of satisfaction is the same as that of the belief that there is a hand in front of my face. But with perceptual experience there is this difference: in order that the intentional content be satisfied, the fact that there is a hand in front of my face must cause the very experience whose intentional content is that there is a hand in front of my face. This has the consequence that perception has a special kind of condition of satisfaction that we might describe as 'causally self-referential'. The full conditions of satisfaction of the perceptual experience are, first, that there be a hand in front of my face, and second, that the fact that there is a hand in front of my face causes the very experience of which those conditions of satisfaction form a part. We can represent this in the form S(p), thus:

Visual experience (that there is a hand in front of my face, and that the fact that there is a hand in front of my face is causing this very experience).

Furthermore, visual experiences have a kind of conscious immediacy not characteristic of beliefs and desires. A person can literally be said to have beliefs and desires while sound asleep. But one can only have visual experiences of a non-pathological kind when one is fully awake and conscious, because the visual experiences are themselves forms of consciousness.

People's decisions and actions are explained by appeal to their beliefs and desires. Perceptual processes are said to result in mental states which represent (or sometimes misrepresent) one or another aspect of the cognitive agent's environment. Other theorists have offered analogous accounts, differing in detail; perhaps the most crucial idea in all of this is the one about representations. There is perhaps a sense in which what happens at the level of the retina constitutes, as a result of the processes occurring in stimulation, some kind of representation of what produces that stimulation, and thus some kind of representation of the objects of perception. Or so it may seem, if one attempts to describe the relation between the structure and characteristics of the object of perception and the structure and nature of the retinal processes. One might say that the nature of that relation is such as to provide information about the part of the world perceived, in the sense of 'information' presupposed when one says that the rings in the cross-section of a tree's trunk provide information about its age. This is because there is an appropriate causal relation between the two things, which makes it impossible for the correlation to be a matter of chance. Subsequent processing can then be thought of as carried out on what is provided in the representations in question.

However, if there are such representations, they are not representations for the perceiver. It is the thought that perception involves representations of that kind which produced the old, and now largely discredited, philosophical theories of perception which suggested that perception is a matter, primarily, of an apprehension of mental states of some kind, e.g., sense-data, which are representatives of perceptual objects, either by being caused by them or by being in some way constitutive of them. Also, if it be said that the idea of information so invoked indicates that there is a sense in which the processes of stimulation can be said to have content ~ a non-conceptual content, distinct from the content provided by the subsumption of what is perceived under concepts ~ it must be emphasised that that content is not the perceiver's. What the information-processing story provides is, at best, a more adequate categorization than previously available of the causal processes involved. That may be important, but more should not be claimed for it than there is. If, in a given case of perception, one can be said to have an experience as of an object of a certain shape and kind related to another object, it is because there is presupposed in that perception the possession of concepts of objects, and more particularly, a concept of space and of how objects occupy space.

Nonetheless, though cognitive psychologists occasionally say a bit about the nature of intentional concepts and the explanations that exploit them, their comments are rarely systematic or philosophically illuminating. Thus it is hardly surprising that many philosophers have seen cognitive psychology as fertile ground for the sort of careful descriptive work that is done in the philosophy of biology and the philosophy of physics. The American philosopher of mind Jerry Fodor's (1935-) The Language of Thought (1975) was a pioneering study in the genre. Philosophers have also done important and widely discussed work in what might be called the 'descriptive philosophy of cognitive psychology'.

These philosophical accounts of cognitive theories and the concepts they invoke are generally much more explicit than the accounts provided by psychologists, and they inevitably smooth over some of the rough edges of scientists' actual practice. But if the account they give of cognitive theories diverges significantly from the theories that psychologists actually produce, then the philosophers have just got it wrong. There is, however, a very different way in which philosophers have approached cognitive psychology. Rather than merely trying to characterize what cognitive psychology is actually doing, some philosophers try to say what it should and should not be doing. Their goal is not to explicate scientific practice, but to criticize and improve it. The most common target of this critical approach is the use of intentional concepts in cognitive psychology. Intentional notions have been criticized on various grounds. Two prominent considerations are that they fail to supervene on the physiology of the cognitive agent, and that they cannot be 'naturalized'.

Perhaps the easiest way to make the point about supervenience is to use a thought experiment of the sort originally proposed by the American philosopher Hilary Putnam (1926-). Suppose that in some distant corner of the universe there is a planet, Twin Earth, which is very similar to our own planet. On Twin Earth there is a person who is an atom-for-atom replica of J.F. Kennedy. Now the President J.F. Kennedy who lives on Earth believes that the Rev. Martin Luther King Jr. was born in Tennessee. If you asked him, 'Was the Rev. Martin Luther King Jr. born in Tennessee?', in all probability he would answer yes. Twin-Kennedy would respond in the same way, but not because he has a belief about our Rev. Martin Luther King Jr.: his beliefs are about Twin-Luther. And if Twin-Luther was not born in Tennessee while our Luther was, then Kennedy's belief is true while Twin-Kennedy's is false. What all this is supposed to show is that two people can share all their physiological properties without sharing all their intentional properties. To turn this into a problem for cognitive psychology, two additional premises are needed. The first is that cognitive psychology attempts to explain behaviour by appeal to people's intentional properties. The second is that psychological explanations should not appeal to properties that fail to supervene on an organism's physiology. (Variations on this theme can be found in the American philosopher Jerry Fodor (1987).)

The thesis that the mental is supervenient on the physical ~ roughly, the claim that the mental character of a thing is wholly determined by its physical nature ~ has played a key role in the formulation of some influential positions on the 'mind-body' problem, in particular versions of non-reductive 'physicalism'. It has figured in arguments about the mental, and has been used to devise solutions to some central problems about the mind ~ for example, the problem of mental causation.

The idea of supervenience first appeared in moral philosophy: the thought was that there could be no difference in a moral respect without a difference in some descriptive, or non-moral, respect. Evidently, the idea can be generalized so as to apply to any two sets of properties (to secure greater generality it is more convenient to speak of properties than of predicates). The American philosopher Donald Davidson (1970) was perhaps the first to introduce supervenience into discussions of the mind-body problem, when he wrote: ' . . . mental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect.' Following the British philosopher George Edward Moore (1873-1958) and the English moral philosopher Richard Mervyn Hare (1919-2003), from whom he avowedly borrowed the idea of supervenience, Davidson went on to assert that supervenience in this sense is consistent with the irreducibility of supervenient properties to their 'subvenient', or 'base', properties: 'Dependence or supervenience of this kind does not entail reducibility through law or definition . . . '

Thus, three ideas have come to be closely associated with supervenience: (1) property covariation (if two things are indiscernible in base properties, they must be indiscernible in supervenient properties); (2) dependence (supervenient properties are dependent on, or determined by, their subvenient bases); and (3) non-reducibility (the property covariation and dependence involved in supervenience can obtain even if supervenient properties are not reducible to their base properties).

Nonetheless, for the moment at least, supervenience of the mental ~ in the form of strong supervenience, or at least global supervenience ~ is arguably a minimum commitment of physicalism (a sketch of these two formulations is given below). But can we think of the thesis of mind-body supervenience itself as a theory of the mind-body relation ~ that is, as a solution to the mind-body problem?
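
The two formulations just mentioned can be stated more precisely. What follows is only a hedged sketch in the notation standard in the supervenience literature (formulations commonly associated with Jaegwon Kim), not a quotation from the text above; the symbols A and B, standing for the supervenient and base families of properties, are introduced here purely for illustration.

% Strong supervenience of a property family A on a base family B:
% anything with an A-property has some B-property that necessitates it.
\[
\Box\,\forall x\,\forall F \in A\;\bigl[\,Fx \;\rightarrow\; \exists G \in B\,\bigl(Gx \,\wedge\, \Box\,\forall y\,(Gy \rightarrow Fy)\bigr)\bigr]
\]

% Global supervenience: worlds indiscernible in the distribution of
% B-properties are indiscernible in the distribution of A-properties.
\[
\forall w_{1}\,\forall w_{2}\;\bigl[\, w_{1} \approx_{B} w_{2} \;\rightarrow\; w_{1} \approx_{A} w_{2} \,\bigr]
\]

On the first reading, any individual with a mental property has some physical property that necessitates it; on the second, no two possible worlds can differ mentally without differing physically. Either reading captures the 'property covariation' idea of clause (1) above while leaving clause (3), non-reducibility, open.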

It would seem that any serious theory addressing the mind-body problem must say something illuminating about the nature of psychophysical dependence, or about why, contrary to common belief, there is no such dependence either way. Consider the moral case: the ethical intuitionist will say that the supervenience, and also the dependence, is a brute fact discerned through moral intuition, while the prescriptivist will attribute the supervenience to some form of consistency requirement on the language of evaluation and prescription. And distinct from both of these is mereological supervenience, namely the supervenience of the properties of a whole on the properties and relations of its parts. What all this shows is that there is no single type of dependence relation common to all cases of supervenience; supervenience holds in different cases for different reasons, and does not represent a type of dependence that can be put alongside causal dependence, meaning dependence, mereological dependence, and so forth.

There seems to be a promising strategy for turning the supervenience thesis into a more substantive theory of mind, and it is to explicate mind-body supervenience as a special case of mereological supervenience ~ that is, the dependence of the properties of a whole on the properties and relations characterizing its proper parts. Mereological dependence does seem to be a special form of dependence that is metaphysically sui generis and highly important. If one takes this approach, one would have to explain psychological properties as macroproperties of a whole organism that covary, in appropriate ways, with its microproperties, i.e., the way its constituent organs, tissues, and so forth are organized and function. This more specific supervenience thesis may well be a serious theory of the mind-body relation that can compete with the classic options in the field.

On this topic, as with many topics in philosophy, there is a distinction to be made between (1) certain vague, partially inchoate, pre-theoretic ideas and beliefs about the matter at hand, and (2) certain more precise, more explicit, doctrines or theses that are taken to articulate or explicate those pre-theoretic ideas and beliefs. There are various potential ways of precisifying our pre-theoretic conception of a physicalist or materialist account of mentality, and the question of how best to do so is itself a matter for ongoing, dialectic, philosophical inquiry.

The view concerns, in the first instance at least, the question of how we, as ordinary human beings, in fact go about ascribing beliefs to one another. The idea is that we do this on the basis of our knowledge of a common-sense theory of psychology. The theory is not held to consist in a collection of grandmotherly sayings, such as 'once bitten, twice shy'. Rather, it consists in a body of generalizations relating psychological states to each other, to input from the environment, and to actions. Its generalizations include the following:

(1) (x)(p) (If x fears that p, then x desires that not-p.)

(2) (x)(p) (If x hopes that p and x discovers that p, then x is pleased that p.)

(3) (x)(p)(q) (If x believes that p, and x believes that if p then q, then, barring confusion, distraction and so forth, x believes that q.)

(4) (x)(p)(q) (If x desires that p, and x believes that if q then p, and x is able to bring it about that q, then, barring conflicting desires or preferred strategies, x brings it about that q.)

All of these generalizations should be understood as containing ceteris paribus clauses. (1), for example, applies most of the time, but not invariably. Adventurous types often enjoy the adrenal thrill produced by fear; this leads them, on occasion, to desire the very state of affairs that frightens them. Analogously with (3): a subject who believes that 'p' and believes that if 'p' then 'q' would typically infer that 'q'. But certain atypical circumstances may intervene: subjects may become confused or distracted, or they may find the prospect of 'q' so awful that they dare not allow themselves to believe it. The ceteris paribus nature of these generalizations is not usually considered to be problematic, since atypical circumstances are, of course, atypical, and the generalizations are applicable most of the time.

We apply this psychological theory to make inferences about people's beliefs, desires and so forth. If, for example, we know that Julie believes that if she is to be at the airport at four, then she should get a taxi at half past two, and she believes that she is to be at the airport at four, then we will predict, using (3), that Julie will infer that she should get a taxi at half past two.
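
As an illustration of how a generalization such as (3) does predictive work, here is a minimal sketch in Python. It is not drawn from the text: the names (Conditional, apply_rule_3) and the toy propositions are invented for the example. It simply chains generalization (3) over a set of ascribed beliefs.

# A minimal sketch (not from the text) of how generalization (3) licenses
# the prediction about Julie. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Conditional:
    antecedent: str   # the 'p' in 'if p then q'
    consequent: str   # the 'q'

def apply_rule_3(beliefs, conditionals):
    """(3): if x believes p and believes 'if p then q', then, barring
    confusion or distraction, x believes q. Chain until nothing new follows."""
    inferred = set(beliefs)
    changed = True
    while changed:
        changed = False
        for c in conditionals:
            if c.antecedent in inferred and c.consequent not in inferred:
                inferred.add(c.consequent)
                changed = True
    return inferred

julie_beliefs = {"Julie is to be at the airport at four"}
julie_conditionals = {
    Conditional("Julie is to be at the airport at four",
                "Julie should get a taxi at half past two"),
}

print(apply_rule_3(julie_beliefs, julie_conditionals))
# Predicts, ceteris paribus, that Julie will infer that she should
# get a taxi at half past two.

What the code cannot capture is precisely the ceteris paribus clause: the inference goes through only 'barring confusion, distraction and so forth', which is why the generalizations of common-sense psychology are hedged rather than exceptionless.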

The Theory-Theory, as it is called, is an empirical theory addressing the question of our actual knowledge of beliefs. Taken in its purest form, it addresses both first- and third-person knowledge: we know about our own beliefs and those of others in the same way, by application of common-sense psychological theory in both cases. However, it is not very plausible to hold that we always ~ or indeed usually ~ know our own beliefs by way of theoretical inference. Since it is an empirical theory concerning one of our cognitive abilities, the Theory-Theory is open to psychological scrutiny. Various issues arise concerning the hypothesized common-sense psychological theory: for example, we need to know whether it is known consciously or unconsciously. Research has revealed that three-year-old children are reasonably good at inferring the beliefs of others on the basis of actions, and at predicting actions on the basis of beliefs that others are known to possess. However, there is one area in which three-year-olds' psychological reasoning differs markedly from adults'. Tests of the sort known as 'False Belief Tests' reveal largely consistent results. Three-year-old subjects witness a scenario in which a child, Billy, sees his mother place some biscuits in a biscuit tin. Billy then goes out to play, and, unseen by him, his mother removes the biscuits from the tin and places them in a jar, which is then hidden in a cupboard. When asked 'Where will Billy look for the biscuits?', the majority of three-year-olds answer that Billy will look in the jar in the cupboard ~ where the biscuits actually are, rather than where Billy saw them being placed. On being asked 'Where does Billy think the biscuits are?', they again tend to answer 'in the cupboard' rather than 'in the tin'. Three-year-olds thus appear to have some difficulty attributing false beliefs to others in cases in which it would be natural for adults to do so. However, it does not appear that three-year-olds lack the idea of false belief in general, nor does it appear that they struggle with attributing false beliefs in other kinds of situation. For example, they have little trouble distinguishing between dreams and play, on the one hand, and true beliefs or claims on the other. By the age of four and a half years, most children pass the False Belief Tests fairly consistently. There is as yet no generally accepted theory of why three-year-olds fare so badly with false belief tests, nor of what this reveals about their conception of belief.

Recently some philosophers and psychologists have put forward what they take to be an alternative to the Theory-Theory: the Simulation Theory, on which we work out what others believe and will do by imaginatively putting ourselves in their position. However, the challenge does not end there. We need also to consider the vital element of making appropriate adjustments for differences between one's own psychological states and those of the other. And it is implausible to think that simulation alone will achieve this in every such case.

The behavioural manifestations of beliefs, desires, and intentions are enormously varied. When we move away from perceptual beliefs, the links with behaviour become intricate and indirect: the expectations I form on the basis of a particular belief reflect the influence of numerous other opinions, and my actions are shaped by the totality of my preferences and all those opinions which have a bearing upon them. The causal processes that produce my beliefs reflect my opinions about those processes, about their reliability and the interference to which they are subject. Thus, behaviour justifies the ascription of a particular belief only by helping to warrant a more inclusive interpretation of the overall cognitive position of the individual in question. Psychological description, like translation, is a 'holistic' business. And once this is taken into account, it is all the less likely that a common physical trait will be found which grounds all instances of the same belief. The ways in which all of our propositional attitudes interact in the production of behaviour reinforce the anomalous character of the mental and render any sort of reduction of the mental to the physical impossible. This is not meant as a practical procedure; but the generalization from translation to interpretation has made this holism central to influential accounts of the mind.

Simulation Theory and Theory-Theory are two, as many think competing, views of the nature of our common-sense, propositional-attitude explanations of action. For example, when we say that our neighbour cut down his apple tree because he believed that it was ruining his patio and did not want it ruined, we are offering a typical common-sense explanation of his action in terms of his beliefs and desires. But, even though wholly familiar, it is not clear what kind of explanation is at issue. On one view, the attribution of beliefs and desires is taken as the application to actions of a theory which, in its informal way, functions very much like theoretical explanations in science. This is known as the 'theory-theory' of everyday psychological explanation. In contrast, it has been argued that our propositional-attitude attributions are not so much theoretical claims as reports of a kind of 'simulation'. On such a 'simulation theory' of the matter, we decide what our neighbour will do (and thereby why he did what he did) by imagining ourselves in his position and deciding what we would do.

The Simulation Theorist should probably concede that simulations need to be backed up by independent means of discovering the psychological states of others. But they need not concede that these independent means take the form of a theory. Rather, they might suggest that we can get by with some rules of thumb, or with straightforward inductive reasoning of a general kind.

A second and related difficulty with the Simulation Theory concerns our capacity to attribute beliefs that are too alien to be easily simulated: beliefs of small children, or of psychotics, or bizarre beliefs deeply suppressed in the unconscious. The small child refuses to sleep in the dark: he is afraid that the Wicked Witch will steal him away. No matter how many adjustments we make, it may be hard for mature adults to get their own psychological processes, even in pretend play, to mimic the production of such beliefs. For the Theory-Theory, alien beliefs are not particularly problematic: so long as they fit into the basic generalizations of the theory, they will be inferable from the evidence. Thus the Theory-Theory can account better for our ability to discover bizarre and alien beliefs than can the Simulation Theory.

The Theory-Theory and the Simulation Theory are not the only proposals about knowledge of belief. A third view has its origins in the Austrian philosopher Ludwig Wittgenstein (1889-1951). On this view both the Theory-Theory and the Simulation Theory attribute too much psychologizing to our common-sense psychology. Knowledge of other minds is, according to this alternative picture, more observational in nature. Beliefs, desires and feelings are made manifest to us in the speech and other actions of those with whom we share a language and way of life. When someone says 'It's going to rain' and takes his umbrella from his bag, it is immediately clear to us that he believes it is going to rain. In order to know this we neither theorize nor simulate: we just perceive. Of course, this is not straightforward visual perception of the sort that we use to see the umbrella. But it is like visual perception in that it provides immediate and non-inferential awareness of its objects. We might call this the 'Observational Theory'.

The Observational Theory does not seem to accord very well with the fact that we frequently do have to indulge in a fair amount of psychologizing to find out what others believe. It is clear that any given action might be the upshot of any number of different psychological attitudes. This applies even in apparently simple cases. Someone might say 'It's going to rain' and take out his umbrella because, for example, his friend is suspended from a dark balloon near a beehive with the intention of stealing honey: the idea is to make the bees believe that it is going to rain, and therefore take the balloon for a dark cloud, and therefore pay no attention to it, and so fail to notice the dangling friend. Given this sort of possibility, the observer would surely be rash to judge immediately that the agent believes that it is going to rain. Rather, they would need to determine ~ perhaps by theory, perhaps by simulation ~ which of the various clusters of mental states that might have led to the action actually did so. This would involve bringing in further knowledge of the agent, the background circumstances and so forth. It is hard to see how the sort of complex mental process involved in this kind of psychological reflection could be assimilated to any form of observation.

Attributions of intentionality that depend on optimality or rationality are interpretations of the phenomena ~ a 'heuristic overlay' (1969), describing an inescapably idealized 'real pattern'. Like such abstractions as centres of gravity and parallelograms of force, the beliefs and desires posited by the intentional stance have no independent and concrete existence; and since this is the case, there would be no deeper facts that could settle the issue if ~ most importantly ~ rival intentional interpretations arose that did equally well at rationalizing the history of behaviour of an entity. For Willard Van Orman Quine (1908-2000), the most influential American philosopher of the latter half of the twentieth century, the thesis of the indeterminacy of radical translation carries all the way into a thesis of the indeterminacy of the radical interpretation of mental states and processes.
