Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate ‘systematically misleading expressions’ in forms that are logically more accurate. He was particularly concerned with statements whose grammatical form suggests the existence of nonexistent objects. Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analyzing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
A logical calculus, also called a formal language or a logical system, is a system in which explicit rules are provided for determining (1) which are the expressions of the system, (2) which sequences of expressions count as well formed (the well-formed formulae), and (3) which sequences of formulae count as proofs. A system may include axioms, for which no proof is required and at which the branches of a proof terminate. The most basic examples are the propositional calculus and the predicate calculus.
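As a minimal sketch of such a system (this is one standard Hilbert-style presentation of the propositional calculus, with the familiar three axiom schemas; the particular choice is illustrative, not something the text above fixes):

\[
\begin{array}{ll}
\text{Expressions:} & p, q, r, \dots;\ \neg;\ \to;\ (\,,\,) \\[2pt]
\text{Well-formed formulae:} & \text{each sentence letter is a wff; if } A, B \text{ are wffs, so are } \neg A \text{ and } (A \to B) \\[2pt]
\text{Axiom schemas:} & A \to (B \to A) \\
 & (A \to (B \to C)) \to ((A \to B) \to (A \to C)) \\
 & (\neg A \to \neg B) \to (B \to A) \\[2pt]
\text{Rule of inference:} & \text{from } A \text{ and } A \to B \text{ infer } B \ (\textit{modus ponens}) \\[2pt]
\text{Proof:} & \text{a finite sequence of wffs, each an axiom or obtained from earlier members by the rule}
\end{array}
\]

A derivation of, say, \(p \to p\) then proceeds purely by pattern-matching on these rules, which is what makes such systems ‘formal’.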
The most immediate issues surrounding certainty are especially connected with those concerning ‘scepticism’. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearances and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the new Academy was thus a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.
As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoic conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptic counsels epoché, or the suspension of belief, and then goes on to celebrate a way of life whose object is ataraxia, the tranquillity resulting from such suspension.
Mitigated scepticism, by contrast, accepts everyday or commonsense beliefs not as the delivery of reason but as due more to custom and habit, while remaining sceptical about the power of reason to give us much more. Mitigated scepticism is thus close to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic; in the ‘method of doubt’ he uses a sceptical scenario in order to begin the process of finding a general distinguishing mark of knowledge. Descartes trusts in his category of ‘clear and distinct’ ideas, not far removed from the phantasiá kataleptikê of the Stoics.
Many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. (They may grant the principle that every effect is a consequence of an antecedent cause or causes; for causality to hold it is not necessary that an effect be predictable, since the antecedent causes may be too numerous, too complicated, or too interrelated for analysis.) In order to avoid scepticism, epistemologists have generally held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy certain criteria as well as being true: there will be criteria specifying when it is warranted to accept a belief arrived at by ‘deduction’ or ‘induction’, and, for the alleged cases of self-evident truths, general principles specifying the sort of consideration that will make accepting them warranted to some degree.
Besides, there is another view, the absolute global view that we do not have any knowledge whatsoever. It is doubtful, however, that any philosopher seriously entertains absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to ‘the evident’ (the non-evident being any belief that requires evidence in order to be warranted).
René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas. What was in doubt was whether they ‘corresponded’ to anything beyond ideas.
All the same, Pyrrhonist and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic’s mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas the Cartesian sceptic will agree that no empirical belief about anything other than one’s own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views thus concerns the stringency of the requirements for a belief’s being sufficiently warranted to count as knowledge.
A Cartesian requires certainty; a Pyrrhonist merely requires that a belief be more warranted than its negation.
Cartesian scepticism derives from the argument with which Descartes motivates the method of doubt: it holds that we do not have knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly, is that there is a legitimate doubt about all such propositions, because there is no way justifiably to deny that our senses are being stimulated by some cause radically different from the objects we normally think affect them. Thus, if the Pyrrhonist is the agnostic of epistemology, the Cartesian sceptic is its atheist.
Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, arguments for Pyrrhonism are much more difficult to construct: a Pyrrhonist must show that there is no better set of reasons for believing any proposition than for denying it, whereas the Cartesian need only show that empirical beliefs fall short of certainty.
Among the many contributions pragmatism has made to the theory of knowledge, it is nonetheless possible to identify a set of shared doctrines, and to discern two broad styles of pragmatism. Both styles hold that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.
Reformist pragmatism repudiates the requirement of absolute certainty for knowledge and insists on the connection of knowledge with activity, yet it grants the legitimacy of traditional questions about the truth-conduciveness of our cognitive practices, and sustains a conception of truth objective enough to give those questions their own purchase.
Revolutionary pragmatism, by contrast, relinquishes the objectivity of truth, acknowledging no legitimate epistemological questions over and above those that arise naturally within our current cognitive practice.
It seems clear that certainty is a property that can be ascribed either to a person or to a belief. We can say that a person ‘S’ is certain, or we can say that a proposition ‘p’ is certain. The two uses can be connected by saying that ‘S’ has the right to be certain just in case ‘p’ is sufficiently warranted.
In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. Roughly, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or ever possible, either for any proposition at all, or for any proposition from some suspect family (ethics, theory, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion, and the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our system of beliefs is built. Others reject the metaphor, looking for mutual support and coherence, without foundations.
However, in moral theory there is the corresponding view that there are inviolable moral standards, absolute and valid regardless of variable human desires, policies, or prescriptions.
In spite of the notorious difficulty of reading Kantian ethics, the basic distinction is clear enough: a hypothetical imperative embeds a command which is in place only given some antecedent desire or project: ‘If you want to look wise, stay quiet’. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction lapses. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, ‘Tell the truth (regardless of whether you want to or not)’. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: ‘If you crave drink, don’t become a bartender’ may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: ‘act only on that maxim through which you can at the same time will that it should become a universal law’; (2) the formula of the law of nature: ‘act as if the maxim of your action were to become through your will a universal law of nature’; (3) the formula of the end-in-itself: ‘act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end’; (4) the formula of autonomy, or considering ‘the will of every rational being as a will which makes universal law’; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
A categorical proposition, by contrast, is simply a proposition that is not a conditional. Modern opinion is wary of the distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?) may equal ‘if X is given a range of tasks, she performs them better than many people’ (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
A field, in ordinary usage, is a limited area of knowledge or endeavour to which pursuits, activities, and interests are confined. As a concept of physical theory, however, a field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers: that is, are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require us to accept ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding how forces of attraction and repulsion can be ‘grounded’ in the properties of the medium.
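To make the ‘field value at a point’ idea concrete, consider the standard Newtonian case (supplied here for illustration; the passage above does not single it out). The gravitational field of a point mass \(M\) assigns to each point the force per unit mass that a test particle would experience there:

\[
\mathbf{g}(\mathbf{r}) \;=\; \frac{\mathbf{F}(\mathbf{r})}{m_{\text{test}}} \;=\; -\,\frac{GM}{r^{2}}\,\hat{\mathbf{r}}.
\]

The dispositional reading treats \(\mathbf{g}(\mathbf{r})\) as shorthand for a conditional (what would happen to a test mass placed at \(\mathbf{r}\)), while the categorical reading treats it as describing an actual state of the medium, or of space itself, at \(\mathbf{r}\).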
The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to ‘action at a distance’ muddies the waters. The idea is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Faraday, with whose work the physical notion became established. In his paper ‘On the Physical Character of the Lines of Magnetic Force’ (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether the motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.
Once again we meet the view, especially associated with the American psychologist and philosopher William James (1842-1910), that the truth of a statement can be defined in terms of the ‘utility’ of accepting it. Stated so baldly, the view is open to objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects and purposes formed by its possessor. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with adaptation, or with utility in the widest sense. The Wittgensteinian doctrine that meaning is use bears upon the nature of belief and its relations with human attitude and emotion, and upon the connection between belief on the one hand and action on the other. One way of cementing the connection is found in the idea that natural selection makes us the cognitive creatures we are precisely because beliefs have effects: they work. Pragmatist themes can be found in Kant’s doctrine, and they continued to play an influential role in the theory of meaning and of truth.
James (1842-1910), who with characteristic generosity exaggerated his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual’s personal consciousness.
From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His ‘Will to Believe’ doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief’s benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.
Such an approach, however, sets James’s theory of meaning apart from verificationism, which is dismissive of metaphysics. Unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and motor responses. Moreover, his method provided a standard of value for metaphysical claims, not a way of dismissing them as meaningless. It should also be noted that, in his more circumspect moments, James did not hold that even his broad set of consequences was exhaustive of a term’s meaning. ‘Theism’, for example, he took to have antecedent, definitional meaning, in addition to its important pragmatic meaning.
James’s theory of truth reflects his teleological conception of cognition, in considering a true belief to be one that is compatible with our existing system of beliefs and that leads us to satisfactory interaction with the world.
Peirce’s famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid. If we believe this, we expect, for instance, that blue litmus paper dipped into it would turn red: we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a conceptual representation provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification by means of the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
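Schematically, and using the litmus example above (the notation and the second conditional are supplied here for illustration), the principle clarifies a concept by the set of conditional expectations it licenses:

\[
\mathrm{acid}(x) \;\leadsto\; \left\{
\begin{array}{l}
\text{dip blue litmus paper into } x \Rightarrow \text{the paper turns red} \\
\text{mix } x \text{ with a base} \Rightarrow \text{the two neutralize} \\
\quad\vdots
\end{array}
\right.
\]

On the clarificationist reading, the full list of such action-to-result conditionals exhausts what is relevant in the concept’s content for deciding whether a hypothesis framed with it is worth testing.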
Most important is the application of the pragmatic principle in Peirce’s account of reality: when we take something to be real, we think it is ‘fated to be agreed upon by all who investigate’ the matter to which it stands. In other words, if I believe that it is really the case that ‘p’, then I expect that anyone who inquired deeply enough into whether ‘p’ would arrive at the belief that ‘p’. It is not part of the theory that the experimental consequences of our actions should be specified in a privileged empiricist vocabulary: Peirce insisted that perceptual judgements are theory-laden. Nor is it his view that the conditionals collected in clarifying a concept are all analytic. In later writings, moreover, he argued that the pragmatic principle could only be made plausible to someone who accepted metaphysical realism: it requires that ‘would-bes’ are objective and, of course, real.
If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it, for they seem legion. Some opponents deny that the entities posited by the relevant discourse exist: the standard example is ‘idealism’, the doctrine that reality is somehow mind-correlative or mind-coordinated, that the real objects comprising the ‘external world’ do not exist independently of cognizing minds, but only as in some way correlative to mental operations. The doctrine centres on the conceptual point that reality as we understand it is meaningful and reflects the workings of mindful purposes, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the ‘real’ but even to the resulting character we attribute to it.
The term ‘real’ is most straightforwardly used when qualifying another description: a real ‘x’ may be contrasted with a fake ‘x’, a failed ‘x’, a near ‘x’, and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some doctrine or theory. The central error in thinking of reality as the totality of existence is to think of the ‘unreal’ as a separate domain of things, perhaps unfairly denied the benefits of existence.
The supposed non-existence of all things, Nothing, is the product of a logical confusion: treating the term ‘nothing’ as itself a referring expression instead of a ‘quantifier’. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as ‘Nothing is all around us’ talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate ‘is all around us’ has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of nothing, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between ‘existentialists’ and ‘analytic philosophers’ on the point is that whereas the former are afraid of Nothing, the latter think that there is nothing to be afraid of.
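The quantifier point can be put formally (the predicate letter is chosen here purely for illustration). ‘Nothing is all around us’ has the form

\[
\neg\,\exists x\, A(x), \qquad\text{equivalently}\qquad \forall x\,\neg A(x),
\]

where \(A(x)\) abbreviates ‘x is all around us’: the sentence denies that \(A\) has application. The confusion is to read it instead as \(A(n)\), with ‘Nothing’ treated as a name \(n\) of some special thing that satisfies the predicate.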
A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.
Realism, in general, is the standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of the dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), is borrowed from the ‘intuitionistic’ critique of classical mathematics, and proposes that the unrestricted use of the ‘principle of bivalence’ is the trademark of ‘realism’. However, this has to overcome counter-examples both ways: although Aquinas was a moral ‘realist’, he held that moral reality was not sufficiently structured to make true or false every moral claim, while Kant believed that he could use the law of bivalence happily in mathematics, precisely because it concerned only our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who is impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
Assigned to the modern treatment of existence in the theory of ‘quantification’ is the point sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. In this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like ‘This exists’, where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. ‘This exists’ is therefore unlike ‘Tamed tigers exist’, where a property is said to have an instance, for the word ‘this’ does not locate a property, but only an individual.
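A minimal formalization of the contrast (predicate names supplied here for illustration): ‘Tamed tigers exist’ says that a property has instances, which in Frege’s idiom is the denial that its number is nought,

\[
\exists x\,\mathrm{TamedTiger}(x), \qquad\text{i.e.}\qquad \#\{x : \mathrm{TamedTiger}(x)\} \neq 0,
\]

whereas ‘This exists’ could at best be rendered \(\exists x\,(x = \mathit{this})\), in which ‘this’ contributes no property for the quantifier to operate on, but only an individual.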
Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.
Philosophers have pondered whether the unreal should be set within the domain of Being. It is not apparent that there can be such a subject as Being by itself, and there is little the philosopher’s study can say about it. Nevertheless, the concept has had a central place in philosophy from Parmenides to Heidegger. The essential question, ‘Why is there something and not nothing?’, prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.
In this tradition, ever since Plato, the ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains obscure. The celebrated ontological argument for the existence of God was first propounded by Anselm in his Proslogion. The argument proceeds by defining God as ‘something than which nothing greater can be conceived’. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we can conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore God cannot exist only in the understanding, but exists in reality.
The cosmological argument is an influential argument (or family of arguments) for the existence of God whose premiss is that all natural things are dependent for their existence on something else. The totality of dependent beings must then itself depend upon a non-dependent, or necessarily existent, being, which is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.
Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So the ‘God’ that ends the regress must exist necessarily: it must not be an entity of which the same kinds of questions can be raised. The other problem with the argument is that it gives no ground for attributing concern and care to the deity, nor for connecting the necessarily existent being it derives with human values and aspirations.
The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence. Its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.
In the 20th century, modal versions of the ontological argument were propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every ‘possible world’, and then allows that it is at least possible that an unsurpassably great being exists. This means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in a world entails that it exists and is perfect in every world); so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from possibly necessarily ‘p’ we can derive necessarily ‘p’. A symmetrical proof starting from the assumption that it is possible that such a being not exist would derive that it is impossible that it exists.
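The modal step can be made explicit in a compressed S5 sketch (the formalization is supplied here, with \(G\) abbreviating ‘an unsurpassably great being exists’):

\[
\begin{array}{lll}
1. & \Box(G \to \Box G) & \text{(unsurpassable greatness entails existence in every world)} \\
2. & \Diamond G & \text{(the contested concession)} \\
3. & \Diamond\Box G & \text{(from 1 and 2)} \\
4. & \Box G & \text{(S5 theorem: } \Diamond\Box p \to \Box p\text{)}
\end{array}
\]

The symmetrical proof runs from \(\Diamond\neg G\), i.e. \(\neg\Box G\): by 1, \(\neg\Box G \to \neg G\) holds in every world, and since \(\Diamond\neg G\) is itself necessary in S5, \(\neg G\) holds in every world, i.e. \(\Box\neg G\).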
The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that as a result of the omission the same result occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; but if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine, not a murderer. Critics reply that omissions can be as deliberate and immoral as commissions: if I am responsible for your food and fail to feed you, my omission is surely a killing. ‘Doing nothing’ can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears such a general moral weight.
The doctrine of double effect is a principle attempting to define when an action that has both good and bad results is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one thing or two: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).
The form is therefore in some sense available to reactivate a new body. It is thus not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas’s account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth; it is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
The special way that we each have of knowing our own thoughts, intentions, and sensations has been challenged by the many philosophical behaviourist and functionalist tendencies that have found it important to deny that there is such a special way, arguing that I know of my own mind in much the way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology.

The philosophy of history is philosophical reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegelian usage it came to mean universal or world history. The Enlightenment confidence that science, reason, and understanding gave history a progressive moral thread was, under the influence of the German Romantic Johann Gottfried Herder (1744-1803) and of Immanuel Kant, taken further to hold that the philosophy of history is the detecting of a grand system, the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engine of historical change. The idea is readily intelligible if the world of nature and the world of thought become identified. The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, equated with freedom within the state; this in turn is the development of thought, or a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel’s method is at its most successful when the object is the history of ideas, and the evolution of thinking marches in step with logical oppositions and their resolutions as encountered by various systems of thought.
With revolutionary communism, in Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel’s progressive structure but placing the achievement of the goal of history in a future in which the political conditions for freedom have come to exist, and with economic and political forces rather than ‘reason’ in the engine room. Although speculations upon history of this kind continued to be written, by the late 19th century large-scale speculation had largely given way to concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, yet nonetheless in some way different from the enquiries of the scientist. Since the subject-matter is the past thoughts and actions of human beings, what is needed is an ability to re-live those past thoughts, knowing the deliberations of past agents as if they were the historian’s own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the verstehen approach: understanding others is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation and thereby understanding what they experienced and thought. The question of the form of historical explanation, and the fact that general laws have either no place or only a minor place in the human sciences, is accordingly prominent in discussions of the distinctiveness of historical understanding.
The theory-theory is the view that everyday attributions of intention, belief, and meaning to other persons proceed via tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, as the child learns simultaneously the minds of others and the meaning of terms in its native language.
On the rival view, our understanding of others is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation ‘in their moccasins’, or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they are our own. The suggestion is a modern development of the ‘verstehen’ tradition associated with Dilthey, Weber, and Collingwood.
To return to Aquinas: in the theory of knowledge he holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known; a human’s corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply to beings further up the hierarchy, such as the angels.
In the domain of theology Aquinas deploys the distinction emphasized by Eriugena between what can be known of God by reason and what only by revelation, and he lays out five arguments for the existence of God: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, or in other words something that has necessary existence; (4) the gradations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end, to which all things are directed, and the existence of this end demands a being that ordained it. All the arguments are physico-theological arguments: standing between reason and faith, they are Aquinas’s proofs of the existence of God.
He readily recognizes that there are doctrines, such as that of the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God’s essence is identified with his existence, as pure activity. God is simple, containing no potential. Nevertheless, we cannot obtain knowledge of what God is (his quiddity), and must remain content with descriptions that apply to him partly by way of analogy; what God reveals of himself is not himself. (The interpretative maxim at work here does much the same work as the principle of charity: it suggests that we regulate our procedures of interpretation by maximizing the extent to which we see the subject as humanly reasonable, rather than the extent to which we see the subject as right about things.)
An immediate problem in ethics was posed by the English philosopher Philippa Foot in her ‘The Problem of Abortion and the Doctrine of the Double Effect’ (1967). A runaway train or trolley comes to a fork in the track. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch where the five are working, and you as a bystander can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of the one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of others in which utilitarian reasoning seems to lead to one course of action, while a person’s integrity or principles may oppose it.
Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we may apply if we conceive of them as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the ‘will’ and ‘free will’. Other problems in the theory of action include drawing the distinction between an action and its consequences, and describing the structure involved when we do one thing ‘by’ doing another. There are even problems of planning and dating: where someone shoots someone on one day and in one place, and the victim then dies on another day and in another place, where and when did the murderous act take place?
As for causation, it is not clear that only events can be causally related. Kant cites the example of a cannonball at rest upon a cushion, but causing the cushion to be the shape that it is, to suggest that states of affairs, or objects, or facts may also be causally related. The central problem, however, is to understand the element of necessitation or determinacy of the future. Events, Hume thought, are in themselves ‘loose and separate’: how then are we to conceive of the relation between them? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining the patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the ‘must’ of causal necessitation. Particular puzzles about causation exist quite apart from the general problem of forming any conception of what it is: how are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?
The problem of free will, nonetheless, is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event ‘C’, there will be some antecedent state of nature ‘N’, and a law of nature ‘L’, such that given ‘L’, ‘N’ will be followed by ‘C’. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state ‘N’ and the laws. Since determinism is universal, these in turn are fixed, and so backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
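Schematically (the notation is supplied here to make the regress visible, under the definition just given):

\[
\forall C\ \exists N\ \exists L\ \big(\,L\ \&\ N \Rightarrow C\,\big),
\]

so a choice \(C_{0}\) of mine is entailed by some earlier state \(N_{0}\) together with the laws; \(N_{0}\) is itself an event, hence entailed by some still earlier \(N_{1}\), and so on back to states obtaining before my birth, for which I bear no responsibility.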
Reactions to this problem are commonly classified as follows. (1) Hard determinism accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism, or compatibilism: reactions in this family assert that everything you should want from a notion of freedom is quite compatible with determinism. In particular, even if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option). (3) Libertarianism is the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or postulating that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, so that it is only through confusing them that the problem seems urgent. None of these avenues has gained general popularity; it is, in any case, an error to confuse determinism with fatalism.
The dilemma for determinism is that if an action is the end of a causal chain, a set of antecedent events stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.
The dilemma adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its occurrence. So, whether or not determinism is true, responsibility is shown to be illusory.
Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.
A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. The theory that there are such acts is problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now needs explanation. In Kantian terms, to act in accordance with the law of autonomy, or freedom, is to act in accordance with universal moral law, regardless of selfish advantage.
A central object in the study of Kant’s ethics is to understand the expressions of the inescapable, binding requirements of the categorical imperative, and to understand whether its different formulations are equivalent at some deep level. Kant’s own applications of the notions are not always convincing. One cause of confusion lies in relating Kant’s ethics to theories such as ‘expressivism’: it is easy to see that the categorical imperative cannot be the expression of a sentiment, but must derive from something ‘unconditional’ or ‘necessary’, such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands, and it is as basic to language as the need to communicate information; utterances in animal signalling systems may often be interpreted either way. Understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse, is accordingly important; the ethical theory of ‘prescriptivism’ in fact equates the two functions. A further question is whether there is an imperative logic. ‘Hump that bale’ seems to follow from ‘Tote that barge and hump that bale’, as ‘It’s raining’ follows from ‘It’s windy and it’s raining’. But it is harder to say how to include other forms: does ‘Shut the door or shut the window’ follow from ‘Shut the window’, for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying one command without satisfying the other, thereby turning it into a variation of ordinary deductive logic.
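A sketch of that satisfaction approach (the definition is the standard one for such systems, supplied here for illustration): let a command !A entail a command !B just in case every state of affairs satisfying A satisfies B,

\[
{!A} \models {!B} \quad\text{iff}\quad \forall s\,\big(s \Vdash A \;\rightarrow\; s \Vdash B\big).
\]

On this definition ‘!(tote the barge and hump the bale)’ entails ‘!(hump the bale)’, as desired; but ‘!(shut the window)’ likewise entails ‘!(shut the door or shut the window)’, a result many find counter-intuitive (Ross’s paradox), which illustrates why imperative logic remains contested.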
Although the morality of people and their ethics amount to the same thing, there is a usage that restricts ‘morality’ to systems such as that of Kant, based on notions such as duty, obligation, and principles of conduct, reserving ‘ethics’ for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of ‘moral’ considerations from other practical considerations. The scholarly issues are complicated, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.
Human motivation has been a major topic of philosophical inquiry, especially in Aristotle, and again since the 17th and 18th centuries, when the ‘science of man’ began to probe into human motivation and emotion. For writers such as the French moralists, or Hutcheson, Hume, Smith, and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy, or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.
In some moral systems, notably that of Immanuel Kant, real moral worth comes only with acting rightly because it is right. If you do what is right, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet this in turn seems to discount other admirable motivations, such as acting from sheer benevolence or ‘sympathy’. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. The opposed, particularist view stands against ethics that rely on highly general and abstract principles, particularly those associated with the Kantian categorical imperatives; on this view no consideration counts in favour of any particular way of life taken on its own, and practical reasoning can only proceed by identifying salient features of a situation that weigh on one side or another.
Moral dilemmas have been set out with intense philosophical concern, since they exert profound pressure on any defence of common-sense morality. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, making the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what was done was as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it was not the subject's fault that he or she faced the dilemma, so the rationality of these emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other. If we accept that dilemmas are real and important, this fact can be used against theories, such as 'utilitarianism', that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that generates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.
Nevertheless, some theories of ethics see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Situational ethics and virtue ethics, by contrast, regard them as at best rules of thumb, frequently disguising the great complexity of practical reasoning that Kant gathered under the notion of the moral law.
In this connection, natural law theory, the view that law and morality are grounded in the nature of things, is especially associated with St Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic church. More broadly, the term covers any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is found in some Protestant writings, and arguably derives from a Platonic view of ethics and the implicit teaching of Stoicism. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen in and for themselves by means of 'natural light' or reason itself, and, in religious versions of the theory, expresses God's will for creation. Non-religious versions substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will. Grotius, for instance, sides with the view that the content of natural law is independent of any will, including that of God.
The German natural law theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view. His great work was the De Jure Naturae et Gentium (1672), translated into English as Of the Law of Nature and Nations (1710). Pufendorf was influenced by Descartes, Hobbes and the scientific revolution of the 17th century; his ambition was to introduce a newly scientific, 'mathematical' treatment of ethics and law, free from the tainted Aristotelian underpinning of 'scholasticism'. Like that of his contemporary Locke, his conception of natural law includes rational and religious principles, making it only a partial forerunner of the more resolutely empiricist and political treatments of the Enlightenment.
The dilemma Pufendorf confronted goes back to Plato's dialogue Euthyphro, which asks: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option the choice of the gods creates goodness and value. Even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option we have to understand a source of value lying behind or beyond the will even of the gods, and by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God's nature, and is therefore distinct from his willing, but not distinct from him.
The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call good those things that we care about? It also generalizes to affect our understanding of the authority of other things. Mathematics, for example: are its truths necessary because we deem them to be so, or do we deem them to be so because they are necessary?
The natural law tradition may also assume a stronger form, in which it is claimed that various facts entail values, or that reason by itself is capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed to be binding on all human beings, regardless of their desires.
The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed 'synderesis' (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St Jerome, whose scintilla conscientiae (gleam of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas, for whom it is an infallible, natural, simple and immediate grasp of first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.
It is, nevertheless, a view of law and morality especially associated with Aquinas and the subsequent scholastic tradition. On a related conservative theme, enthusiasm for reform for its own sake, or for 'rational' schemes thought up by managers and theorists, is held to be entirely misplaced; major exponents of this theme include the British absolute idealist Francis Herbert Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. In the idealism of Bradley, notably, there is the doctrine that change is contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step towards this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz, and was a subject of the debate between him and Newton's absolutist pupil, Clarke.
Generally, 'nature' is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense, or of dogs to be friendly), and also to the natural world as a whole. The sense in which it applies to species quickly links up with ethical and aesthetic ideals: a thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children and other species, not quite making it.
Nature in general can, however, function as a foil to any ideal as much as a source of ideals: in this sense fallen nature is contrasted with a supposed celestial realization of the 'forms'. The theory of 'forms' is probably the most characteristic, and most contested, of the doctrines of Plato. In the background lie the Pythagorean conception of form as the key to physical nature, but also the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. He is famous for capping the doctrine of Heraclitus of Ephesus, the guiding idea of whose philosophy was that of the logos, capable of being heard or hearkened to by people, unifying opposites, and somehow associated with fire, which is preeminent among the four elements that Heraclitus distinguishes: fire, air (breath, the stuff of which souls are composed), earth, and water. Heraclitus is principally remembered for the doctrine of the 'flux' of all things, and the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g. the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning, and were left to his follower Cratylus, who concluded that the flux cannot be captured in words. According to Aristotle, Cratylus eventually held that since 'regarding that which everywhere in every respect is changing nothing is just to say', the right course is to stay silent and wag one's finger. Plato's theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.
The Galilean world view might have been expected to drain nature of its ethical content; however, the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but also one associated with progress in human history, a definition elastic enough to have been taken to fit many things, including ordinary human self-consciousness. What is contrasted with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar, (2) the supernatural, or the world of gods and invisible agencies, (3) the world of rationality and intelligence, conceived of as distinct from the biological and physical order, (4) that which is manufactured and artefactual, or the product of human intervention, and (5) related to that, the world of convention and artifice.
Different conceptions of nature continue to have ethical overtones: for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, or the idea that it is women's nature to be one thing or another is taken as a justification for differential social expectations. The term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the 'masculine' self-image, itself a socially variable and potentially distorting picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In the latter area, particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, or to the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. To more radical feminists, however, such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such symmetrical powers and rights.
Biological determinism holds that biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.
The philosophy of social science is more heavily intertwined with actual social science than is the case with other subjects such as physics or mathematics, since its central question is whether there can be such a thing as social science at all. The idea of a 'science of man', devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment, and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people's own ideas of what should happen, and like fashions those ideas change in unpredictable ways, as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.
The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for assessing the various genetic stories that might provide such explanations.
Among the features proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved unnecessarily controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people's characteristics, e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to environment: for instance, it may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative 'just so' stories, which may or may not identify real selective mechanisms.
Subsequently, in the 19th century, attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which advocated an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T. H. Huxley said that Spencer's definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the 'hurdy-gurdy' monotony of him, his whole system wooden, as if knocked together out of cracked hemlock.
The premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently the relation between evolution and ethics has been rethought in the light of biological discoveries concerning altruism and kin-selection.
In this vein, evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, and our moral reactions, including the disposition to detect and punish those who cheat on agreements or who 'free-ride', taking the benefits of cooperation without bearing its costs, as well as our cognitive structures and many other features. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.
For all that, an essential part of the teaching of the British absolute idealist Francis Herbert Bradley (1846-1924) was that the self is individualized only through community, and realizes itself by contributing to social and other ideals. However, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831).
Understandably, Bradley's hostility to relations echoes a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. Leibniz was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. Friedrich Schelling (1775-1854), in particular, saw nature as a creative spirit whose aspiration is ever fuller and more complete self-realization. Romanticism drew on the same intellectual and emotional resources as German idealism, which was increasingly culminating in the philosophy of Hegel and of absolute idealism.
Questions of nature and value arise here as well. Most of ethics addresses the problems of human desire and need: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species, or the protection of wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their independence of human purposes that their value consists. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but one of due humility and reverence, a moral disability. The problem is one of expressing this value, and mobilizing it against utilitarian arguments for developing natural areas and exterminating species, more or less at will.
Many concerns and disputes cluster around the ideas associated with the term 'substance'. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is. This will ensure that the substance of a thing is that which remains through change in its properties; in Aristotle, this essence becomes more than just the matter, but a unity of matter and form. (2) That which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties. A substance is then the subject of predication, that about which things are said, as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relation, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tends to disappear in empiricist thought, with the sensible qualities of things, and the notion of that in which they inhere, giving way to a notion of their regular concurrence. This in turn is problematic, since it only makes sense to talk of the occurrence of instances of qualities, not of qualities themselves; so the problem remains of what it is for a quality to be instantiated.
Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.
The sublime is a concept deeply embedded in 18th-century aesthetics, but deriving from the 1st-century rhetorical treatise On the Sublime, by Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759: 'When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates its frame: and having overcome the opposition which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates; and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.'
In Kant's aesthetic theory the sublime 'raises the soul above the height of vulgar complacency'. We experience the vast spectacles of nature as 'absolutely great' and of irresistible might and power. This perception is fearful, but by conquering this fear, and by regarding as small 'those things of which we are wont to be solicitous', we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.
Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers George Edward Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of 'essentialism', stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would otherwise describe as my not wearing the hat now, we would strictly not be imagining me and the hat, but only some different individual.
The doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716), that if a person had any attributes other than the ones he has, he would not have been the same person. Leibniz thought that when asked what would have happened if Peter had not denied Christ, we are really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name 'Peter' might be understood 'what is involved in those attributes [of Peter] from which the denial does not follow', so that we may be able to allow external relations, these being relations which individuals could have or lack depending upon contingent circumstances. The phrase 'relations of ideas' is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: 'All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, relations of ideas and matters of fact' (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known independently of experience must be internal to the mind, and hence transparent to us.
In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called 'Hume's Fork', is a version of the distinction between the a priori and the empirical, but reflects the 17th- and early 18th-century belief that the a priori is established by chains of intuitive certainty. It is extremely important that in the period between Descartes and J. S. Mill a demonstration is not a formal derivation, but a chain of 'intuitive' comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiries proceed by demonstrating their results.
A mathematical proof is an argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.
The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 6th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers. But an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of 1 is the irrational number √2.
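The argument can be reconstructed as a minimal sketch in modern notation (a standard reconstruction by contradiction, not a record of the original Greek proof):

```latex
% A standard reconstruction, by contradiction, of the irrationality
% of the diagonal d of a unit square.
d^2 = 1^2 + 1^2 = 2 \quad \text{(Pythagorean theorem)}

\text{Suppose } d = p/q \text{ in lowest terms. Then } p^2 = 2q^2,
\text{ so } p^2 \text{ is even, hence } p \text{ is even: } p = 2k.

\text{Substituting, } 4k^2 = 2q^2, \text{ so } q^2 = 2k^2 \text{ and } q
\text{ is even as well, contradicting that } p/q \text{ was in lowest terms.}
```

Since no ratio of whole numbers survives the contradiction, the diagonal is incommensurable with the side, which is all that 'irrational' originally meant.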
The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.
In the 20th century, proofs have been written that are so complex that no one person understands every argument used in them. In 1976, a computer was used to complete the proof of the four-colour theorem. This theorem states that four colours are sufficient to colour any map in such a way that regions with a common boundary line have different colours. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.
Proof theory is the study of the relations of deducibility among sentences in a logical calculus. Deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel's second incompleteness theorem.
What is more, the use of a model to test for consistency in an 'axiomatized system' is older than modern logic. Descartes' algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar representations were used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: proof theory studies relations of deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to sentences that are false under that same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula B is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). The central questions for a calculus will then be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only 'tautologies'. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus. (The term 'calculus' is also used for the calculus of variations, a mathematical method for solving those physical problems that can be stated in the form that a certain definite integral shall have a stationary value for small changes of the functions in the integrand and of the limits of integration.)
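To make the semantic notions concrete, here is a minimal sketch in Python (the formula encoding and function names are invented for illustration, not drawn from any standard library) that decides validity and semantic consequence for the propositional calculus by brute-force enumeration of interpretations:

```python
from itertools import product

# Represent formulae as functions from a valuation (a dict mapping
# variable names to booleans) to a boolean.
def Var(name):     return lambda v: v[name]
def Not(a):        return lambda v: not a(v)
def And(a, b):     return lambda v: a(v) and b(v)
def Or(a, b):      return lambda v: a(v) or b(v)
def Implies(a, b): return lambda v: (not a(v)) or b(v)

def valid(formula, variables):
    """A formula is valid (a tautology) if it is true under every
    interpretation, i.e. every assignment of truth-values."""
    return all(formula(dict(zip(variables, vals)))
               for vals in product([True, False], repeat=len(variables)))

def semantic_consequence(premises, conclusion, variables):
    """{A1 ... An} |= B iff B is true in every interpretation
    in which all the premises are true."""
    for vals in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

p, q = Var('p'), Var('q')
print(valid(Or(p, Not(p)), ['p']))                       # True: excluded middle
print(semantic_consequence([And(p, q)], p, ['p', 'q']))  # True
print(semantic_consequence([Or(p, q)], p, ['p', 'q']))   # False
```

Because the propositional calculus is sound and complete, this exhaustive semantic test agrees exactly with what a complete proof theory can derive.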
Euclidean geometry is the greatest example of the pure 'axiomatic method', and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth axiom of the system (the axiom of parallels) could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, leading to the patterns and concepts for general relativity later used by Albert Einstein in developing his theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid's Elements is attributed to the mathematician Eudoxus, and contains a precise development of the theory of proportion, anticipating the real numbers, work which remained unappreciated until rediscovered in the 19th century.
An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions, from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent, in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.
The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behaviour. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.
In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analyzed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.
Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles' where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicating factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
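The zero-sum idea can be made concrete with a minimal sketch in Python (the payoff matrix is invented for illustration). It computes each player's security level; when the row player's maximin differs from the column player's minimax, no pure-strategy saddle point exists and optimal play requires mixed strategies:

```python
# Entries are the row player's payoffs; the column player receives the
# negation of each entry, which is what makes the game zero-sum.
payoff = [
    [3, -1,  2],
    [1,  0, -2],
]

# Row player's maximin: choose the row whose worst case is best.
maximin = max(min(row) for row in payoff)

# Column player's minimax: choose the column whose best case
# (for the row player) is worst.
columns = list(zip(*payoff))
minimax = min(max(col) for col in columns)

print(maximin, minimax)  # -1 and 0 here: no pure-strategy saddle point,
                         # so the optimal solution mixes strategies.
```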
In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting for that term one denoting only a subset of the things denoted by the original. For example, in 'all dogs bark' the term 'dogs' is distributed, since the proposition entails 'all terriers bark', which is obtained from it by such a substitution. In 'not all dogs bark' the same term is not distributed, since the proposition may be true while 'not all terriers bark' is false.
A model is a representation of one system by another, usually more familiar one, whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful 'heuristic' role in science, there has been intense debate over whether a good model itself explains, or whether only an organized structure of laws from which it can be deduced suffices for scientific explanation. This debate was inaugurated by the French physicist Pierre Maurice Marie Duhem (1861-1916), in The Aim and Structure of Physical Theory (1954). Duhem's conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but does not represent the deep underlying nature of reality. His central thesis is that no hypothesis is tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system, although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.
Primary and secondary qualities are the division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns. These latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material: a minimal listing of size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought in terms of identifying these powers with the texture of objects that, according to the corpuscularian science of the time, was the basis of an object's causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96). Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.
Continuing in this vein, the doctrine advocated by the American philosopher David Lewis (1941-2002) is that different possible worlds are to be thought of as existing exactly as this one does: thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge either that the notion fails to fit with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them; but Lewis denied that any other way of interpreting modal statements is tenable.
The 'modality' of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called 'modal' include the tense indicators 'it will be the case that p' and 'it was the case that p', and there are affinities between the 'deontic' indicators, 'it ought to be the case that p' or 'it is permissible that p', and the notions of necessity and possibility.
The aim of a logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of answer is that if we do not we contradict ourselves (or, strictly speaking, we stand ready to contradict ourselves: someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something; however, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs). There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find forms of reasoning such that anyone failing to conform to them will have improbable beliefs. Traditional logic dominated the subject until the 19th century, and the finer work that was done within that tradition has become increasingly recognized in the 20th century; but syllogistic reasoning is now generally regarded as a limited special case of the forms of reasoning that can be represented within the propositional and predicate calculi. These form the heart of modern logic. Their central notions, of quantifiers, variables, and functions, were the creation of the German mathematician Gottlob Frege, who is recognized as the father of modern logic, although his treatment of a logical system as an abstract mathematical structure, or algebra, had been heralded by the English mathematician and logician George Boole (1815-64), whose pamphlet The Mathematical Analysis of Logic (1847) pioneered the algebra of classes. The work was carried on in An Investigation of the Laws of Thought (1854). Boole also published many works in pure mathematics, and on the theory of probability. His name is remembered in the title of Boolean algebra, and the algebraic operations he investigated are denoted by Boolean operations.
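Boole's algebra of classes can be illustrated with a minimal sketch in Python, using sets over a small invented universe; the assertions check De Morgan's laws and the idempotence law that Boole singled out as characteristic of classes:

```python
# A small invented universe and two invented classes within it.
universe = set(range(10))
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

def complement(X):
    """The class of everything in the universe not in X."""
    return universe - X

# De Morgan's laws: the complement of a union is the intersection of
# the complements, and the complement of an intersection is the union
# of the complements.
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Idempotence under intersection (Boole's law x.x = x): intersecting
# a class with itself yields the class unchanged.
assert A & A == A
```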
The syllogism, or categorical syllogism, is the inference of one proposition from two premises. An example is: all horses have tails; all things with tails are four-legged; so all horses are four-legged. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So the first premise of the example is the minor premise, the second the major premise, and 'having a tail' is the middle term. This enables syllogisms to be classified according to the form of the premises and the conclusion (the mood). The other classification is by figure, or the way in which the middle term is placed in the premises.
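The example syllogism (the mood traditionally called 'Barbara') can be checked extensionally with a minimal sketch in Python; the extensions of the three terms are invented for illustration, and validity shows up as the transitivity of subset inclusion:

```python
# Invented extensions for the three terms of the example.
horses      = {'dobbin', 'silver'}
tailed      = {'dobbin', 'silver', 'rover'}
four_legged = {'dobbin', 'silver', 'rover', 'felix'}

minor_premise = horses <= tailed        # 'all horses have tails'
major_premise = tailed <= four_legged   # middle term: 'having a tail'
conclusion    = horses <= four_legged   # 'all horses are four-legged'

# Validity: whenever both premises hold, the conclusion must hold,
# because subset inclusion is transitive.
assert not (minor_premise and major_premise) or conclusion
```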
Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a few of the valid forms of argument. There have subsequently been attempts to extend it, but in general it has been eclipsed by the modern theory of quantification: the predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes '=' as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y iff (∀F)(Fx ↔ Fy), which gives greater expressive power for less complexity.
Modal logic was of great importance historically, particularly in the light of doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher Clarence Irving Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His independent proofs that from a contradiction anything follows helped to stimulate the development of 'relevance logic', which employs a notion of entailment stronger than that of strict implication.
Modal logic is obtained by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written 'N' and 'M'), meaning 'necessarily' and 'possibly' respectively. Plausible axioms such as p → ◊p and □p → p will be wanted. Controversial axioms include □p → □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p → □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). The classical model theory for modal logic, due to the American logician and philosopher Saul Kripke (1940-) and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth in all worlds, and possibility to truth in some world. Various different systems of modal logic result from adjusting the accessibility relation between worlds.
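A minimal sketch in Python of this possible-worlds semantics (the worlds, accessibility relation, and valuation are invented for illustration) evaluates □ as truth at every accessible world and ◊ as truth at some accessible world:

```python
worlds = {'w1', 'w2', 'w3'}
access = {               # which worlds each world can 'see'
    'w1': {'w1', 'w2'},
    'w2': {'w2', 'w3'},
    'w3': {'w3'},
}
valuation = {'p': {'w1', 'w2', 'w3'}, 'q': {'w2'}}  # worlds where each atom holds

def true_at(formula, w):
    """Evaluate a formula, encoded as a nested tuple, at world w."""
    kind = formula[0]
    if kind == 'atom':      # ('atom', 'p')
        return w in valuation[formula[1]]
    if kind == 'not':       # ('not', f)
        return not true_at(formula[1], w)
    if kind == 'box':       # ('box', f): f holds at every accessible world
        return all(true_at(formula[1], u) for u in access[w])
    if kind == 'diamond':   # ('diamond', f): f holds at some accessible world
        return any(true_at(formula[1], u) for u in access[w])

print(true_at(('box', ('atom', 'p')), 'w1'))      # True: p holds wherever w1 sees
print(true_at(('diamond', ('atom', 'q')), 'w1'))  # True: w1 sees w2, where q holds
print(true_at(('box', ('atom', 'q')), 'w1'))      # False: q fails at w1 itself
```

Changing the access relation (making it reflexive, transitive, symmetric, and so on) changes which modal axioms come out valid, which is how the different systems such as S4 and S5 arise.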
Saul Kripke gives the classical modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.
Semantics is one of the three branches into which 'semiotics' is usually divided: the study of the meaning of words, and of the relation of signs to the things to which they apply. In formal studies, a semantics is provided for a formal language when an interpretation or 'model' is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . .) and their meanings. An influential proposal is to attempt to provide a truth definition for the language, which will involve giving a full structure to the bearing that terms of different kinds have on the truth conditions of sentences containing them.
The basic case of reference is the relation between a name and the person or object which it names. The philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between me and the word 'I', are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke's Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that defines the term's contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth-conditions to sentences. Other approaches search for a more substantive relation, possibly causal or psychological or social, between words and things.
However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, Berry, Richard, etc., from the purely logical paradoxes in which no such notions are involved, such as Russell's paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence 'All English sentences should have a verb' includes itself happily in the domain of sentences it is talking about), so the difficulty lies in framing a condition that excludes only pathological self-reference. Paradoxes of the second kind then need a different treatment. Whilst the distinction is convenient, in allowing set theory to proceed by circumventing the latter paradoxes by technical means even when there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that, while there is no agreed solution to the semantic paradoxes, our understanding of Russell's paradox may be imperfect as well.
Truth and falsity are the two classical truth-values that a statement, proposition or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A presupposition is a proposition whose truth is necessary for either the truth or the falsity of another statement, or a suppressed premise or background framework of thought necessary to make an argument valid or a position tenable: thus if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge, the English philosopher and historian Robin George Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of 'absolute presuppositions' which are not properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either a third truth-value is found, 'intermediate' between truth and falsity, or classical logic is preserved but it becomes impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, the cases are equally well handled by regarding the overall sentence as false when the existence claim fails, and explaining the data that the English philosopher P. F. Strawson (1919-) relied upon as effects of 'implicature'.
Views about the meaning of terms will often depend on classifying the implications of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may as a matter of convention carry an implicature: thus one of the relations between 'he is poor and honest' and 'he is poor but honest' is that they have the same content (are true in just the same conditions), but the second has implicatures (that the combination is surprising or significant) that the first lacks.
In classical logic, then, a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called 'many-valued logics'.
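As an illustrative sketch of one such many-valued scheme (an addition to the text, using Kleene's well-known 'strong' three-valued tables), a sentence suffering presupposition failure can be assigned an intermediate value N, and the connectives extended so that the gap propagates unless a classical value settles the matter:

    # A minimal sketch of Kleene's strong three-valued logic.
    # 'T' = true, 'F' = false, 'N' = intermediate (neither true nor false).
    ORDER = {'F': 0, 'N': 1, 'T': 2}

    def neg(a):
        # Negation swaps T and F; the gap stays a gap.
        return {'T': 'F', 'F': 'T', 'N': 'N'}[a]

    def conj(a, b):
        # Conjunction is the minimum of the two values in the order F < N < T.
        return min(a, b, key=lambda v: ORDER[v])

    def disj(a, b):
        # Disjunction is the maximum.
        return max(a, b, key=lambda v: ORDER[v])

    print(conj('N', 'T'))  # N: a gap infects the conjunction
    print(disj('N', 'T'))  # T: one true disjunct settles the matter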
Nevertheless, a definition of the predicate '. . . is true' for a language satisfies convention 'T', the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83), when his method of 'recursive' definition enables us to say for each sentence what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a 'metalanguage'; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth-predicate. Whilst this enables the approach to avoid the contradictions of the semantic paradoxes, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
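Schematically (a standard illustration, not in the original text), convention T demands that for each sentence of the object language the definition entail the corresponding instance of:

    's' is true (in L) if and only if p

where 'p' is replaced by the sentence itself and 's' by a name of it; the stock instance is: 'snow is white' is true if and only if snow is white. The recursive clauses then fix the truth of complex sentences from that of their parts, e.g. 'A and B' is true if and only if 'A' is true and 'B' is true.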
The truth condition of a statement, then, is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
On this view, inferential semantics takes the role of sentences in inference to give a more important key to their meaning than their 'external' relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics or procedural semantics, the view bears some relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.
Moreover, the semantic theory of truth is the view that if a language is provided with a truth definition, there is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or truth as shared across different languages. The view is similar to the disquotational theory.
The redundancy theory, also known as the 'deflationary' view of truth, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell's paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey also showed how, by taking all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', and replacing the term by a variable, instead of saying that quarks have such-and-such properties we can say that there is something that has those properties: this is the Ramsey sentence of the theory. If the process is repeated for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be interpretable, and the content of the theory may reasonably be felt to have been lost.
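Schematically (a standard illustration, not in the original text), if the theory is written as a single sentence T(τ1, . . ., τn) containing its theoretical terms τ1, . . ., τn, the Ramsey sentence is obtained by existential generalization on those terms:

    T(τ1, . . ., τn)   becomes   ∃x1 . . . ∃xn T(x1, . . ., xn)

The Ramsey sentence says only that some things stand in the pattern of relations the theory describes, which is why it preserves the 'topic-neutral' structure while dropping any claim to know what the original terms denote.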
All the while, both Frege and Ramsey are agreed that the essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, redundancy); (2) that in less direct contexts, such as 'everything he said was true', or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second may translate as '(∀p, q)((p & (p → q)) → q)', where there is no use of a notion of truth.
There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth', or 'truth is a norm governing discourse'. Postmodern writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. Perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when 'not-p'.
The simplest formulation of the disquotational theory is the claim that expressions of the form 'S is true' mean the same as expressions of the form 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ''Dogs bark' is true', or whether they say 'dogs bark'. In the former representation of what they say, the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that ''Dogs bark' is true' without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the 'redundancy theory of truth'.
Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true, yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for a stronger notion is the field of relevance logic.
From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.
But this point of view by no means embraces the whole of the actual process, for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a 'theory'. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the 'truth' of the theory lies.
Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution, rather than in providing a convincing mechanism for genetic change; Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution in the life sciences.
In the 19th century the attempt was made to base ethical reasoning on the presumed facts about evolution; the movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). The premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies. More recently the relationship between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
Once again, evolutionary psychology attempts to ground psychology in evolutionary principles, on which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who 'free-ride' on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E. O. Wilson. The term is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in a complementary relation to competition. From such complementary relationships emerge self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
According to E. O. Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative' to have a sense of higher purpose. Yet it is also clear that the 'gods' in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-view of science and religion. 'Science for its part', said Wilson, 'will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.' The eventual result of the competition between the two world-views, he believes, will be the secularization of the human epic and of religion itself.
Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect 'reality'. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide 'comprehensible' guides to living. In this way, Man's imagination and intellect play vital roles in his survival and evolution.
Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of 'logical positivist' approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the 'explanans' (that which does the explaining) and the 'explanandum' (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler's (1571-1630) laws of planetary motion were explained by being deduced from Newton's laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we make of explanations. These may include, for instance, that we have a 'feel' for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
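Schematically (a textbook illustration, added here), a covering law explanation has the form of a deduction:

    L1, . . ., Lk      (laws of nature)
    C1, . . ., Cm      (statements of initial conditions)
    -----------------
    therefore, E       (the event to be explained)

The statistical variant replaces the strict deduction with the claim that the laws and conditions together confer high probability on E.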
The argument to the best explanation is the view that once we can select the best of competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
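The coin example can be made quantitative with a short calculation (an illustrative sketch added here; the figures 530, 1,000 and 0.53 are those of the example above):

    # How much better does the 'biased' hypothesis explain 530 heads
    # in 1,000 tosses than the 'fair' hypothesis?
    from math import comb

    def binomial_likelihood(p, heads=530, tosses=1000):
        # Probability of exactly `heads` heads when each toss lands
        # heads with probability p.
        return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

    fair = binomial_likelihood(0.5)
    biased = binomial_likelihood(0.53)
    print(f"P(data | fair)   = {fair:.4f}")    # roughly 0.004
    print(f"P(data | biased) = {biased:.4f}")  # roughly 0.025
    print(f"ratio = {biased / fair:.1f}")      # roughly 6

The biased-coin hypothesis explains the data about six times better, yet if our antecedent suspicion of bias is low enough, it remains more sensible overall to judge the coin fair; this is exactly the qualification the principle needs.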
The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problems of logical form, the basis of the division between syntax and semantics, and the problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
On this conception, to understand a sentence is to know its truth-conditions, and the conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts contextually performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
The meaning of a complex expression is a function of the meaning of its constituents. This is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meaning of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms-proper names, indexicals, and certain pronouns-this is done by stating the reference of the terms in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
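As a sketch of how such axioms combine (a standard illustration, not part of the original text), the truth-condition of a simple sentence can be derived from axioms for its parts:

    Axiom 1:  'London' refers to London.
    Axiom 2:  a sentence coupling a name n with 'is beautiful' is true
              if and only if the object n refers to is beautiful.
    Hence:    'London is beautiful' is true if and only if London is beautiful.

It is against this background that the constraint discussed next, that not every true reference axiom will serve, gets its point.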
The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Take the axiom: 'London' refers to the city in which there was a huge fire in 1666. This is a true statement about the reference of 'London'. But it is a consequence of a theory which substitutes this axiom for the simple axiom of our truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name 'London' without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraint in a way which does not presuppose any previous, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.
Since the content of a claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth conditions must provide the substantive account. This charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A. J. Ayer, the later Wittgenstein, Quine, Strawson, Horwich and-confusingly and inconsistently, if this article is correct-Frege himself. But is the minimal theory correct?
The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. An instance such as ''London is beautiful' is true if and only if London is beautiful' is explicable from the truths that 'London' refers to London and that 'is beautiful' is true of an object if and only if it is beautiful. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.
The counterfactual conditional is sometimes known as the subjunctive conditional, insofar as a counterfactual conditional is a conditional of the form 'if p were to happen q would', or 'if p were to have happened q would have happened', where the supposition of 'p' is contrary to the known fact that 'not-p'. Such assertions are nevertheless useful: 'if you had broken the bone, the X-ray would have looked different', or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever 'p' is false, so there would be no division between true and false counterfactuals.
Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: 'if you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble'. In other contexts there is a big difference: 'if Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone else would have' is most probably false.
The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and categorizing them as counterfactuals or not may be of limited use.
A conditional is any proposition of the form 'if p then q'. The condition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either not-p, or q. Stronger conditionals include elements of modality, corresponding to the thought that 'if p is true then q must be true'. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether this flexibility should be explained semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.
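A quick enumeration (an illustrative sketch added here) makes the weakness of material implication vivid: reading 'if p then q' as 'not-p or q' makes the conditional automatically true whenever its antecedent is false, which is why it cannot serve for counterfactuals:

    # Truth table for material implication: 'if p then q' read as 'not-p or q'.
    from itertools import product

    def material_implication(p, q):
        return (not p) or q

    for p, q in product([True, False], repeat=2):
        print(f"p={p!s:5} q={q!s:5}  'if p then q' = {material_implication(p, q)}")
    # Both rows with p=False come out True, so any conditional whose
    # antecedent is known to be false would be counted trivially true.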
We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles S. Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including, for example, belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparently subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F. C. S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart' or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.
Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and continues to play an influential role in the theory of meaning and of truth.
In point of fact, functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It could be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states both of ourselves and of others, which is via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be very different from our own. It may then seem as though beliefs and desires can be 'variably realized' in causal architectures, just as much as they can be in different neurophysiological states.
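The software analogy can be given a deliberately toy illustration (everything in this sketch is invented for exposition): the functional 'role' is fixed by inputs and outputs, and quite different mechanisms may realize it:

    # Toy sketch: one functional role, two 'realizations'.
    # The role of the state is defined by what causes it (input) and
    # what it causes (output), not by the hardware underneath.

    class NeuralRealization:
        def respond(self, tissue_damage: bool) -> str:
            return "withdraw" if tissue_damage else "carry on"

    class SiliconRealization:
        def respond(self, tissue_damage: bool) -> str:
            return "withdraw" if tissue_damage else "carry on"

    # Functionally the two states are indistinguishable: same causal role.
    for state in (NeuralRealization(), SiliconRealization()):
        assert state.respond(True) == "withdraw"

On the functionalist picture, that sameness of role is all there is to sameness of mental state, which is just the 'variable realization' point; the critic's worry is that role-sameness defined this thinly may be too easy to achieve.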
The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
Among American psychologists and philosophers we find William James, who helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by the American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
The Association for International Conciliation first published William James's pacifist statement, 'The Moral Equivalent of War', in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism-a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning-in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The philosophers known as logical positivists, a group influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life-morality and religious belief, for example-are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist-someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists-Peirce, James, and Dewey-as an alternative to Rorty's interpretation of the tradition.
The philosophy of mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.
The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.
In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of an infinite number of distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.
Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysics is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism in respect to the soul and the reality of God.
The 18th-century German philosopher Immanuel Kant published his influential work The Critique of Pure Reason in 1781. Three years later, he expanded on his study of the modes of thinking with an essay entitled 'What is Enlightenment'? In this 1784 essay, Kant challenged readers to 'dare to know', arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.
Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.
These principles are held to be necessary and universal in their application to experience, for in Kant's view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience strictly by knowledge to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.
Some of Kant's most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant's criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism in opposition to Kant's critical transcendentalism.
Since the formation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant's contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories are radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism the will is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analyzed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analyzed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.
In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists (see Analytic and Linguistic Philosophy; Positivism) and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as 'Nothing exists except material particles' and 'Everything is part of one all-encompassing spirit' cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning relevant to human hopes and feelings.
The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable or at least connected with something observable and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual's relationship to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid whether or not its results can be verified objectively.
Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. In the U.S. metaphysics has been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.
In the 17th century, French philosopher René Descartes proposed that only two substances ultimately exist: mind and body. Yet, if the two are entirely distinct, as Descartes believed, how can one substance interact with the other? How, for example, is the intention of a human mind able to cause movement in the person's limbs? The issue of the interaction between mind and body is known in philosophy as the mind-body problem.
Many fields other than philosophy share an interest in the nature of mind. In religion, the nature of mind is connected with various conceptions of the soul and the possibility of life after death. In many abstract theories of mind there is considerable overlap between philosophy and the science of psychology. Once part of philosophy, psychology split off and formed a separate branch of knowledge in the 19th century. While psychology uses scientific experiments to study mental states and events, philosophy uses reasoned arguments and thought experiments in seeking to understand the concepts that underlie mental phenomena. Also influenced by philosophy of mind is the field of artificial intelligence (AI), which endeavours to develop computers that can mimic what the human mind can do. Cognitive science attempts to integrate the understanding of mind provided by philosophy, psychology, AI, and other disciplines. Finally, all of these fields benefit from the detailed understanding of the brain that has emerged through neuroscience in the late 20th century.
Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.
Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds-our sensations, thoughts, memories, desires, and fantasies-in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.
Certain mental phenomena, those we generally call experiences, have a subjective nature-that is, they have certain characteristics we become aware of when we reflect. For instance, there is ‘something it is like’ to feel pain, or have an itch, or see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.
Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other thoughts or objects, which are represented as having certain properties or as being related to one another in a certain way. The belief that California is west of Nevada, for example, is about California and Nevada and represents the former as being west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.
A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.
Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.
Mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Minds are sometimes referred to as consciousness, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.
Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of French philosopher René Descartes in the 17th century. Descartes’s work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things-bodies and minds-are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external sources which in turn affect the brain, affecting mental states. Thus the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, and is known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed together.
In response to the mind-body problem arising from Descartes’s theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.
Most philosophers of mind today are substance monists of a third type: They are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.
Materialists who are property monists believe that there is ultimately only one type of property, although they disagree on whether or not mental properties exist in material form. Some property monists, known as reductive materialists, hold that mental properties exist simply as a subset of relatively complex and nonbasic physical properties of the brain. Reductive materialists have the problem of explaining how the physical states of the brain can be inwardly accessible and have a subjective character, as mental states do. Other property monists, known as eliminative materialists, consider the whole category of mental properties to be a mistake. According to them, mental properties should be treated as discredited postulates of an outmoded theory. Eliminative materialism is difficult for most people to accept, since we seem to have direct knowledge of our own mental phenomena by introspection and because we use the general principles we understand about mental phenomena to predict and explain the behaviour of others.
Philosophy of mind concerns itself with a number of specialized problems. In addition to the mind-body problem, important issues include those of personal identity, immortality, and artificial intelligence.
During much of Western history, the mind has been identified with the soul as presented in Christian theology. According to Christianity, the soul is the source of a person’s identity and is usually regarded as immaterial; thus it is capable of enduring after the death of the body. Descartes’s conception of the mind as a separate, nonmaterial substance fits well with this understanding of the soul. In Descartes’s view, we are aware of our bodies only as the cause of sensations and other mental phenomena. Consequently, our personal essence is more fundamentally mental, and the preservation of the mind after death would constitute our continued existence.
The mind conceived by materialist forms of substance monism does not fit as neatly with this traditional concept of the soul. With materialism, once a physical body is destroyed, nothing enduring remains. Some philosophers think that a concept of personal identity can be constructed that permits the possibility of life after death without appealing to separate immaterial substances. Following in the tradition of 17th-century British philosopher John Locke, these philosophers propose that a person consists of a stream of mental events linked by memory. It is these links of memory, rather than a single underlying substance, that provide the unity of a single consciousness through time. Immortality is conceivable if we think of these memory links as connecting a later consciousness in heaven with an earlier one on earth.
The field of artificial intelligence also raises interesting questions for the philosophy of mind. People have designed machines that mimic or model many aspects of human intelligence, and there are robots currently in use whose behaviour is described in terms of goals, beliefs, and perceptions. Such machines are capable of behaviour that, were it exhibited by a human being, would surely be taken to be free and creative. As an example, in 1996 an IBM computer named Deep Blue won a chess game against Russian world champion Garry Kasparov under international match regulations. Moreover, it is possible to design robots that have some sort of privileged access to their internal states. Philosophers disagree over whether such robots truly think or simply appear to think and whether such robots should be considered to be conscious.
Dualism, in philosophy, the theory that the universe is explicable only as a whole composed of two distinct and mutually irreducible elements. In Platonic philosophy the ultimate dualism is between 'being' and 'nonbeing', that is, between ideas and matter. In the 17th century, dualism took the form of belief in two fundamental substances: mind and matter. French philosopher René Descartes, whose interpretation of the universe exemplifies this belief, was the first to emphasize the irreconcilable difference between thinking substance (mind) and extended substance (matter). The difficulty created by this view was to explain how mind and matter interact, as they apparently do in human experience. This perplexity caused some Cartesians to deny entirely any interaction between the two. They asserted that mind and matter are inherently incapable of affecting each other, and that any reciprocal action between the two is caused by God, who, on the occasion of a change in one, produces a corresponding change in the other. Other followers of Descartes abandoned dualism in favour of monism.
In the 20th century, reaction against the monistic aspects of the philosophy of idealism has to some degree revived dualism. One of the most interesting defences of dualism is that of Anglo-American psychologist William McDougall, who divided the universe into spirit and matter and maintained that good evidence, both psychological and biological, indicates the spiritual basis of physiological processes. French philosopher Henri Bergson in his great philosophic work Matter and Memory likewise took a dualistic position, defining matter as what we perceive with our senses and possessing in itself the qualities that we perceive in it, such as colour and resistance. Mind, on the other hand, reveals itself as memory, the faculty of storing up the past and utilizing it for modifying our present actions, which otherwise would be merely mechanical. In his later writings, however, Bergson abandoned dualism and came to regard matter as an arrested manifestation of the same vital impulse that composes life and mind.
For many people, understanding the place of mind in nature is the greatest philosophical problem. Mind is often thought to be the last domain that stubbornly resists scientific understanding, and philosophers differ over whether they find that a cause for celebration or for scandal. The mind-body problem in the modern era was given its definitive shape by Descartes, although dualism in some form appears wherever there is a religious or philosophical tradition in which the soul may have an existence apart from the body. While most modern philosophers of mind would reject the imaginings that lead us to think that this makes sense, there is no consensus over the best way to integrate our understanding of people as bearers of physical properties on the one hand and as subjects of mental lives on the other.
Occasionalism is the term employed to designate the philosophical system devised by the followers of the 17th-century French philosopher René Descartes, who, in attempting to explain the interrelationship between mind and body, concluded that God is the only cause. The occasionalists began with the assumption that certain actions or modifications of the body are preceded, accompanied, or followed by changes in the mind. This assumed relationship presents no difficulty to the popular conception of mind and body, according to which each entity is supposed to act directly on the other; these philosophers, however, asserting that cause and effect must be similar, could not conceive the possibility of any direct mutual interaction between substances as dissimilar as mind and body.
According to the occasionalists, the action of the mind is not, and cannot be, the cause of the corresponding action of the body. Whenever any action of the mind takes place, God directly produces in connection with that action, and by reason of it, a corresponding action of the body; the converse process is likewise true. This theory did not solve the problem, for if the mind cannot act on the body (matter), then God, conceived as mind, cannot act on matter. Conversely, if God is conceived as other than mind, then he cannot act on mind. A proposed solution to this problem was furnished by exponents of radical empiricism such as the American philosopher and psychologist William James. This theory disposed of the dualism of the occasionalists by denying the fundamental difference between mind and matter.
No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.
Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.
The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.
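Fechner’s contribution can be stated compactly. As a brief illustration (the formula below is standard psychophysics rather than something given in this article), Fechner’s law holds that the magnitude of a sensation grows with the logarithm of stimulus intensity once the threshold is passed:

```latex
% Fechner's law: S is sensation magnitude, I the stimulus intensity,
% I_0 the absolute threshold, and k a constant for the sense modality.
S = k \log \frac{I}{I_0}
```

On this view, equal ratios of stimulus intensity produce equal steps of sensation, which is why thresholds rather than raw intensities became the natural unit of psychophysical measurement.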
The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.
By the 1920s, however, a remarkable revolution had occurred in psychology that was to essentially remove considerations of consciousness from psychological research for some 50 years: Behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behaviour, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.
Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Much of the surge in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness (see Dreaming; Sleep).
During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.
Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now largely been demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.
Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide (LSD), mescaline, and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century, American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work.
As the concept of a direct, simple linkage between environment and behaviour became unsatisfactory in recent decades, interest in altered states of consciousness could be taken as a visible sign of a renewed interest in the topic of consciousness itself. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored. An entirely new area called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often deemphasized in favour of unconscious needs and motivations. Trends can now be seen, though, toward a new emphasis on the nature of states of consciousness.
Perception (psychology), the process by which organisms interpret and organize sensation to produce a meaningful experience of the world. Sensation usually refers to the immediate, relatively unprocessed result of stimulation of sensory receptors in the eyes, ears, nose, tongue, or skin. Perception, on the other hand, better describes one’s ultimate experience of the world and typically involves further processing of sensory input. In practice, sensation and perception are virtually impossible to separate, because they are part of one continuous process.
Our sense organs translate physical energy from the environment into electrical impulses processed by the brain. For example, light, in the form of electromagnetic radiation, causes receptor cells in our eyes to activate and send signals to the brain. But we do not understand these signals as pure energy. The process of perception allows us to interpret them as objects, events, people, and situations.
Without the ability to organize and interpret sensations, life would seem like a meaningless jumble of colours, shapes, and sounds. A person without any perceptual ability would not be able to recognize faces, understand language, or avoid threats. Such a person would not survive for long. In fact, many species of animals have evolved exquisite sensory and perceptual systems that aid their survival.
Organizing raw sensory stimuli into meaningful experiences involves cognition, a set of mental activities that includes thinking, knowing, and remembering. Knowledge and experience are extremely important for perception, because they help us make sense of the input to our sensory systems. To understand these ideas, try to read a passage of text that has been printed upside down.
You could probably read the text, but not as easily as when you read letters in their usual orientation. Knowledge and experience allowed you to understand the text. You could read the words because of your knowledge of letter shapes, and maybe you even have some prior experience in reading text upside down. Without knowledge of letter shapes, you would perceive the text as meaningless shapes, just as people who do not know Chinese or Japanese see the characters of those languages as meaningless shapes. Reading, then, is a form of visual perception.
Note that as you read the upside-down text, you probably did not stop to read every single letter carefully. Instead, you probably perceived whole words and phrases. You may have also used context to help you figure out what some of the words must be. For example, recognizing 'upside' may have helped you predict 'down', because the two words often occur together. For these reasons, you probably overlooked problems with the individual letters: some of them, such as the n in 'down', are mirror images of normal letters. You would have noticed these errors immediately if the letters were right side up, because you have much more experience seeing letters in that orientation.
How people perceive a well-organized pattern or whole, instead of many separate parts, is a topic of interest in Gestalt psychology. According to Gestalt psychologists, the whole is different from the sum of its parts. Gestalt is a German word meaning configuration or pattern.
The three founders of Gestalt psychology were German researchers Max Wertheimer, Kurt Koffka, and Wolfgang Köhler. These men identified a number of principles by which people organize isolated parts of a visual stimulus into groups or whole objects. There are five main laws of grouping: proximity, similarity, continuity, closure, and common fate. A sixth law, that of simplicity, encompasses all of these laws.
Although most often applied to visual perception, the Gestalt laws also apply to perception in other senses. When we listen to music, for example, we do not hear a series of disconnected or random tones. We interpret the music as a whole, relating the sounds to each other based on how similar they are in pitch, how close together they are in time, and other factors. We can perceive melodies, patterns, and form in music. When a song is transposed to another key, we still recognize it, even though all of the notes have changed.
The law of proximity states that the closer objects are to one another, the more likely we are to mentally group them together. In the illustration below, we perceive as groups the boxes that are closest to one another. Note that we do not see the second and third boxes from the left as a pair, because they are spaced farther apart.
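To make the principle concrete, here is a minimal sketch in code (the positions and threshold are invented for illustration, and one dimension stands in for the two-dimensional illustration): elements are grouped together whenever their spacing falls below a threshold.

```python
# A minimal sketch of the law of proximity in one dimension:
# elements closer together than a threshold join the same group.
def group_by_proximity(positions, threshold):
    pos = sorted(positions)
    groups = [[pos[0]]]
    for p in pos[1:]:
        if p - groups[-1][-1] <= threshold:
            groups[-1].append(p)   # close enough: join the current group
        else:
            groups.append([p])     # gap too large: start a new group
    return groups

# Five boxes: the first two sit close together, the last three form a cluster.
print(group_by_proximity([0, 1, 5, 6, 7], threshold=2))  # [[0, 1], [5, 6, 7]]
```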
The law of similarity leads us to link together parts of the visual field that are similar in colour, lightness, texture, shape, or any other quality. That is why, in the following illustration, we perceive rows of objects instead of columns or other arrangements.
The law of continuity leads us to see a line as continuing in a particular direction, rather than making an abrupt turn. In the drawing on the left below, we see a straight line with a curved line running through it. Notice that we do not see the drawing as consisting of the two pieces in the drawing on the right.
According to the law of closure, we prefer complete forms to incomplete forms. Thus, in the drawing below, we mentally close the gaps and perceive a picture of a duck. This tendency allows us to perceive whole objects from incomplete and imperfect forms.
The law of common fate leads us to group together objects that move in the same direction. In the following illustration, imagine that three of the balls are moving in one direction, and two of the balls are moving in the opposite direction. If you saw these in actual motion, you would mentally group the balls that moved in the same direction. Because of this principle, we often see flocks of birds or schools of fish as one unit.
Central to the approach of Gestalt psychologists is the law of prägnanz, or simplicity. This general notion, which encompasses all other Gestalt laws, states that people intuitively prefer the simplest, most stable of possible organizations. For example, look at the illustration below. You could perceive this in a variety of ways: as three overlapping disks; as one whole disk and two partial disks with slices cut out of their right sides; or even as a top view of three-dimensional, cylindrical objects. The law of simplicity states that you will see the illustration as three overlapping disks, because that is the simplest interpretation.
Not only does perception involve organization and grouping, it also involves distinguishing an object from its surroundings. Notice that once you perceive an object, the area around that object becomes the background. For example, when you look at your computer monitor, the wall behind it becomes the background. The object, or figure, is closer to you, and the background, or ground, is farther away.
Gestalt psychologists have devised ambiguous figure-ground relationships, that is, drawings in which the figure and ground can be reversed, to illustrate their point that the whole is different from the sum of its parts. Consider the accompanying illustration entitled “Figure and Ground.” You may see a white vase as the figure, in which case you will see it displayed on a dark ground. However, you may also see two dark faces that point toward one another. Notice that when you do so, the white area of the figure becomes the ground. Even though your perception may alternate between these two possible interpretations, the parts of the illustration are constant. Thus, the illustration supports the Gestalt position that the whole is not determined solely by its parts. The Dutch artist M. C. Escher was intrigued by ambiguous figure-ground relationships.
Although such illustrations may fool our visual systems, people are rarely confused about what they see. In the real world, vases do not change into faces as we look at them. Instead, our perceptions are remarkably stable. Considering that we all experience rapidly changing visual input, the stability of our perceptions is more amazing than the occasional tricks that fool our perceptual systems. How we perceive a stable world is due, in part, to a number of factors that maintain perceptual constancy.
As we view an object, the image it projects on the retinas of our eyes changes with our viewing distance and angle, the level of ambient light, the orientation of the object, and other factors. Perceptual constancy allows us to perceive an object as roughly the same in spite of changes in the retinal image. Psychologists have identified a number of perceptual constancies, including lightness constancy, colour constancy, shape constancy, and size constancy.
Lightness constancy means that our perception of an object’s lightness or darkness remains constant despite changes in illumination. To understand lightness constancy, try the following demonstration. First, take a plain white sheet of paper into a brightly lit room and note that the paper appears to be white. Then, turn out a few of the lights in the room. Note that the paper continues to appear white. Next, if it will not make the room pitch black, turn out some more lights. Note that the paper appears to be white regardless of the actual amount of light energy that enters the eye.
Lightness constancy illustrates an important perceptual principle: Perception is relative. Lightness constancy may occur because the white piece of paper reflects more light than any of the other objects in the room—regardless of the different lighting conditions. That is, you may have determined the lightness or darkness of the paper relative to the other objects in the room. Another explanation, proposed by 19th-century German physiologist Hermann von Helmholtz, is that we unconsciously take the lighting of the room into consideration when judging the lightness of objects.
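One way to make the relational account concrete is Wallach’s ratio principle, offered here as an illustrative formalization rather than as part of the original discussion: the luminance reaching the eye is the product of illumination and surface reflectance, so the ratio between two surfaces is unchanged when the lights are dimmed.

```python
# A sketch of the ratio account of lightness constancy (Wallach's ratio
# principle, supplied as an illustration; the reflectances are assumed values).
# Luminance at the eye = illumination * surface reflectance, so the ratio
# between two surfaces is untouched when the lights are dimmed.
PAPER_REFLECTANCE = 0.90   # white paper reflects most incident light
DESK_REFLECTANCE = 0.30    # an assumed darker surround

for illumination in (1000.0, 200.0, 50.0):   # bright to dim, arbitrary units
    paper = illumination * PAPER_REFLECTANCE
    desk = illumination * DESK_REFLECTANCE
    print(f"illumination {illumination:6.0f}: paper/desk ratio = {paper / desk:.1f}")
# The ratio stays 3.0 throughout, matching the paper's constant perceived whiteness.
```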
Colour constancy is closely related to lightness constancy. Colour constancy means that we perceive the colour of an object as the same despite changes in lighting conditions. You have experienced colour constancy if you have ever worn a pair of sunglasses with coloured lenses. In spite of the fact that the coloured lenses change the colour of light reaching your retina, you still perceive white objects as white and red objects as red. The explanations for colour constancy parallel those for lightness constancy. One proposed explanation is that because the lenses tint everything with the same colour, we unconsciously “subtract” that colour from the scene, leaving the original colours.
Another perceptual constancy is shape constancy, which means that you perceive objects as retaining the same shape despite changes in their orientation. To understand shape constancy, hold a book in front of your face so that you are looking directly at the cover. The rectangular nature of the book should be very clear. Now, rotate the book away from you so that the bottom edge of the cover is much closer to you than the top edge. The image of the book on your retina will now be quite different. In fact, the image will now be trapezoidal, with the bottom edge of the book larger on your retina than the top edge. (Try to see the trapezoid by closing one eye and imagining the cover as a two-dimensional shape.) In spite of this trapezoidal retinal image, you will continue to see the book as rectangular. In large measure, shape constancy occurs because your visual system takes depth into consideration.
Depth perception also plays a major role in size constancy, the tendency to perceive objects as staying the same size despite changes in our distance from them. When an object is near to us, its image on the retina is large. When that same object is far away, its image on the retina is small. In spite of the changes in the size of the retinal image, we perceive the object as the same size. For example, when you see a person at a great distance from you, you do not perceive that person as very small. Instead, you think that the person is of normal size and far away. Similarly, when we view a skyscraper from far away, its image on our retina is very small, yet we perceive the building as very large.
Psychologists have proposed several explanations for the phenomenon of size constancy. First, people learn the general size of objects through experience and use this knowledge to help judge size. For example, we know that insects are smaller than people and that people are smaller than elephants. In addition, people take distance into consideration when judging the size of an object. Thus, if two objects have the same retinal image size, the object that seems farther away will be judged as larger. Even infants seem to possess size constancy.
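The role distance plays can be made explicit with textbook trigonometry; in the sketch below, the person’s height and the viewing distances are assumed example values.

```python
import math

# A sketch of the size-distance relation: an object of size s at distance d
# subtends a visual angle of 2 * arctan(s / 2d); size constancy amounts to
# recovering s from that angle together with an estimate of d.
def visual_angle(size_m, distance_m):
    """Angle in degrees subtended at the eye by an object of the given size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

def inferred_size(angle_deg, distance_m):
    """Recover physical size from visual angle plus an estimate of distance."""
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

near = visual_angle(1.8, 2.0)    # a 1.8 m person at 2 m: about 48.5 degrees
far = visual_angle(1.8, 20.0)    # the same person at 20 m: about 5.2 degrees
# The angular size shrinks roughly tenfold, yet angle plus distance yields
# the same physical size, which is what size constancy achieves.
print(round(inferred_size(near, 2.0), 2), round(inferred_size(far, 20.0), 2))  # 1.8 1.8
```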
Another explanation for size constancy involves the relative sizes of objects. According to this explanation, we see objects as the same size at different distances because they stay the same size relative to surrounding objects. For example, as we drive toward a stop sign, the retinal image sizes of the stop sign relative to a nearby tree remain constant: both images grow larger at the same rate.
Depth perception is the ability to see the world in three dimensions and to perceive distance. Although this ability may seem simple, depth perception is remarkable when you consider that the images projected on each retina are two-dimensional. From these flat images, we construct a vivid three-dimensional world. To perceive depth, we depend on two main sources of information: binocular disparity, a depth cue that requires both eyes; and monocular cues, which allow us to perceive depth with just one eye.
Because our eyes are spaced about 7 cm (about 3 in) apart, the left and right retinas receive slightly different images. This difference in the left and right images is called binocular disparity. The brain integrates these two images into a single three-dimensional image, allowing us to perceive depth and distance.
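The relationship between disparity and depth can be sketched with the standard formula from stereo vision (a computer-vision idealization, not a formula given in this article; the focal length is an assumed nominal value): depth is inversely proportional to disparity.

```python
# The standard stereo relation: Z = f * B / d, with B the eye separation,
# f a nominal focal length, and d the binocular disparity.
EYE_SEPARATION_M = 0.07   # the figure of about 7 cm given above
FOCAL_LENGTH_M = 0.017    # assumed nominal focal length of the human eye

def disparity_from_depth(depth_m):
    """Binocular disparity (in metres on the retina) for an object at depth_m."""
    return FOCAL_LENGTH_M * EYE_SEPARATION_M / depth_m

# Disparity falls off rapidly with depth, which is why the cue is only
# effective over the first few metres.
for z in (0.5, 1.0, 3.0, 10.0):
    print(f"depth {z:4.1f} m -> disparity {disparity_from_depth(z) * 1000:.3f} mm")
```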
For a demonstration of binocular disparity, fully extend your right arm in front of you and hold up your index finger. Now, alternate closing your right eye and then your left eye while focussing on your index finger. Notice that your finger appears to jump or shift slightly, a consequence of the two slightly different images received by each of your retinas. Next, keeping your focus on your right index finger, hold your left index finger up much closer to your eyes. You should notice that the nearer finger creates a double image, which is an indication to your perceptual system that it is at a different depth than the farther finger. When you alternately close your left and right eyes, notice that the nearer finger appears to jump much more than the more distant finger, reflecting a greater amount of binocular disparity.
You have probably experienced a number of demonstrations that use binocular disparity to provide a sense of depth. A stereoscope is a viewing device that presents each eye with a slightly different photograph of the same scene, which generates the illusion of depth. The photographs are taken from slightly different perspectives, one approximating the view from the left eye and the other representing the view from the right eye. The View-Master, a children’s toy, is a modern type of stereoscope.
Filmmakers have made use of binocular disparity to create 3-D (three-dimensional) movies. In 3-D movies, two slightly different images are projected onto the same screen. Viewers wear special glasses that use coloured filters (as for most 3-D movies) or polarizing filters (as for 3-D IMAX movies). The filters separate the image so that each eye receives the image intended for it. The brain combines the two images into a single three-dimensional image. Viewers who watch the film without the glasses see a double image.
Another phenomenon that makes use of binocular disparity is the autostereogram. The autostereogram is a two-dimensional image that can appear three-dimensional without the use of special glasses or a stereoscope. Several different types of autostereograms exist. The most popular, based on the single-image random dot stereogram, seemingly becomes three-dimensional when the viewer relaxes or defocuses the eyes, as if focussing on a point in space behind the image. The two-dimensional image usually consists of random dots or lines, which, when viewed properly, coalesce into a previously unseen three-dimensional image. This type of autostereogram was first popularized in the Magic Eye series of books in the early 1990s, although its invention traces back to 1979. Most autostereograms are produced using computer software. The mechanism by which autostereograms work is complex, but they employ the same principle as the stereoscope and 3-D movies. That is, each eye receives a slightly different image, which the brain fuses into a single three-dimensional image.
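The construction can be sketched in a few lines of code. The sketch below follows the spirit of Julesz-style random-dot stereograms rather than any particular published algorithm; the image dimensions and the two-pixel shift are arbitrary assumptions.

```python
import random

# Two random-dot fields, identical except that a central patch is shifted
# horizontally in one of them. Fused binocularly, the patch appears to float.
WIDTH, HEIGHT, SHIFT = 40, 20, 2

left = [[random.randint(0, 1) for _ in range(WIDTH)] for _ in range(HEIGHT)]
right = [row[:] for row in left]              # start from an identical copy

for y in range(5, 15):                        # a central patch...
    for x in range(10, 30):
        right[y][x - SHIFT] = left[y][x]      # ...is shifted left by SHIFT pixels
    for x in range(30 - SHIFT, 30):
        right[y][x] = random.randint(0, 1)    # refill the uncovered strip

# Neither image alone shows a square; the hidden shape exists only in the
# disparity between the two, exactly as in the stereoscope and 3-D films.
```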
Although binocular disparity is a very useful depth cue, it is only effective over a fairly short range, less than 3 m (10 ft). As our distance from objects increases, the binocular disparity decreases; that is, the images received by each retina become more and more similar. Therefore, for distant objects, your perceptual system cannot rely on binocular disparity as a depth cue. However, you can still determine that some objects are nearer and some farther away because of monocular cues about depth.
To portray a realistic three-dimensional world on a two-dimensional canvas, artists must make use of a variety of depth cues. It was not until the 1400s, during the Italian Renaissance, that artists began to understand linear perspective fully and to portray depth convincingly.
Close one eye and look around you. Notice the richness of depth that you experience. How does this sharp sense of three-dimensionality emerge from input to a single two-dimensional retina? The answer lies in monocular cues, or cues to depth that are effective when viewed with only one eye.
The problem of encoding depth on the two-dimensional retina is quite similar to the problem faced by an artist who wishes to realistically portray depth on a two-dimensional canvas. Some artists are amazingly adept at doing so, using a variety of monocular cues to give their works a sense of depth.
Although there are many kinds of monocular cues, the most important are interposition, atmospheric perspective, texture gradient, linear perspective, size cues, height cues, and motion parallax.
Probably the most important monocular cue is interposition, or overlap. When one object overlaps or partly blocks our view of another object, we judge the covered object as being farther away from us. This depth cue is all around us: look around you and notice how many objects are partly obscured by other objects. To understand how much we rely on interposition, try this demonstration. Hold two pens, one in each hand, a short distance in front of your eyes. Hold the pens several centimetres apart so they do not overlap, but move one pen just slightly farther away from you than the other. Now close one eye. Without binocular vision, notice how difficult it is to judge which pen is more distant. Now, keeping one eye closed, move your hands closer and closer together until one pen moves in front of the other. Notice how interposition makes depth perception much easier.
The air contains microscopic particles of dust and moisture that make distant objects look hazy or blurry. This effect is called atmospheric perspective or aerial perspective, and we use it to judge distance. Atmospheric perspective is what makes distant mountains appear bluish or purple. When you are standing on a mountain, you see brown earth, gray rocks, and green trees and grass, but little that is purple. When you are looking at a mountain from a distance, however, atmospheric particles bend the light so that the rays that reach your eyes lie in the blue or purple part of the colour spectrum. This same effect makes the sky appear blue.
An influential American psychologist, James J. Gibson, was among the first people to recognize the importance of texture gradient in perceiving depth. A texture gradient arises whenever we view a surface from a slant, rather than directly from above. Most surfaces-such as the ground, a road, or a field of flowers-have a texture. The texture becomes denser and less detailed as the surface recedes into the background, and this information helps us to judge depth. For example, look at the floor or ground around you. Notice that the apparent texture of the floor changes over distance. The texture of the floor near you appears more detailed than the texture of the floor farther away. When objects are placed at different locations along a texture gradient, judging their distance from you becomes fairly easy.
Artists have learned to make great use of linear perspective in representing a three-dimensional world on a two-dimensional canvas. Linear perspective refers to the fact that parallel lines, such as railroad tracks, appear to converge with distance, eventually reaching a vanishing point at the horizon. The more the lines converge, the farther away they appear.
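The convergence follows from the standard pinhole-projection model (textbook geometry with an assumed nominal focal length): a point at lateral offset X and distance Z projects to image position x = fX/Z, so the projected separation of parallel rails shrinks in proportion to 1/Z.

```python
# Pinhole projection: a point at lateral offset X and distance Z lands at
# image position x = F * X / Z.
F = 0.017  # assumed nominal focal length in metres

def project(lateral_offset_m, distance_m):
    return F * lateral_offset_m / distance_m

# Rails of standard 1.435 m gauge: their projected separation shrinks toward
# zero with distance, which is the convergence of linear perspective.
for z in (2, 10, 50, 250):
    separation = project(0.7175, z) - project(-0.7175, z)
    print(f"distance {z:3d} m -> image separation {separation * 1000:6.2f} mm")
```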
Another visual cue to apparent depth is closely related to size constancy. According to size constancy, even though the size of the retinal image may change as an object moves closer to us or farther from us, we perceive that object as staying about the same size. We are able to do so because we take distance into consideration. Thus, if we assume that two objects are the same size, we perceive the object that casts a smaller retinal image as farther away than the object that casts a larger retinal image. This depth cue is known as relative size, because we consider the size of an object’s retinal image relative to other objects when estimating its distance.
Another depth cue involves the familiar size of objects. Through experience, we become familiar with the standard size of certain objects, such as houses, cars, aeroplanes, people, animals, books, and chairs. Knowing the size of these objects helps us judge our distance from them and from objects around them.
We perceive points nearer to the horizon as more distant than points that are farther away from the horizon. This means that below the horizon, objects higher in the visual field appear farther away than those that are lower. Above the horizon, objects lower in the visual field appear farther away than those that are higher. For example, in the accompanying picture entitled 'Relative Height', the animals higher in the photo appear farther away than the animals lower in the photo. But above the horizon, the clouds lower in the photo appear farther away than the clouds higher in the photo. This depth cue is called relative elevation or relative height, because when judging an object’s distance, we consider its height in our visual field relative to other objects.
The monocular cues discussed so far (interposition, atmospheric perspective, texture gradient, linear perspective, size cues, and height cues) are sometimes called pictorial cues, because artists can use them to convey three-dimensional information. Another monocular cue cannot be represented on a canvas. Motion parallax occurs when objects at different distances from you appear to move at different rates when you are in motion. The next time you are driving along in a car, pay attention to the rate of movement of nearby and distant objects. The fence near the road appears to whiz past you, while the more distant hills or mountains appear to stay in virtually the same position as you move. The rate of an object’s movement provides a cue to its distance.
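The strength of the cue falls directly out of the geometry. In the sketch below (the speed and distances are assumed example values), an object at perpendicular distance d, passed at speed v, sweeps across the visual field at roughly v/d radians per second.

```python
import math

# Motion parallax: angular speed across the visual field is about v / d,
# so near objects appear to race by while distant ones barely move.
def angular_speed_deg(speed_m_s, distance_m):
    return math.degrees(speed_m_s / distance_m)

V = 25.0  # about 90 km/h, an assumed driving speed
for name, d in (("roadside fence", 5.0), ("farmhouse", 200.0), ("mountain", 5000.0)):
    print(f"{name:14s} at {d:6.0f} m -> {angular_speed_deg(V, d):8.3f} deg/s")
```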
Although motion plays an important role in depth perception, the perception of motion is an important phenomenon in its own right. It allows a baseball outfielder to calculate the speed and trajectory of a ball with extraordinary accuracy. Automobile drivers rely on motion perception to judge the speeds of other cars and avoid collisions. A cheetah must be able to detect and respond to the motion of antelopes, its chief prey, in order to survive.
Initially, you might think that you perceive motion when an object’s image moves from one part of your retina to another part of your retina. In fact, that is what occurs if you are staring straight ahead and a person walks in front of you. Motion perception, however, is not that simple-if it were, the world would appear to move every time we moved our eyes. Keep in mind that you are almost always in motion. As you walk along a path, or simply move your head or your eyes, images from many stationary objects move around on your retina. How does your brain know which movement on the retina is due to your own motion and which is due to motion in the world? Understanding that distinction is the problem that faces psychologists who want to explain motion perception.
One explanation of motion perception involves a form of unconscious inference. That is, when we walk around or move our head in a particular way, we unconsciously expect that images of stationary objects will move on our retina. We discount such movement on the retina as due to our own bodily motion and perceive the objects as stationary.
In contrast, when we are moving and the image of an object does not move on our retina, we perceive that object as moving. Consider what happens as a person moves in front of you and you track that person’s motion with your eyes. You move your head and your eyes to follow the person’s movement, with the result that the image of the person does not move on your retina. The fact that the person’s image stays in roughly the same part of the retina leads you to perceive the person as moving.
Psychologist James J. Gibson thought that this explanation of motion perception was too complicated. He reasoned that perception does not depend on internal thought processes. He thought, instead, that the objects in our environment contain all the information necessary for perception. Think of the aerial acrobatics of a fly. Clearly, the fly is a master of motion and depth perception, yet few people would say the fly makes unconscious inferences. Gibson identified a number of cues for motion detection, including the covering and uncovering of background. Research has shown that motion detection is, in fact, much easier against a background. Thus, as a person moves in front of you, that person first covers and then uncovers portions of the background.
People may perceive motion when none actually exists. For example, motion pictures are really a series of slightly different still pictures flashed on a screen at a rate of 24 pictures, or frames, per second. From this rapid succession of still images, our brain perceives fluid motion, a phenomenon known as stroboscopic movement. For more information about illusions of motion, see Illusion: Illusory Motion.
Experience in interacting with the world is vital to perception. For instance, kittens raised without visual experience or deprived of normal visual experience do not perceive the world accurately. In one experiment, researchers reared kittens in total darkness, except that for five hours a day the kittens were placed in an environment with only vertical lines. When the animals were later exposed to horizontal lines and forms, they had trouble perceiving these forms.
Philosophers have long debated the role of experience in human perception. In the late 17th century, Irish philosopher William Molyneux wrote to his friend, English philosopher John Locke, and asked him to consider the following scenario: Suppose that you could restore sight to a person who was blind. Using only vision, would that person be able to tell the difference between a cube and a sphere, which she or he had previously experienced only through touch? Locke, who emphasized the role of experience in perception, thought the answer was no. Modern science actually allows us to address this philosophical question, because a very small number of people who were blind have had their vision restored with the aid of medical technology.
Two researchers, British psychologist Richard Gregory and British-born neurologist Oliver Sacks, have written about their experiences with men who were blind for a long time due to cataracts and then had their vision restored late in life. When their vision was restored, they were often confused by visual input and were unable to see the world accurately. For instance, they could detect motion and perceive colours, but they had great difficulty with complex stimuli, such as faces. Much of their poor perceptual ability was probably due to the fact that the synapses in the visual areas of their brains had received little or no stimulation throughout their lives. Thus, without visual experience, the visual system does not develop properly.
Visual experience is useful because it creates memories of past stimuli that can later serve as a context for perceiving new stimuli. Thus, you can think of experience as a form of context that you carry around with you.
Ordinarily, when you read, you use the context of your prior experience with words to process the words you are reading. Context may also occur outside of you, as in the surrounding elements in a visual scene. When you are reading and you encounter an unusual word, you may be able to determine the meaning of the word by its context. Your perception depends on the context.
Although context is useful most of the time, on some rare occasions context can lead you to misperceive a stimulus. Look at Example B in the 'Context Effects' illustration. Which of the green circles is larger? You may have guessed that the green circle on the right is larger. In fact, the two circles are the same size. Your perceptual system was fooled by the context of the surrounding red circles.
Against a background of slanted lines, a perfect square appears trapezoidal, that is, wider at the top than at the bottom. This illusion may occur because the lines create a sense of depth, making the top of the square seem farther away and larger.
A visual illusion occurs when your perceptual experience of a stimulus is substantially different from the actual stimulus you are viewing. In the previous example, you saw the green circles as different sizes, even though they were actually the same size. To experience another illusion, look at the illustration entitled 'Zöllner Illusion'. What shape do you see? You may see a trapezoid that is wider at the top, but the actual shape is a square. Such illusions are natural artifacts of the way our visual systems work. As a result, illusions provide important insights into the functioning of the visual system. In addition, visual illusions are fun to experience.
Consider the pair of illusions in the accompanying illustration, “Illusions of Length.” These illusions are called geometrical illusions, because they use simple geometrical relationships to produce the illusory effects. The first illusion, the Müller-Lyer illusion, is one of the most famous illusions in psychology. Which of the two horizontal lines is longer? Although your visual system tells you that the lines are not equal, a ruler would tell you that they are. The second illusion is called the Ponzo illusion. Once again, the two lines do not appear to be equal in length, but they are.
Psychology is the scientific study of behaviour and the mind. This definition contains three elements. The first is that psychology is a scientific enterprise that obtains knowledge through systematic and objective methods of observation and experimentation. Second is that psychologists study behaviour, which refers to any action or reaction that can be measured or observed, such as the blink of an eye, an increase in heart rate, or the unruly violence that often erupts in a mob. Third is that psychologists study the mind, which refers to both conscious and unconscious mental states. These states cannot actually be seen, only inferred from observable behaviour.
Many people think of psychologists as individuals who dispense advice, analyze personality, and help those who are troubled or mentally ill. But psychology is far more than the treatment of personal problems. Psychologists strive to understand the mysteries of human nature: why people think, feel, and act as they do. Some psychologists also study animal behaviour, using their findings to determine laws of behaviour that apply to all organisms and to formulate theories about how humans behave and think.
With its broad scope, psychology investigates an enormous range of phenomena: learning and memory, sensation and perception, motivation and emotion, thinking and language, personality and social behaviour, intelligence, infancy and child development, mental illness, and much more. Furthermore, psychologists examine these topics from a variety of complementary perspectives. Some conduct detailed biological studies of the brain; others explore how we process information; others analyze the role of evolution; and still others study the influence of culture and society.
Psychologists seek to answer a wide range of important questions about human nature: Are individuals genetically predisposed at birth to develop certain traits or abilities? How accurate are people at remembering faces, places, or conversations from the past? What motivates us to seek out friends and sexual partners? Why do so many people become depressed and behave in ways that seem self-destructive? Do intelligence test scores predict success in school, or later in a career? What causes prejudice, and why is it so widespread? Can the mind be used to heal the body? Discoveries from psychology can help people understand themselves, relate better to others, and solve the problems that confront them.
The term psychology comes from two Greek words: psyche, which means “soul,” and logos, “the study of.” These root words were first combined in the 16th century, at a time when the human soul, spirit, or mind was seen as distinct from the body.
Psychology overlaps with other sciences that investigate behaviour and mental processes. Certain parts of the field share much with the biological sciences, especially physiology, the biological study of the functions of living organisms and their parts. Like physiologists, many psychologists study the inner workings of the body from a biological perspective. However, psychologists usually focus on the activity of the brain and nervous system.
The social sciences of sociology and anthropology, which study human societies and cultures, also intersect with psychology. For example, both psychology and sociology explore how people behave when they are in groups. However, psychologists try to understand behaviour from the vantage point of the individual, whereas sociologists focus on how behaviour is shaped by social forces and social institutions. Anthropologists investigate behaviour as well, paying particular attention to the similarities and differences between human cultures around the world.
Psychology is closely connected with psychiatry, which is the branch of medicine specializing in mental illnesses. The study of mental illness is one of the largest areas of research in psychology. Psychiatrists and psychologists differ in their training. A person seeking to become a psychiatrist first obtains a medical degree and then engages in further formal medical education in psychiatry. Most psychologists have a doctoral degree in psychology.
The study of psychology draws on two kinds of research: basic and applied. Basic researchers seek to test general theories and build a foundation of knowledge, while applied psychologists study people in real-world settings and use the results to solve practical human problems. There are five major areas of research: biopsychology, clinical psychology, cognitive psychology, developmental psychology, and social psychology. Both basic and applied research are conducted in each of these fields of psychology.
This section describes basic research and other activities of psychologists in the five major fields of psychology. Applied research is discussed in the Practical Applications of Psychology section of this article.
Magnetic resonance imaging (MRI) reveals structural differences between a normal adult brain, left, and the brain of a person with schizophrenia, right. The schizophrenic brain has enlarged ventricles (fluid-filled cavities), shown in light gray. However, not all people with schizophrenia show this abnormality.
How do body and mind interact? Are body and mind fundamentally different parts of a human being, or are they one and the same, interconnected in important ways? Inspired by this classic philosophical debate, many psychologists specialize in biopsychology, the scientific study of the biological underpinnings of behaviour and mental processes.
At the heart of this perspective is the notion that human beings, like other animals, have an evolutionary history that predisposes them to behave in ways that are uniquely adaptive for survival and reproduction. Biopsychologists work in a variety of subfields. Researchers in the field of ethology observe fish, reptiles, birds, insects, primates, and other animal species in their natural habitats. Comparative psychologists study animal behaviour and make comparisons among different species, including humans. Researchers in evolutionary psychology theorize about the origins of human aggression, altruism, mate selection, and other behaviours. Those in behavioural genetics seek to estimate the extent to which human characteristics such as personality, intelligence, and mental illness are inherited.
Particularly important to biopsychology is a growing body of research in behavioural neuroscience, the study of the links between behaviour and the brain and nervous system. Facilitated by computer-assisted imaging techniques that enable researchers to observe the living human brain in action, this area is generating great excitement. In the related area of cognitive neuroscience, researchers record physical activity in different regions of the brain as the subject reads, speaks, solves math problems, or engages in other mental tasks. Their goal is to pinpoint activities in the brain that correspond to different operations of the mind. In addition, many biopsychologists are involved in psychopharmacology, the study of how drugs affect mental and behavioural functions.
This chart illustrates the percentage of people in the United States who experience a particular mental illness at some point during their lives. The figures are derived from the National Comorbidity Survey, in which researchers interviewed more than 8000 people aged 15 to 54 years. Homeless people and those living in prisons, nursing homes, or other institutions were not included in the survey.
Clinical psychology is dedicated to the study, diagnosis, and treatment of mental illnesses and other emotional or behavioural disorders. More psychologists work in this field than in any other branch of psychology. In hospitals, community clinics, schools, and in private practice, they use interviews and tests to diagnose depression, anxiety disorders, schizophrenia, and other mental illnesses. People with these psychological disorders often suffer terribly. They experience disturbing symptoms that make it difficult for them to work, relate to others, and cope with the demands of everyday life.
Over the years, scientists and mental health professionals have made great strides in the treatment of psychological disorders. For example, advances in psychopharmacology have led to the development of drugs that relieve severe symptoms of mental illness. Clinical psychologists usually cannot prescribe drugs, but they often work in collaboration with a patient’s physician. Drug treatment is often combined with psychotherapy, a form of intervention that relies primarily on verbal communication to treat emotional or behavioural problems. Over the years, psychologists have developed many different forms of psychotherapy. Some forms, such as psychoanalysis, focus on resolving internal, unconscious conflicts stemming from childhood and past experiences. Other forms, such as cognitive and behavioural therapies, focus more on the person’s current level of functioning and try to help the individual change distressing thoughts, feelings, or behaviours.
In addition to studying and treating mental disorders, many clinical psychologists study the normal human personality and the ways in which individuals differ from one another. Still others administer a variety of psychological tests, including intelligence tests and personality tests. These tests are commonly given to individuals in the workplace or in school to assess their interests, skills, and level of functioning. Clinical psychologists also use tests to help them diagnose people with different types of psychological disorders.
The field of counselling psychology is closely related to clinical psychology. Counselling psychologists may treat mental disorders, but they more commonly treat people with less-severe adjustment problems related to marriage, family, school, or career. Many other types of professionals care for and treat people with psychological disorders, including psychiatrists, psychiatric social workers, and psychiatric nurses.
To take the Stroop test, name aloud each colour in the two columns at left as quickly as you can. Next, look at the right side of the illustration and quickly name the colours in which the words are printed. Which task took longer to complete? The test, devised in 1935 by American psychologist John Stroop, shows that people cannot help but process word meanings, and that this processing interferes with the colour-naming task.
How do people learn from experience? How and where in the brain are visual images, facts, and personal memories stored? What causes forgetting? How do people solve problems or make difficult life decisions? Does language limit the way people think? And to what extent are people influenced by information outside of conscious awareness?
These are the kinds of questions posed within cognitive psychology, the scientific study of how people acquire, process, and utilize information. Cognition refers to the process of knowing and encompasses nearly the entire range of conscious and unconscious mental processes: sensation and perception, conditioning and learning, attention and consciousness, sleep and dreaming, memory and forgetting, reasoning and decision making, imagining, problem solving, and language.
Decades ago, the invention of digital computers gave cognitive psychologists a powerful new way of thinking about the human mind. They began to see human beings as information processors who receive input, process and store information, and produce output. This approach became known as the information-processing model of cognition. As computers have become more sophisticated, cognitive psychologists have extended the metaphor. For example, most researchers now reject the idea that information is processed in linear, sequential steps. Instead they find that the human mind is capable of parallel processing, in which multiple operations are carried out simultaneously.
In this information-processing model of memory, information that enters the brain is briefly recorded in sensory memory. If we focus our attention on it, the information may become part of working memory (also called short-term memory), where it can be manipulated and used. Through encoding techniques such as repetition and rehearsal, information may be transferred to long-term memory. Retrieving long-term memories makes them active again in working memory.
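The flow of this model can be made concrete with a short sketch. The following Python fragment is only a toy illustration of the stages just described, not a validated cognitive simulation; the class, its method names, and the seven-item capacity limit are assumptions chosen for the example.

```python
# A toy sketch of the information-processing model of memory described above:
# stimuli enter sensory memory, attention moves them to working memory,
# rehearsal encodes them into long-term memory, and retrieval reactivates them.

class MemorySystem:
    WORKING_MEMORY_CAPACITY = 7  # illustrative limit, not a measured constant

    def __init__(self):
        self.sensory_memory = []      # brief, raw record of incoming stimuli
        self.working_memory = []      # small set of items currently in use
        self.long_term_memory = set()

    def sense(self, stimulus):
        """All input is briefly registered in sensory memory."""
        self.sensory_memory.append(stimulus)

    def attend(self, stimulus):
        """Attention moves an item from sensory to working memory."""
        if (stimulus in self.sensory_memory
                and len(self.working_memory) < self.WORKING_MEMORY_CAPACITY):
            self.working_memory.append(stimulus)

    def rehearse(self, stimulus):
        """Rehearsal encodes a working-memory item into long-term memory."""
        if stimulus in self.working_memory:
            self.long_term_memory.add(stimulus)

    def retrieve(self, stimulus):
        """Retrieval makes a long-term memory active again in working memory."""
        if stimulus in self.long_term_memory and stimulus not in self.working_memory:
            self.working_memory.append(stimulus)
        return stimulus in self.working_memory

memory = MemorySystem()
memory.sense("phone number")
memory.attend("phone number")
memory.rehearse("phone number")
memory.working_memory.clear()           # attention shifts elsewhere
print(memory.retrieve("phone number"))  # True: reactivated from the long-term store
```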
Are people programmed by inborn biological dispositions? Or is an individual's fate molded by culture, family, peers, and other socializing influences within the environment? These questions about the roles of nature and nurture are central to the study of human development.
An incredibly complex array of influences, including families, acquaintances, mass media, and society as a whole, helps determine the moral development of children. Although a rash of violent incidents in American schools in the late 1990s focussed attention on deviant youth behaviour, the vast majority of children seem to function harmoniously with others. In this August 1999 article from Scientific American, William Damon, director of the Center on Adolescence at Stanford University in California, explores recent findings on how young people develop morality.
Developmental psychology focuses on the changes that come with age. By comparing people of different ages, and by tracking individuals over time, researchers in this area study the ways in which people mature and change over the life span. Within this area, those who specialize in child development or child psychology study physical, intellectual, and social development in fetuses, infants, children, and adolescents. Recognizing that human development is a lifelong process, other developmental psychologists study the changes that occur throughout adulthood. Still others specialize in the study of old age, even the process of dying.
A 'shock generator', top, was used by American psychologist Stanley Milgram in experiments designed to test the obedience of people to authority. An experimenter instructed subjects to administer what they believed were painful electric shocks to Mr. Wallace, bottom, an accomplice of the experimenter who was strapped into a chair and connected to the generator by electrodes on his skin. No actual shocks occurred. The experimenter ordered the subjects to continue as the shocks increased to a level the subjects believed was dangerous or even lethal. In Milgram’s initial study, 65 percent of people obeyed the experimenter and delivered the maximum shock of 450 volts. Milgram discusses his conclusions in this sound clip.
Social psychology is the scientific study of how people think, feel, and behave in social situations. Researchers in this field ask questions such as, How do we form impressions of others? How are people persuaded to change their attitudes or beliefs? What causes people to conform in group situations? What leads someone to help or ignore a person in need? Under what circumstances do people obey or resist orders?
By observing people in real-world social settings, and by carefully devising experiments to test people’s social behaviour, social psychologists learn about the ways people influence, perceive, and interact with one another. The study of social influence includes topics such as conformity, obedience to authority, the formation of attitudes, and the principles of persuasion. Researchers interested in social perception study how people come to know and evaluate one another, how people form group stereotypes, and the origins of prejudice. Other topics of particular interest to social psychologists include physical attraction, love and intimacy, aggression, altruism, and group processes. Many social psychologists are also interested in cultural influences on interpersonal behaviour.
Whereas basic researchers test theories about mind and behaviour, applied psychologists are motivated by a desire to solve practical human problems. Four particularly active areas of application are health, education, business, and law.
Today, many psychologists work in the emerging area of health psychology, the application of psychology to the promotion of physical health and the prevention and treatment of illness. Researchers in this area have shown that human health and well-being depend on both biological and psychological factors.
Many psychologists in this area study psychophysiological disorders (also called psychosomatic disorders), conditions that are brought on or influenced by psychological states, most often stress. These disorders include high blood pressure, headaches, asthma, and ulcers. Researchers have discovered that chronic stress is associated with an increased risk of coronary heart disease. In addition, stress can compromise the body's immune system and increase susceptibility to illness.
Health psychologists also study how people cope with stress. They have found that people who have family, friends, and other forms of social support are healthier and live longer than those who are more isolated. Other researchers in this field examine the psychological factors that underlie smoking, drinking, drug abuse, risky sexual practices, and other behaviours harmful to health.
Psychologists in all branches of the discipline contribute to our understanding of teaching, learning, and education. Some help develop standardized tests used to measure academic aptitude and achievement. Others study the ages at which children become capable of attaining various cognitive skills, the effects of rewards on their motivation to learn, computerized instruction, bilingual education, learning disabilities, and other relevant topics. Perhaps the best-known application of psychology to the field of education occurred in 1954 when, in the case of Brown v. Board of Education, the Supreme Court of the United States outlawed the segregation of public schools by race. In its ruling, the Court cited psychological studies suggesting that segregation had a damaging effect on black students and tended to encourage prejudice.
In addition to the contributions of psychology as a whole, two fields within psychology focus exclusively on education: educational psychology and school psychology. Educational psychologists seek to understand and improve the teaching and learning process within the classroom and other educational settings. Educational psychologists study topics such as intelligence and ability testing, student motivation, discipline and classroom management, curriculum plans, and grading. They also test general theories about how students learn most effectively. School psychologists work in elementary and secondary school systems administering tests, making placement recommendations, and counselling children with academic or emotional problems.
In the business world, psychology is applied in the workplace and in the marketplace. Industrial-organizational (I-O) psychology focuses on human behaviour in the workplace and other organizations. I-O psychologists conduct research, teach in business schools or universities, and work in private industry. Many I-O psychologists study the factors that influence worker motivation, satisfaction, and productivity. Others study the personal traits and situations that foster great leadership. Still others focus on the processes of personnel selection, training, and evaluation. Studies have shown, for example, that face-to-face interviews sometimes result in poor hiring decisions and may be biased by the applicant’s gender, race, and physical attractiveness. Studies have also shown that certain standardized tests can help to predict on-the-job performance. See Industrial-Organizational Psychology.
Consumer psychology is the study of human decision making and behaviour in the marketplace. In this area, researchers analyze the effects of advertising on consumers’ attitudes and buying habits. Consumer psychologists also study various aspects of marketing, such as the effects of packaging, price, and other factors that lead people to purchase one product rather than another.
Many psychologists today work in the legal system. They consult with attorneys, testify in court as expert witnesses, counsel prisoners, teach in law schools, and research various justice-related issues. Sometimes referred to as forensic psychologists, those who apply psychology to the law study a range of issues, including jury selection, eyewitness testimony, confessions to police, lie-detector tests, the death penalty, criminal profiling, and the insanity defence.
Studies in forensic psychology have helped to illuminate weaknesses in the legal system. For example, based on trial-simulation experiments, researchers have found that jurors are often biased by facts not in evidence, that is, facts the judge tells them to disregard. In studying eyewitness testimony, researchers have staged mock crimes and asked witnesses to identify the assailant or recall other details. These studies have revealed that under certain conditions eyewitnesses are highly prone to error.
Psychologists in this area often testify in court as expert witnesses. In cases involving the insanity defence, forensic clinical psychologists are often called to court to give their opinion about whether individual defendants are sane or insane. Used as a legal defence, insanity means that defendants, because of a mental disorder, cannot appreciate the wrongfulness of their conduct or control it. Defendants who are legally insane at the time of the offense may be absolved of criminal responsibility for their conduct and judged not guilty. Psychologists are often called to testify in court on other controversial matters as well, including the accuracy of eyewitness testimony, the mental competence (fitness) of defendants to stand trial, and the reliability of early childhood memories.
Psychology has applications in many other domains of human life. Environmental psychologists focus on the relationship between people and their physical surroundings. They study how street noise, heat, architectural design, population density, and crowding affect people’s behaviour and mental health. In a related field, human factors psychologists work on the design of appliances, furniture, tools, and other manufactured items in order to maximize their comfort, safety, and convenience. Sports psychologists advise athletes and study the physiological, perceptual-motor, motivational, developmental, and social aspects of athletic performance. Other psychologists specialize in the study of political behaviour, religion, sexuality, or behaviour in the military.
Psychologists from all areas of specialization use the scientific method to test their theories about behaviour and mental processes. A theory is an organized set of principles that is designed to explain and predict some phenomenon. Good theories also provide specific testable predictions, or hypotheses, about the relation between two or more variables. Formulating a hypothesis to be tested is the first important step in conducting research.
Over the years, psychologists have devised numerous ways to test their hypotheses and theories. Many studies are conducted in a laboratory, usually located at a university. The laboratory setting allows researchers to control what happens to their subjects and make careful and precise observations of behaviour. For example, a psychologist who studies memory can bring volunteers into the lab, ask them to memorize a list of words or pictures, and then test their recall of that material seconds, minutes, or days later.
As indicated by the term field research, studies may also be conducted in real-world locations. For example, a psychologist investigating the reliability of eyewitness testimony might stage phony crimes in the street and then ask unsuspecting bystanders to identify the culprit from a set of photographs. Psychologists observe people in a wide variety of other locations outside the laboratory, including classrooms, offices, hospitals, college dormitories, bars, restaurants, and prisons.
In both laboratory and field settings, psychologists conduct their research using a variety of methods. Among the most common methods are archival studies, case studies, surveys, naturalistic observations, correlational studies, experiments, literature reviews, and measures of brain activity.
One way to learn about people is through archival studies, an examination of existing records of human activities. Psychological researchers often examine old newspaper stories, medical records, birth certificates, crime reports, popular books, and artwork. They may also examine statistical trends of the past, such as crime rates, birth rates, marriage and divorce rates, and employment rates. The strength of such measures is that, because researchers observe people only secondhand, they cannot unwittingly influence the subjects by their presence. However, available records of human activity are not always complete or detailed enough to be useful.
Archival studies are particularly valuable for examining cultural or historical trends. For example, in one study of physical attractiveness, researchers wanted to know if American standards of female beauty have changed over several generations. These researchers looked through two popular women’s magazines between 1901 and 1981 and examined the measurements of the female models. They found that “curvaceousness” (as measured by the bust-to-waist ratio) varied over time, with a boyish, slender look considered desirable in some time periods but not in others.
Sometimes psychologists interview, test, observe, and investigate the backgrounds of specific individuals in detail. Such case studies are conducted when researchers believe that an in-depth look at one individual will reveal something important about people in general.
Case studies often take a great deal of time to complete, and the results may be limited by the fact that the subject is atypical. Yet case studies have played a prominent role in the development of psychology. Austrian physician Sigmund Freud based his theory of psychoanalysis on his experiences with troubled patients. Swiss psychologist Jean Piaget first began to formulate a theory of intellectual development by questioning his own children. Neuroscientists learn about how the human brain works by testing patients who have suffered brain damage. Cognitive psychologists learn about human intelligence by studying child prodigies and other gifted individuals. Social psychologists learn about group decision making by analyzing the policy decisions of government and business groups. When an individual is exceptional in some way, or when a hypothesis can be tested only through intensive, long-term observation, the case study is a valuable method.
An electroencephalogram, or EEG, is a recording of the electrical activity of the cerebral cortex of the brain. An EEG is made by attaching electrodes to the scalp, then collecting, amplifying, and recording the electrical impulses of the brain.
Biopsychologists interested in the links between brain and behaviour use a variety of specialized techniques in their research. One approach is to observe and test patients who have suffered damage to a specific region of the brain to determine what mental functions and behaviours were affected by that damage. British-born neurologist Oliver Sacks has written several books in which he describes case studies of brain-damaged patients who exhibited specific deficits in their speech, memory, sleep, and even in their personalities.
This positron emission tomography (PET) scan of the brain shows the activity of brain cells in the resting state and during three types of auditory stimulation. PET uses radioactive substances introduced into the brain to measure such brain functions as cerebral metabolism, blood flow and volume, oxygen use, and the formation of neurotransmitters. This imaging method collects data from many different angles, feeding the information into a computer that produces a series of cross-sectional images.
A second approach is to physically alter the brain and measure the effects of that change on behaviour. The alteration can be achieved in different ways. For example, animal researchers often damage or destroy a specific region of a laboratory animal’s brain through surgery. Other researchers might stimulate or inhibit activity in the brain through the use of drugs or electrical stimulation.
This magnetic resonance imaging (MRI) scan of a normal adult head shows the brain, airways, and soft tissues of the face. The large cerebral cortex, appearing in yellow and green, forms the bulk of the brain tissue; the circular cerebellum, centre left, in red, and the elongated brainstem, centre, in red, are also prominently shown.
Another way to study the relationship between the brain and behaviour is to record the activity of the brain with machines while a subject engages in certain behaviours or activities. One such instrument is the electroencephalograph, a device that can detect, amplify, and record the level of electrical activity in the brain by means of metal electrodes taped to the scalp.
Advances in technology in the early 1970s allowed psychologists to see inside the living human brain for the first time without physically cutting into it. Today, psychologists use a variety of sophisticated brain-imaging techniques. The computerized axial tomography (CT or CAT) scan provides a computer-enhanced X-ray image of the brain. The more advanced positron emission tomography (PET) scan tracks the level of activity in specific parts of the brain by measuring the amount of glucose being used there. These measurements are then fed to a computer, which produces a colour-coded image of brain activity. Another technique is magnetic resonance imaging (MRI), which produces high-resolution cross-sectional images of the brain. A high-speed version of MRI known as functional MRI produces moving images of the brain as its activity changes in real time. These relatively new brain imaging techniques have generated great excitement, because they allow researchers to identify parts of the brain that are active while people read, speak, listen to music, solve math problems, and engage in other mental activities.
In contrast with the in-depth study of one person, surveys describe a specific population or group of people. Surveys involve asking people a series of questions about their behaviours, thoughts, or opinions. Surveys can be conducted in person, over the phone, or through the mail. Most surveys study a specific group, for example, college students, working mothers, men, or homeowners. Rather than questioning every person in the group, survey researchers choose a representative sample of people and generalize the findings to the larger population.
Surveys may pertain to almost any topic. Often surveys ask people to report their feelings about various social and political issues, the TV shows they watch, or the consumer products they purchase. Surveys are also used to learn about people’s sexual practices; to estimate the use of cigarettes, alcohol, and other drugs; and to approximate the proportion of people who experience feelings of life satisfaction, loneliness, and other psychological states that cannot be directly observed.
Surveys must be carefully designed and conducted to ensure their accuracy. The results can be influenced, and biased, by two factors: who the respondents are and how the questions are asked. For a survey to be accurate, the sample being questioned must be representative of the population on key characteristics such as sex, race, age, region, and cultural background. To ensure similarity to the larger population, survey researchers usually try to obtain a random sample, a method of selection in which everyone in the population has an equal chance of being chosen.
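This selection rule is easy to express in code. The sketch below is a minimal illustration using Python's standard library; the population list is hypothetical, standing in for a real sampling frame such as a telephone or address registry.

```python
# A minimal sketch of drawing a simple random sample, in which every member
# of the population has an equal chance of selection.
import random

population = [f"person_{i}" for i in range(100_000)]  # hypothetical sampling frame

random.seed(42)                              # fixed seed so the example is reproducible
sample = random.sample(population, k=1_000)  # each person equally likely to be chosen

print(len(sample), sample[:3])
```

In practice, polling organizations use more elaborate designs, such as stratified sampling, but the equal-chance principle shown here is the foundation.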
When the sample is not random, the results can be misleading. For example, prior to the 1936 United States presidential election, pollsters for the magazine Literary Digest mailed postcards to more than 10 million people who were listed in telephone directories or as registered owners of automobiles. The cards asked for whom they intended to vote. Based on the more than 2 million ballots that were returned, the Literary Digest predicted that Republican candidate Alfred M. Landon would win in a landslide over Democrat Franklin D. Roosevelt. At the time, however, more Republicans than Democrats owned telephones and automobiles, skewing the poll results. In the election, Landon won only two states.
The results of survey research can also be influenced by the way that questions are asked. For example, when asked about 'welfare', a majority of Americans in one survey said that the government spends too much money. But when asked about 'assistance to the poor', significantly fewer people gave this response.
In naturalistic observation, the researcher observes people as they behave in the real world. The researcher simply records what occurs and does not intervene in the situation. Psychologists use naturalistic observation to study the interactions between parents and children, doctors and patients, police and citizens, and managers and workers.
Naturalistic observation is common in anthropology, in which field workers seek to understand the everyday life of a culture. Ethologists, who study the behaviour of animals in their natural habitat, also use this method. For example, British ethologist Jane Goodall spent many years in African jungles observing chimpanzees: their social structure, courting rituals, struggles for dominance, eating habits, and other behaviours. Naturalistic observation is also common among developmental psychologists who study social play, parent-child attachments, and other aspects of child development. These researchers observe children at home, in school, on the playground, and in other settings.
Case studies, surveys, and naturalistic observations are used to describe behaviour. Correlational studies go a step further: they are designed to find statistical connections, or correlations, between variables so that some factors can be used to predict others.
A correlation is a statistical measure of the extent to which two variables are associated. A positive correlation exists when two variables increase or decrease together. For example, frustration and aggression are positively correlated, meaning that as frustration rises, so do acts of aggression. More of one means more of the other. A negative correlation exists when increases in one variable are accompanied by decreases in the other, and vice versa. For example, friendships and stress-induced illness are negatively correlated, meaning that the more close friends a person has, the fewer stress-related illnesses the person suffers. More of one means less of the other.
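One standard measure of association is the Pearson correlation coefficient, which ranges from -1 (perfect negative correlation) through 0 (no association) to +1 (perfect positive correlation). The sketch below computes it for two invented data sets that mirror the frustration and friendship examples above; the numbers are made up purely for illustration.

```python
# A minimal sketch of the Pearson correlation coefficient: the covariance of
# two variables divided by the product of their standard deviations.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

frustration = [1, 2, 3, 4, 5]
aggression = [2, 3, 5, 6, 8]      # rises with frustration
close_friends = [0, 2, 4, 6, 8]
illnesses = [7, 6, 4, 3, 1]       # falls as friendships rise

print(pearson_r(frustration, aggression))    # close to +1: positive correlation
print(pearson_r(close_friends, illnesses))   # close to -1: negative correlation
```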
Based on correlational evidence, researchers can use one variable to make predictions about another variable. But researchers must use caution when drawing conclusions from correlations. It is natural, but incorrect, to assume that because one variable predicts another, the first must have caused the second. For example, one might assume that frustration triggers aggression, or that friendships foster health. Regardless of how intuitive or accurate these conclusions may be, correlation does not prove causation. Thus, although it is possible that frustration causes aggression, there are other ways to interpret the correlation. For example, it is possible that aggressive people are more likely to suffer social rejection and become frustrated as a result.
Correlations enable researchers to predict one variable from another. But to determine if one variable actually causes another, psychologists must conduct experiments. In an experiment, the psychologist manipulates one factor in a situation, keeping other aspects of the situation constant, and then observes the effect of the manipulation on behaviour. The people whose behaviour is being observed are the subjects of the experiment. The factor that an experimenter varies (the proposed cause) is known as the independent variable, and the behaviour being measured (the proposed effect) is called the dependent variable. In a test of the hypothesis that frustration triggers aggression, frustration would be the independent variable, and aggression the dependent variable.
There are three requirements for conducting a valid scientific experiment: (1) control over the independent variable, (2) the use of a comparison group, and (3) the random assignment of subjects to conditions. In its most basic form, then, a typical experiment compares a large number of subjects who are randomly assigned to experience one condition with a group of similar subjects who are not. Those who experience the condition compose the experimental group, and those who do not make up the control group. If the two groups differ significantly in their behaviour during the experiment, that difference can be attributed to the presence of the condition, or independent variable. For example, to test the hypothesis that frustration triggers aggression, one group of researchers brought subjects into a laboratory, impeded their efforts to complete an important task (other subjects in the experiment were not impeded), and measured their aggressiveness toward another person. These researchers found that subjects who had been frustrated were more aggressive than those who had not been frustrated.
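The logic of this design can be sketched in code. The example below, with entirely invented data and an invented scoring function, randomly assigns subjects to an experimental (frustrated) or control condition and compares the groups' mean aggression scores; a real analysis would also apply a test of statistical significance.

```python
# A minimal sketch of a randomized experiment: random assignment to conditions,
# manipulation of the independent variable (frustration), and comparison of the
# dependent variable (an aggression score) between groups.
import random

random.seed(7)
subjects = list(range(40))
random.shuffle(subjects)                     # random assignment to conditions
experimental, control = subjects[:20], subjects[20:]

def aggression_score(subject, frustrated):
    # Hypothetical measurement: a noisy baseline, plus an effect of frustration
    # that we build in solely so the example has something to detect.
    baseline = random.gauss(5.0, 1.0)
    return baseline + (2.0 if frustrated else 0.0)

def mean(xs):
    return sum(xs) / len(xs)

exp_scores = [aggression_score(s, frustrated=True) for s in experimental]
ctl_scores = [aggression_score(s, frustrated=False) for s in control]

print(f"experimental mean: {mean(exp_scores):.2f}")  # higher, by construction
print(f"control mean:      {mean(ctl_scores):.2f}")
```

Because assignment is random, any reliable difference between the two means can be attributed to the manipulated condition rather than to pre-existing differences between the groups.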
Psychologists use many different methods in their research. Yet no single experiment can fully prove a hypothesis, so the science of psychology builds slowly over time. First, a new discovery must be replicated. Replication refers to the process of conducting a second, nearly identical study to see if the initial findings can be repeated. If so, then researchers try to determine if these findings can be applied, transferred, or generalized to other settings. Generalizability refers to the extent to which a finding obtained under one set of conditions can also be obtained at another time, in another place, and in other populations.
Because the science of psychology proceeds in small increments, many studies must be conducted before clear patterns emerge. To summarize and interpret an entire body of research, psychologists rely on two methods. One method is a narrative review of the literature, in which a reviewer subjectively evaluates the strengths and weaknesses of the various studies on a topic and argues for certain conclusions. Another method is meta-analysis, a statistical procedure used to combine the results from many different studies. By meta-analysing a body of research, psychologists can often draw precise conclusions concerning the strength and breadth of support for a hypothesis.
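One common meta-analytic calculation is a fixed-effect, inverse-variance-weighted average of effect sizes, in which more precise studies count more heavily. The sketch below applies it to three invented studies; actual meta-analyses involve many more studies and additional checks, such as tests for heterogeneity.

```python
# A minimal sketch of a fixed-effect meta-analysis: pool effect sizes from
# several studies, weighting each by the inverse of its variance.
from math import sqrt

# (effect size, variance of the effect size) for each hypothetical study
studies = [(0.40, 0.04), (0.25, 0.02), (0.55, 0.09)]

weights = [1.0 / var for _, var in studies]   # precise studies get more weight
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1.0 / sum(weights))          # standard error of the pooled estimate

print(f"pooled effect size: {pooled:.3f}")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```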
Psychological research involving human subjects raises ethical concerns about the subject's right to privacy, the possible harm or discomfort caused by experimental procedures, and the use of deception. Over the years, psychologists have established various ethical guidelines. The American Psychological Association recommends that researchers (1) tell prospective subjects what they will experience so they can give informed consent to participate; (2) instruct subjects that they may withdraw from the study at any time; (3) minimize all harm and discomfort; (4) keep the subjects’ responses and behaviours confidential; and (5) debrief subjects who were deceived in some way by fully explaining the research after they have participated. Some psychologists argue that such rules should never be broken. Others say that some degree of flexibility is needed in order to study certain important issues, such as the effects of stress on test performance.
Laboratory experiments that use rats, mice, rabbits, pigeons, monkeys, and other animals are an important part of psychology, just as in medicine. Animal research serves three purposes in psychology: to learn more about certain types of animals, to discover general principles of behaviour that pertain to all species, and to study variables that cannot ethically be tested with human beings. But is it ethical to experiment on animals?
Some animal rights activists believe that it is wrong to use animals in experiments, particularly in those that involve surgery, drugs, social isolation, food deprivation, electric shock, and other potentially harmful procedures. These activists see animal experimentation as unnecessary and question whether results from such research can be applied to humans. Many activists also argue that like humans, animals have the capacity to suffer and feel pain. In response to these criticisms, many researchers point out that animal experimentation has helped to improve the quality of human life. They note that animal studies have contributed to the treatment of anxiety, depression, and other mental disorders. Animal studies have also contributed to our understanding of conditions such as Alzheimer’s disease, obesity, alcoholism, and the effects of stress on the immune system. Most researchers follow strict ethical guidelines that require them to minimize pain and discomfort to animals and to use the least invasive procedures possible. In addition, federal animal-protection laws in the United States require researchers to provide humane care and housing of animals and to tend to the psychological well-being of primates used in research.
One of the youngest sciences, psychology did not emerge as a formal discipline until the late 19th century. But its roots extend to the ancient past. For centuries, philosophers and religious scholars have wondered about the nature of the mind and the soul. Thus, the history of psychological thought begins in philosophy.
From about 600 to 300 BC, Greek philosophers inquired about a wide range of psychological topics. They were especially interested in the nature of knowledge and how human beings come to know the world, a field of philosophy known as epistemology. The Greek philosopher Socrates and his followers, Plato and Aristotle, wrote about pleasure and pain, knowledge, beauty, desire, free will, motivation, common sense, rationality, memory, and the subjective nature of perception. They also theorized about whether human traits are innate or the product of experience. In the field of ethics, philosophers of the ancient world probed a variety of psychological questions: Are people inherently good? How can people attain happiness? What motives or drives do people have? Are human beings naturally social?
Second-century physician Galen was one of the most influential figures in ancient medicine, second in importance only to Hippocrates. Using animal dissection and other means, Galen proposed numerous theories about the function of different parts of the human body, most notably the brain, heart, and liver. He also derived an impressive understanding of the differences between veins and arteries. In the selection below, Galen discusses his idea that the optimal state, or “constitution,” of the body should be a perfect balance of various internal and external components.
Early thinkers also considered the causes of mental illness. Many ancient societies thought that mental illness resulted from supernatural causes, such as the anger of gods or possession by evil spirits. Both Socrates and Plato focussed on psychological forces as the cause of mental disturbance. For example, Plato thought madness results when a person’s irrational, animal-like psyche (mind or soul) overwhelms the intellectual, rational psyche. The Greek physician Hippocrates viewed mental disorders as stemming from natural causes, and he developed the first classification system for mental disorders. Galen, a Greek physician who lived in the 2nd century AD, echoed this belief in a physiological basis for mental disorders. He thought they resulted from an imbalance of the four bodily humours: black bile, yellow bile, blood, and phlegm. For example, Galen thought that melancholia (depression) resulted from a person having too much black bile.
More recently, many other men and women contributed to the birth of modern psychology. In the 1600s French mathematician and philosopher René Descartes theorized that the body and mind are separate entities. He regarded the body as a physical entity and the mind as a spiritual entity, and believed the two interacted only through the pineal gland, a tiny structure at the base of the brain. This position became known as dualism. According to dualism, the behaviour of the body is determined by mechanistic laws and can be measured in a scientific manner. But the mind, which transcends the material world, cannot be similarly studied.
English philosophers Thomas Hobbes and John Locke disagreed. They argued that all human experiences, including sensations, images, thoughts, and feelings, are physical processes occurring within the brain and nervous system. Therefore, these experiences are valid subjects of study. In this view, which later became known as monism, the mind and body are one and the same. Today, in light of years of research indicating that the physical and mental aspects of the human experience are intertwined, most psychologists reject a rigid dualist position. See Philosophy of Mind; Dualism; Monism.
Many philosophers of the past also debated the question of whether human knowledge is inborn or the product of experience. Nativists believed that certain elementary truths are innate to the human mind and need not be gained through experience. In contrast, empiricists believed that at birth, a person’s mind is like a tabula rasa, or blank slate, and that all human knowledge ultimately comes from sensory experience. Today, all psychologists agree that both types of factors are important in the acquisition of knowledge.
Modern psychology can also be traced to the study of physiology (a branch of biology that studies living organisms and their parts) and medicine. In the 19th century, physiologists began studying the human brain and nervous system, paying particular attention to the topic of sensation. For example, in the 1850s and 1860s German scientist Hermann von Helmholtz studied sensory receptors in the eye and ear, investigating topics such as the speed of neural impulses, colour vision, hearing, and space perception. Another important German scientist, Gustav Fechner, founded psychophysics, the study of the relationship between physical stimuli and our subjective sensations of those stimuli. Building on the work of his compatriot Ernst Weber, Fechner developed a technique for measuring people’s subjective sensations of various physical stimuli. He sought to determine the minimum intensity level of a stimulus that is needed to produce a sensation.
English naturalist Charles Darwin was particularly influential in the development of psychology. In 1859 Darwin published On the Origin of Species, in which he proposed that all living forms were a product of the evolutionary process of natural selection. Darwin had based his theory on plants and nonhuman animals, but he later asserted that people had evolved through similar processes, and that human anatomy and behaviour could be analyzed in the same way. Darwin’s theory of evolution invited comparisons between humans and other animals, and scientists soon began using animals in psychological research.
French neurologist Jean Martin Charcot shows colleagues a female patient with hysteria at La Salpêtrière, a Paris hospital. Charcot gained renown throughout Europe for his method of treating hysteria and other “nervous disorders” through hypnosis. Charcot’s belief that hysteria had psychological rather than physical origins influenced Austrian neurologist Sigmund Freud, who studied under Charcot.
In medicine, physicians were discovering new links between the brain and language. For example, French surgeon Paul Broca discovered that people who suffer damage to a specific part of the brain’s left hemisphere lose the ability to produce fluent speech. This area of the brain became known as Broca’s area. A German neurologist, Carl Wernicke, reported in 1874 that people with damage to a different area of the left hemisphere lose their ability to comprehend speech. This region became known as Wernicke’s area.
Other physicians focussed on the study of mental disorders. In the late 19th century, French neurologist Jean Charcot discovered that some of the patients he was treating for so-called nervous disorders could be cured through hypnosis, a psychological, not medical, form of intervention. Charcot’s work had a profound impact on Sigmund Freud, an Austrian neurologist whose theories would later revolutionize psychology.
Austrian physician Franz Anton Mesmer pioneered the induction of trance-like states to cure medical ailments. Mesmer’s work sparked interest among some of his scientific colleagues but was later dismissed as charlatanism. Today, however, Mesmer is considered a pioneer in hypnosis, which is widely believed to be helpful in managing certain medical conditions.
Psychology was predated and somewhat influenced by various pseudoscientific schools of thought, that is, theories that had no scientific foundation. In the late 18th and early 19th centuries, Viennese physician Franz Joseph Gall developed phrenology, the theory that psychological traits and abilities reside in certain parts of the brain and can be measured by the bumps and indentations in the skull. Although phrenology found popular acceptance among the lay public in western Europe and the United States, most scientists ridiculed Gall’s ideas. However, research later confirmed the more general point that certain mental activities can be traced to specific parts of the brain.
Physicians in the 18th and 19th centuries used crude devices to treat mental illness, none of which offered any real relief. The circulating swing, top left, was used to spin depressed patients at high speed. American physician Benjamin Rush devised the tranquilizing chair, top right, to calm people with mania. The crib, bottom, was widely used to restrain violent patients.
Another Viennese physician of the 18th century, Franz Anton Mesmer, believed that illness was caused by an imbalance of magnetic fluids in the body. He believed he could restore the balance by passing his hands across the patient’s body and waving a magnetic wand over the affected area. Mesmer claimed that his patients would fall into a trance and awaken from it feeling better. The medical community, however, soundly rejected the claim. Today, Mesmer’s technique, known as mesmerism, is regarded as an early forerunner of modern hypnosis.
Modern psychology is deeply rooted in the older disciplines of philosophy and physiology. But the official birth of psychology is often traced to 1879, at the University of Leipzig in Germany. There, physiologist Wilhelm Wundt established the first laboratory dedicated to the scientific study of the mind. Wundt’s laboratory soon attracted leading scientists and students from Europe and the United States. Among these were James McKeen Cattell, one of the first psychologists to study individual differences through the administration of 'mental tests'; Emil Kraepelin, a German psychiatrist who postulated a physical cause for mental illnesses and in 1883 published the first classification system for mental disorders; and Hugo Münsterberg, the first to apply psychology to industry and the law. Wundt was extraordinarily productive over the course of his career. He supervised a total of 186 doctoral dissertations, taught thousands of students, founded the first scholarly psychological journal, and published innumerable scientific studies. His goal, which he stated in the preface of a book he wrote, was 'to mark out a new domain of science'.
Unlike the philosophers who preceded him, Wundt based his study of the mind on systematic and rigorous observation. His primary method of research was introspection. This technique involved training people to concentrate and report on their conscious experiences as they reacted to visual displays and other stimuli. In his laboratory, Wundt systematically studied topics such as attention span, reaction time, vision, emotion, and time perception. By recruiting people to serve as subjects, varying the conditions of their experience, and then rigorously repeating all observations, Wundt laid the foundation for the modern psychology experiment.
In the United States, Harvard University professor William James observed the emergence of psychology with great interest. Although trained in physiology and medicine, James was fascinated by psychology and philosophy. In 1875 he offered his first course in psychology. In 1890 James published a two-volume book entitled Principles of Psychology. It immediately became the leading psychology text in the United States, and it brought James a worldwide reputation as a man of great ideas and inspiration. In 28 chapters, James wrote about the stream of consciousness, the formation of habits, individuality, the link between mind and body, emotions, the self, and other topics that inspired generations of psychologists. Today, historians consider James the founder of American psychology.
James’s students also made lasting contributions to the field. In 1883 G. Stanley Hall (who also studied with Wundt) established the first true psychology laboratory in the United States, at Johns Hopkins University, and in 1892 he founded and became the first president of the American Psychological Association. Mary Whiton Calkins created an important technique for studying memory and conducted one of the first studies of dreams. In 1905 she was elected the first female president of the American Psychological Association. Edward Lee Thorndike conducted some of the first experiments on animal learning and wrote a pioneering textbook on educational psychology.
During the first decades of psychology, two main schools of thought dominated the field: structuralism and functionalism. Structuralism was a system of psychology developed by Edward Bradford Titchener, an American psychologist who studied under Wilhelm Wundt. Structuralists believed that the task of psychology is to identify the basic elements of consciousness in much the same way that physicists break down the basic particles of matter. For example, Titchener identified four elements in the sensation of taste: sweet, sour, salty, and bitter. The main method of investigation in structuralism was introspection. The influence of structuralism in psychology faded after Titchener’s death in 1927.
In opposition to the structuralist movement, William James promoted a school of thought known as functionalism, the belief that the real task of psychology is to investigate the function, or purpose, of consciousness rather than its structure. James was strongly influenced by Darwin’s evolutionary view that the characteristics of a species must serve some adaptive purpose. Functionalism enjoyed widespread appeal in the United States. Its three main leaders were James Rowland Angell, a student of James; John Dewey, who was also one of the foremost American philosophers and educators; and Harvey A. Carr, a psychologist at the University of Chicago.
In their efforts to understand human behavioural processes, the functional psychologists developed the technique of longitudinal research, which consists of interviewing, testing, and observing one person over a long period of time. Such a system permits the psychologist to observe and record the person’s development and how he or she reacts to different circumstances.
In the late 19th century Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives. In this 1938 British Broadcasting Corporation interview, Freud recounts the early resistance to his ideas and later acceptance of his work. Freud’s speech is slurred because he was suffering from cancer of the jaw. He died the following year.
Alongside Wundt and James, a third prominent leader of the new psychology was Sigmund Freud, a Viennese neurologist of the late 19th and early 20th century. Through his clinical practice, Freud developed a very different approach to psychology. After graduating from medical school, Freud treated patients who appeared to suffer from certain ailments but had nothing physically wrong with them. These patients were not consciously faking their symptoms, and often the symptoms would disappear through hypnosis, or even just by talking. On the basis of these observations, Freud formulated a theory of personality and a form of psychotherapy known as psychoanalysis. It became one of the most influential schools of Western thought of the 20th century.
Freud introduced his new theory in The Interpretation of Dreams (1899), the first of 24 books he would write. The theory is summarized in Freud’s last book, An Outline of Psychoanalysis, published in 1940, after his death. In contrast to Wundt and James, for whom psychology was the study of conscious experience, Freud believed that people are motivated largely by unconscious forces, including strong sexual and aggressive drives. He likened the human mind to an iceberg: the small tip that floats on the water is the conscious part, and the vast region beneath the surface comprises the unconscious. Freud believed that although unconscious motives can be temporarily suppressed, they must find a suitable outlet in order for a person to maintain a healthy personality.
To probe the unconscious mind, Freud developed the psychotherapy technique of free association. In free association, the patient reclines and talks about thoughts, wishes, memories, and whatever else comes to mind. The analyst tries to interpret these verbalizations to determine their psychological significance. In particular, Freud encouraged patients to free associate about their dreams, which he believed were the “royal road to the unconscious.” According to Freud, dreams are disguised expressions of deep, hidden impulses. Thus, as patients recount the conscious manifest content of dreams, the psychoanalyst tries to unmask the underlying latent content, what the dreams really mean.
From the start of psychoanalysis, Freud attracted followers, many of whom later proposed competing theories. As a group, these neo-Freudians shared the assumption that the unconscious plays an important role in a person’s thoughts and behaviours. Most parted company with Freud, however, over his emphasis on sex as a driving force. For example, Swiss psychiatrist Carl Jung theorized that all humans inherit a collective unconscious that contains universal symbols and memories from their ancestral past. Austrian physician Alfred Adler theorized that people are primarily motivated to overcome inherent feelings of inferiority. He wrote about the effects of birth order in the family and coined the term sibling rivalry. Karen Horney, a German-born American psychiatrist, argued that humans have a basic need for love and security, and become anxious when they feel isolated and alone.
Motivated by a desire to uncover unconscious aspects of the psyche, psychoanalytic researchers devised what are known as projective tests. A projective test asks people to respond to an ambiguous stimulus such as a word, an incomplete sentence, an inkblot, or an ambiguous picture. These tests are based on the assumption that if a stimulus is vague enough to accommodate different interpretations, then people will use it to project their unconscious needs, wishes, fears, and conflicts. The most popular of these tests are the Rorschach Inkblot Test, which consists of ten inkblots, and the Thematic Apperception Test, which consists of drawings of people in ambiguous situations.
Psychoanalysis has been criticized on various grounds and is not as popular as in the past. However, Freud’s overall influence on the field has been deep and lasting, particularly his ideas about the unconscious. Today, most psychologists agree that people can be profoundly influenced by unconscious forces, and that people often have a limited awareness of why they think, feel, and behave as they do. See Psychoanalysis; Psychotherapy: Psychodynamic Therapies.
In 1885 German philosopher Hermann Ebbinghaus conducted one of the first studies on memory, using himself as a subject. He memorized lists of nonsense syllables and then tested his memory of the syllables at intervals ranging from 20 minutes to 31 days. The resulting forgetting curve showed that he remembered less than 40 percent of the items after nine hours, but that the rate of forgetting levelled off over time.
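The qualitative shape of such a curve, rapid early loss that levels off, is often modelled in later memory research (not by Ebbinghaus himself) as decay toward a plateau. The following is a minimal sketch; the floor and stability constants are assumed purely for illustration, not fitted to data.

```python
import math

def retention(hours, floor=0.2, stability=5.0):
    """Assumed two-parameter forgetting model: retention decays
    exponentially from 1.0 toward a nonzero floor. The constants are
    made up to mimic the pattern reported above, not fitted data."""
    return floor + (1 - floor) * math.exp(-hours / stability)

# Delays roughly spanning Ebbinghaus's range: 20 minutes to 31 days.
for t in [0.33, 1, 9, 24, 31 * 24]:
    print(f"after {t:7.2f} h: retention = {retention(t):.2f}")
```

With these illustrative constants the model gives about 33 percent retention at nine hours and then flattens out, matching the qualitative pattern described above.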
In addition to Wundt, James, and Freud, many other scholars helped to define the science of psychology. In 1885 German philosopher Hermann Ebbinghaus conducted a series of classic experiments on memory, using nonsense syllables to establish principles of retention and forgetting. In 1896 American psychologist Lightner Witmer opened the first psychological clinic, which initially treated children with learning disorders. He later founded the first journal and training program in a new helping profession that he named clinical psychology. In 1905 French psychologist Alfred Binet devised the first major intelligence test in order to assess the academic potential of schoolchildren in Paris. The test was later translated and revised by Stanford University psychologist Lewis Terman and is now known as the Stanford-Binet intelligence test. In 1908 American psychologist Margaret Floy Washburn (who later became the second female president of the American Psychological Association) wrote an influential book called The Animal Mind, in which she synthesized animal research to that time.
In 1912 German psychologist Max Wertheimer discovered that when two stationary lights flash in succession, people see the display as a single light moving back and forth. This illusion inspired the Gestalt psychology movement, which was based on the notion that people tend to perceive a well-organized whole or pattern that is different from the sum of isolated sensations. Other leaders of Gestalt psychology included Wertheimer’s close associates Wolfgang Köhler and Kurt Koffka. Later, German American psychologist Kurt Lewin extended Gestalt psychology to studies of motivation, personality, social psychology, and conflict resolution. German American psychologist Fritz Heider then extended this approach to the study of how people perceive themselves and others.
In the late 19th century, American psychologist Edward L. Thorndike conducted some of the first experiments on animal learning. Thorndike formulated the law of effect, which states that behaviours that are followed by pleasant consequences will be more likely to be repeated in the future.
William James had defined psychology as 'the science of mental life'. But in the early 1900s, growing numbers of psychologists voiced criticism of the approach used by scholars to explore conscious and unconscious mental processes. These critics doubted the reliability and usefulness of the method of introspection, in which subjects are asked to describe their own mental processes during various tasks. They were also critical of Freud’s emphasis on unconscious motives. In search of more-scientific methods, psychologists gradually turned away from research on invisible mental processes and began to study only behaviour that could be observed directly. This approach, known as Behaviourism, ultimately revolutionized psychology and remained the dominant school of thought for nearly 50 years.
Russian physiologist Ivan Pavlov discovered a major type of learning, classical conditioning, by accident while conducting experiments on digestion in the early 1900s. He devoted the rest of his life to discovering the underlying principles of classical conditioning.
Among the first to lay the foundation for the new Behaviourism was American psychologist Edward Lee Thorndike. In 1898 Thorndike conducted a series of experiments on animal learning. In one study, he put cats into a cage, put food just outside the cage, and timed how long it took the cats to learn how to open an escape door that led to the food. Placing the animals in the same cage again and again, Thorndike found that the cats would repeat behaviours that worked and would escape more and more quickly with successive trials. Thorndike thereafter proposed the law of effect, which states that behaviours that are followed by a positive outcome are repeated, while those followed by a negative outcome or none at all are extinguished.
In 1906 Russian physiologist Ivan Pavlov, who had won a Nobel Prize two years earlier for his studies of digestion, stumbled onto one of the most important principles of learning and behaviour. Pavlov was investigating the digestive process in dogs by putting food in their mouths and measuring the flow of saliva. He found that after repeated testing, the dogs would salivate in anticipation of the food, even before he put it in their mouths. He soon discovered that if he rang a bell just before the food was presented each time, the dogs would eventually salivate at the mere sound of the bell. Pavlov had discovered a basic form of learning called classical conditioning (also referred to as Pavlovian conditioning), in which an organism comes to associate one stimulus with another. Later research showed that this basic process can account for how people form certain preferences and fears. See Learning: Classical Conditioning.
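How such an association strengthens over repeated pairings was later given a simple formal model by Rescorla and Wagner in 1972. The sketch below illustrates that later model, not Pavlov's own analysis; the learning-rate and maximum parameters are assumed for illustration.

```python
def condition(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner rule: on each bell-food pairing, the bell's
    associative strength V moves a fraction alpha of the way toward
    the maximum strength lam that the food can support."""
    v = 0.0                          # no bell-food association at first
    for t in range(1, trials + 1):
        v += alpha * (lam - v)       # delta-V = alpha * (lam - V)
        print(f"trial {t}: V = {v:.3f}")
    return v

condition(5)    # V rises steeply at first, then levels off near lam
```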
American psychologist John B. Watson believed psychologists should study observable behaviour instead of speculating about a person’s inner thoughts and feelings. Watson’s approach, which he termed Behaviourism, dominated psychology for the first half of the 20th century.
Although Thorndike and Pavlov set the stage for Behaviourism, it was not until 1913 that a psychologist set forward a clear vision for behaviorist psychology. In that year John Watson, a well-known animal psychologist at Johns Hopkins University, published a landmark paper entitled 'Psychology as the Behaviorist Views It'. Watson’s goal was nothing less than a complete redefinition of psychology. 'Psychology as the behaviorist views it', Watson wrote, 'is a purely objective experimental branch of natural science. Its theoretical goal is the prediction and control of behaviour'. Watson narrowly defined psychology as the scientific study of behaviour. He urged his colleagues to abandon both introspection and speculative theories about the unconscious. Instead he stressed the importance of observing and quantifying behaviour. In light of Darwin’s theory of evolution, he also advocated the use of animals in psychological research, convinced that the principles of behaviour would generalize across all species.
American psychologist B. F. Skinner became famous for his pioneering research on learning and behaviour. During his 60-year career, Skinner discovered important principles of operant conditioning, a type of learning that involves reinforcement and punishment. A strict behaviorist, Skinner believed that operant conditioning could explain even the most complex of human behaviours.
Many American psychologists were quick to adopt Behaviourism, and animal laboratories were set up all over the country. Aiming to predict and control behaviour, the behaviorists’ strategy was to vary a stimulus in the environment and observe an organism's response. They saw no need to speculate about mental processes inside the head. For example, Watson argued that thinking was simply talking to oneself silently. He believed that thinking could be studied by recording the movement of certain muscles in the throat.
American psychologist B. F. Skinner designed an apparatus, now called a Skinner box, that allowed him to formulate important principles of animal learning. An animal placed inside the box is rewarded with a small bit of food each time it makes the desired response, such as pressing a lever or pecking a key. A device outside the box records the animal’s responses.
The most forceful leader of Behaviourism was B. F. Skinner, an American psychologist who began studying animal learning in the 1930s. Skinner coined the term reinforcement and invented a new research apparatus called the Skinner box for use in testing animals. Based on his experiments with rats and pigeons, Skinner identified a number of basic principles of learning. He claimed that these principles explained not only the behaviour of laboratory animals, but also accounted for how human beings learn new behaviours or change existing behaviours. He concluded that nearly all behaviour is shaped by complex patterns of reinforcement in a person’s environment, a process that he called operant conditioning (also referred to as instrumental conditioning). Skinner’s views on the causes of human behaviour made him one of the most famous and controversial psychologists of the 20th century.
Operant conditioning, pioneered by American psychologist B. F. Skinner, is the process of shaping behaviour by means of reinforcement and punishment. This illustration shows how a mouse can learn to manoeuvre through a maze. The mouse is rewarded with food when it reaches the first turn in the maze (A). Once the first behaviour becomes ingrained, the mouse is not rewarded until it makes the second turn (B). After many times through the maze, the mouse must reach the end of the maze to receive its reward (C).
Skinner and others applied his findings to modify behaviour in the workplace, the classroom, the clinic, and other settings. In World War II (1939-1945), for example, he worked for the U.S. government on a top-secret project in which he trained pigeons to guide an armed glider plane toward enemy ships. He also invented the first teaching machine, which allowed students to learn at their own pace by solving a series of problems and receiving immediate feedback. In his popular book Walden Two (1948), Skinner presented his vision of a behaviorist utopia, in which socially adaptive behaviours are maintained by rewards, or positive reinforcements. Throughout his career, Skinner held firm to his belief that psychologists should focus on the prediction and control of behaviour.
Faced with a choice between psychoanalysis and Behaviourism, many psychologists in the 1950s and 1960s sensed a void in psychology’s conception of human nature. Freud had drawn attention to the darker forces of the unconscious, and Skinner was interested only in the effects of reinforcement on observable behaviour. Humanistic psychology was born out of a desire to understand the conscious mind, free will, human dignity, and the capacity for self-reflection and growth. An alternative to psychoanalysis and Behaviourism, humanistic psychology became known as 'the third force'.
The humanistic movement was led by American psychologists Carl Rogers and Abraham Maslow. According to Rogers, all humans are born with a drive to achieve their full capacity and to behave in ways that are consistent with their true selves. Rogers, a psychotherapist, developed person-centred therapy, a nonjudgmental, nondirective approach that helped clients clarify their sense of who they are in an effort to facilitate their own healing process. At about the same time, Maslow theorized that all people are motivated to fulfill a hierarchy of needs. At the bottom of the hierarchy are basic physiological needs, such as hunger, thirst, and sleep. Further up the hierarchy are needs for safety and security, needs for belonging and love, and esteem-related needs for status and achievement. Once these needs are met, Maslow believed, people strive for self-actualization, the ultimate state of personal fulfilment. As Maslow put it, 'A musician must make music, an artist must paint, a poet must write, if he is ultimately to be at peace with himself. What a man can be, he must be'.
Swiss psychologist Jean Piaget based his early theories of intellectual development on his questioning and observation of his own children. From these and later studies, Piaget concluded that all children pass through a predictable series of cognitive stages.
From the 1920s through the 1960s, Behaviourism dominated psychology in the United States. Eventually, however, psychologists began to move away from strict Behaviourism. Many became increasingly interested in cognition, a term used to describe all the mental processes involved in acquiring, storing, and using knowledge. Such processes include perception, memory, thinking, problem solving, imagining, and language. This shift in emphasis toward cognition had such a profound influence on psychology that it has often been called the cognitive revolution. The psychological study of cognition became known as cognitive psychology.
One reason for psychologists’ renewed interest in mental processes was the invention of the computer, which provided an intriguing metaphor for the human mind. The hardware of the computer was likened to the brain, and computer programs provided a step-by-step model of how information from the environment is input, stored, and retrieved to produce a response. Based on the computer metaphor, psychologists began to formulate information-processing models of human thought and behaviour.
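A toy sketch of the metaphor (my own illustration; the function names are hypothetical, not drawn from any particular model) shows the input-store-retrieve cycle as literal program steps:

```python
memory_store = {}                       # stands in for long-term storage

def encode(stimulus):
    """Transform raw input into an internal code (here, normalized text)."""
    return stimulus.lower().strip()

def store(code):
    """Lay down a trace; repeated encounters strengthen it."""
    memory_store[code] = memory_store.get(code, 0) + 1

def retrieve(probe):
    """Produce a response from storage (here, a familiarity count)."""
    return memory_store.get(encode(probe), 0)

store(encode("Light FLASH"))
print(retrieve("light flash"))          # -> 1: the stimulus is recognized
```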
In the 1950s American linguist Noam Chomsky proposed that the human brain is especially constructed to detect and reproduce language and that the ability to form and understand language is innate to all human beings. According to Chomsky, young children learn and apply grammatical rules and vocabulary as they are exposed to them and do not require initial formal teaching.
The pioneering work of Swiss psychologist Jean Piaget also inspired psychologists to study cognition. During the 1920s, while administering intelligence tests in schools, Piaget became interested in how children think. He designed various tasks and interview questions to reveal how children of different ages reason about time, nature, numbers, causality, morality, and other concepts. Based on his many studies, Piaget theorized that from infancy to adolescence, children advance through a predictable series of cognitive stages.
The cognitive revolution also gained momentum from developments in the study of language. Behaviorist B. F. Skinner had claimed that language is acquired according to the laws of operant conditioning, in much the same way that rats learn to press a bar for food pellets. In 1959, however, American linguist Noam Chomsky charged that Skinner's account of language development was wrong. Chomsky noted that children all over the world start to speak at roughly the same age and proceed through roughly the same stages without being explicitly taught or rewarded for the effort. According to Chomsky, the human capacity for learning language is innate. He theorized that the human brain is “hardwired” for language as a product of evolution. By pointing to the primary importance of biological dispositions in the development of language, Chomsky’s theory dealt a serious blow to the behaviorist assumption that all human behaviours are formed and maintained by reinforcement.
Before psychology became established in science, it was popularly associated with extrasensory perception (ESP) and other paranormal phenomena (phenomena beyond the laws of science). Today, these topics lie outside the traditional scope of scientific psychology and fall within the domain of parapsychology. Psychologists note that thousands of studies have failed to demonstrate the existence of paranormal phenomena. See Psychical Research.
Grounded in the conviction that mind and behaviour must be studied using statistical and scientific methods, psychology has become a highly respected and socially useful discipline. Psychologists now study important and sensitive topics such as the similarities and differences between men and women, racial and ethnic diversity, sexual orientation, marriage and divorce, abortion, adoption, intelligence testing, sleep and sleep disorders, obesity and dieting, and the effects of psychoactive drugs such as methylphenidate (Ritalin) and fluoxetine (Prozac).
In the last few decades, researchers have made significant breakthroughs in understanding the brain, mental processes, and behaviour. This section of the article provides examples of contemporary research in psychology: the plasticity of the brain and nervous system, the nature of consciousness, memory distortions, competence and rationality, genetic influences on behaviour, infancy, the nature of intelligence, human motivation, prejudice and discrimination, the benefits of psychotherapy, and the psychological influences on the immune system.
Psychologists once believed that the neural circuits of the adult brain and nervous system were fully developed and no longer subject to change. Then in the 1980s and 1990s a series of provocative experiments showed that the adult brain has flexibility, or plasticity: a capacity to change as a result of usage and experience.
These experiments showed that adult rats flooded with visual stimulation formed new neural connections in the brain’s visual cortex, where visual signals are interpreted. Likewise, those trained to run an obstacle course formed new connections in the cerebellum, where balance and motor skills are coordinated. Similar results with birds, mice, and monkeys have confirmed the point: Experience can stimulate the growth of new connections and mold the brain’s neural architecture.
Once the brain reaches maturity, the number of neurons does not increase, and any neurons that are damaged are permanently disabled. But the plasticity of the brain can greatly benefit people with damage to the brain and nervous system. Organisms can compensate for loss by strengthening old neural connections and sprouting new ones. That is why people who suffer strokes are often able to recover their lost speech and motor abilities.
In 1860 German physicist Gustav Fechner theorized that if the human brain were divided into right and left halves, each side would have its own stream of consciousness. Modern medicine has actually allowed scientists to investigate this hypothesis. People who suffer from life-threatening epileptic seizures sometimes undergo a radical surgery that severs the corpus callosum, a bridge of nerve tissue that connects the right and left hemispheres of the brain. After the surgery, the two hemispheres can no longer communicate with each other.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.
Beginning in the 1960s American neurologist Roger Sperry and others tested such split-brain patients in carefully designed experiments. The researchers found that the hemispheres of these patients seemed to function independently, almost as if the subjects had two brains. In addition, they discovered that the left hemisphere was capable of speech and language, but not the right hemisphere. For example, when split-brain patients saw the image of an object flashed in their left visual field (thus sending the visual information to the right hemisphere), they were incapable of naming or describing the object. Yet they could easily point to the correct object with their left hand (which is controlled by the right hemisphere). As Sperry’s colleague Michael Gazzaniga stated, 'Each half brain seemed to work and function outside of the conscious realm of the other'.
Other psychologists interested in consciousness have examined how people are influenced without their awareness. For example, research has demonstrated that under certain conditions in the laboratory, people can be fleetingly affected by subliminal stimuli, sensory information presented so rapidly or faintly that it falls below the threshold of awareness. (Note, however, that scientists have discredited claims that people can be importantly influenced by subliminal messages in advertising, rock music, or other media.) Other evidence for influence without awareness comes from studies of people with a type of amnesia that prevents them from forming new memories. In experiments, these subjects are unable to recognize words they previously viewed in a list, but they are more likely to use those words later in an unrelated task. In fact, memory without awareness is normal, as when people come up with an idea they think is original, only later to realize that they had inadvertently borrowed it from another source.
Cognitive psychologists have often likened human memory to a computer that encodes, stores, and retrieves information. It is now clear, however, that remembering is an active process and that people construct and alter memories according to their beliefs, wishes, needs, and information received from outside sources.
Without realizing it, people sometimes create memories that are false. In one study, for example, subjects watched a slide show depicting a car accident. They saw either a 'STOP' sign or a 'YIELD' sign in the slides, but afterward they were asked a question about the accident that implied the presence of the other sign. Influenced by this suggestion, many subjects recalled the wrong traffic sign. In another study, people who heard a list of sleep-related words (bed, yawn) or music-related words (jazz, instrument) were often convinced moments later that they had also heard the words sleep or music, words that fit the category but were not on the list. In a third study, researchers asked college students to recall their high-school grades. Then the researchers checked those memories against the students’ actual transcripts. The students recalled most grades correctly, but most of the errors inflated their grades, particularly when the actual grades were low. See Memory.
When scientists distinguish between human beings and other animals, they point to our larger cerebral cortex (the outer part of the brain) and to our superior intellect, as seen in the abilities to acquire and store large amounts of information, solve problems, and communicate through the use of language.
In recent years, however, those studying human cognition have found that people are often less than rational and accurate in their performance. Some researchers have found that people are prone to forgetting, and worse, that memories of past events are often highly distorted. Others have observed that people often violate the rules of logic and probability when reasoning about real events, as when gamblers overestimate the odds of winning in games of chance. One reason for these mistakes is that we commonly rely on cognitive heuristics, mental shortcuts that allow us to make judgments that are quick but often in error. To understand how heuristics can lead to mistaken assumptions, imagine offering people a lottery ticket containing six numbers out of a pool of the numbers 1 through 40. If given a choice between the tickets 6-39-2-10-24-30 or 1-2-3-4-5-6, most people select the first ticket, because it has the appearance of randomness. Yet out of the 3,838,380 possible winning combinations, both sequences are equally likely.
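The arithmetic behind the lottery example is easy to verify (a quick check of my own, not from the source text):

```python
import math

total = math.comb(40, 6)        # ways to choose 6 numbers from 40
print(total)                    # 3838380, the figure cited above

p = 1 / total                   # chance of any one specific ticket
# The random-looking ticket and 1-2-3-4-5-6 are equally likely:
print(f"P(either ticket) = {p:.9f}")
```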
One of the oldest debates in psychology, and in philosophy, concerns whether individual human traits and abilities are predetermined from birth or due to one’s upbringing and experiences. This debate is often termed the nature-nurture debate. A strict genetic (nature) position states that people are predisposed to become sociable, smart, cheerful, or depressed according to their genetic blueprint. In contrast, a strict environmental (nurture) position says that people are shaped by parents, peers, cultural institutions, and life experiences.
Research shows that the more genetically related a person is to someone with schizophrenia, the greater the risk that person has of developing the illness. For example, children of one parent with schizophrenia have a 13 percent chance of developing the illness, whereas children of two parents with schizophrenia have a 46 percent chance of developing the disorder.
Researchers can estimate the role of genetic factors in two ways: (1) twin studies and (2) adoption studies. Twin studies compare identical twins with fraternal twins of the same sex. If identical twins (who share all the same genes) are more similar to each other on a given trait than are same-sex fraternal twins (who share only about half of the same genes), then genetic factors are assumed to influence the trait. Other studies compare identical twins who are raised together with identical twins who are separated at birth and raised in different families. If the twins raised together are more similar to each other than the twins raised apart, childhood experiences are presumed to influence the trait. Sometimes researchers conduct adoption studies, in which they compare adopted children to their biological and adoptive parents. If these children display traits that resemble those of their biological relatives more than their adoptive relatives, genetic factors are assumed to play a role in the trait.
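One standard way of turning such twin comparisons into numbers, not described in the text above but offered here as an illustration, is Falconer's formula, which splits trait variation into genetic and environmental shares using the two twin correlations; the correlations below are hypothetical.

```python
def falconer(r_mz, r_dz):
    """Estimate heritability (h2), shared environment (c2), and
    non-shared environment (e2) from identical-twin (MZ) and
    same-sex fraternal-twin (DZ) trait correlations."""
    h2 = 2 * (r_mz - r_dz)   # MZ twins share ~100% of genes, DZ ~50%
    c2 = r_mz - h2           # family environment common to both twins
    e2 = 1 - r_mz            # experiences unique to each twin
    return h2, c2, e2

print(falconer(r_mz=0.80, r_dz=0.50))   # hypothetical values; roughly (0.6, 0.2, 0.2)
```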
In recent years, several twin and adoption studies have shown that genetic factors play a role in the development of intellectual abilities, temperament and personality, vocational interests, and various psychological disorders. Interestingly, however, this same research indicates that at least 50 percent of the variation in these characteristics within the population is attributable to factors in the environment. Today, most researchers agree that psychological characteristics spring from a combination of the forces of nature and nurture.
Helpless to survive on their own, newborn babies nevertheless possess a remarkable range of skills that aid in their survival. Newborns can see, hear, taste, smell, and feel pain; vision is the least developed sense at birth but improves rapidly in the first months. Crying communicates their need for food, comfort, or stimulation. Newborns also have reflexes for sucking, swallowing, grasping, and turning their head in search of their mother’s nipple.
In 1890 William James described the newborn’s experience as 'one great blooming, buzzing confusion'. However, with the aid of sophisticated research methods, psychologists have discovered that infants are smarter than was previously known.
A period of dramatic growth, infancy lasts from birth to around 18 months of age. Researchers have found that infants are born with certain abilities designed to aid their survival. For example, newborns show a distinct preference for human faces over other visual stimuli.
To learn about the perceptual world of infants, researchers measure infants’ head movements, eye movements, facial expressions, brain waves, heart rate, and respiration. Using these indicators, psychologists have found that shortly after birth, infants show a distinct preference for the human face over other visual stimuli. Also suggesting that newborns are tuned in to the face as a social object is the fact that within 72 hours of birth, they can mimic adults who purse the lips or stick out the tongue, a rudimentary form of imitation. Newborns can distinguish between their mother’s voice and that of another woman. And at two weeks old, nursing infants are more attracted to the body odour of their mother and other breast-feeding females than to that of other women. Taken together, these findings show that infants are equipped at birth with certain senses and reflexes designed to aid their survival.
In 1905 French psychologist Alfred Binet and colleague Théodore Simon devised one of the first tests of general intelligence. The test sought to identify French children likely to have difficulty in school so that they could receive special education. An American version of Binet’s test, the Stanford-Binet Intelligence Scale, is still used today.
In 1905 French psychologist Alfred Binet devised the first major intelligence test for the purpose of identifying slow learners in school. In doing so, Binet assumed that intelligence could be measured as a general intellectual capacity and summarized in a numerical score, or intelligence quotient (IQ). Testing has consistently revealed that although each of us is more skilled in some areas than in others, a general intelligence underlies our more specific abilities.
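The early Stanford-Binet expressed this score as a ratio of mental age to chronological age (the historical 'ratio IQ', given here for illustration):

```latex
\mathrm{IQ} \;=\; \frac{\text{mental age}}{\text{chronological age}} \times 100
```

On this formula a ten-year-old who performs at the level of a typical twelve-year-old scores 120.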
Intelligence tests often play a decisive role in determining whether a person is admitted to college, graduate school, or professional school. Thousands of people take intelligence tests every year, but many psychologists and education experts question whether these tests are an accurate way of measuring who will succeed or fail in school and later in life. In this 1998 Scientific American article, psychology and education professor Robert J. Sternberg of Yale University in New Haven, Connecticut, presents evidence against conventional intelligence tests and proposes several ways to improve testing.
Today, many psychologists believe that there is more than one type of intelligence. American psychologist Howard Gardner proposed the existence of multiple intelligences, each linked to a separate system within the brain. He theorized that there are seven types of intelligence: linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, and intrapersonal. American psychologist Robert Sternberg suggested a different model of intelligence, consisting of three components: analytic ('school smarts', as measured in academic tests), creative (a capacity for insight), and practical ('street smarts', or the ability to size up and adapt to situations). See Intelligence.
Psychologists from all branches of the discipline study the topic of motivation, an inner state that moves an organism toward the fulfilment of some goal. Over the years, different theories of motivation have been proposed. Some theories state that people are motivated by the need to satisfy physiological needs, whereas others state that people seek to maintain an optimum level of bodily arousal (not too little and not too much). Still other theories focus on the ways in which people respond to external incentives such as money, grades in school, and recognition. Motivation researchers study a wide range of topics, including hunger and obesity, sexual desire, the effects of reward and punishment, and the needs for power, achievement, social acceptance, love, and self-esteem.
In 1954 American psychologist Abraham Maslow proposed that all people are motivated to fulfill a hierarchical pyramid of needs. At the bottom of Maslow’s pyramid are needs essential to survival, such as the needs for food, water, and sleep. The need for safety follows these physiological needs. According to Maslow, higher-level needs become important to us only after our more basic needs are satisfied. These higher needs include the need for love and belongingness, the need for esteem, and the need for self-actualization (in Maslow’s theory, a state in which people realize their greatest potential).
Inferential role semantics is the view that the role of a sentence in inference gives a more important key to its meaning than its ‘external’ relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. The view is also known as functional role semantics, procedural semantics, or conceptual role semantics. It bears some relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.
The paradox of analysis rests upon the assumptions that analysis is a relation between concepts, rather than between entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are one and the same concept. These assumptions are explicit in the work of the British philosopher George Edward Moore, but some of Moore’s remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expression used to express it. Moore suggests that he thinks a solution of this sort is bound to be right, but he fails to propose one, because he cannot see any way in which the analysis can be even partly about the expression.
A paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Famous families of paradoxes include the semantic paradoxes and Zeno’s paradoxes. At the beginning of the 20th century, Russell’s paradox and the other set-theoretic paradoxes led to the reformulation of set theory, while the Sorites paradox has led to the investigation of the semantics of vagueness and fuzzy logic. Other paradoxes go under their own titles, such as the puzzle arising when someone says ‘p but I do not believe that p’. What is said is not contradictory, since (for many instances of p) both parts of it could be true. But the person nevertheless violates one presupposition of normal practice, namely that you assert something only if you believe it: by adding that you do not believe what you just said, you undo the natural significance of the original act of saying it.
The Bohemian philosopher and mathematician Bernard Bolzano (1781-1848) based his logical work on a strong sense of there being an ontological underpinning of science and epistemology, lying in a theory of the objective entailments making up the structure of scientific theories. His ability to challenge received wisdom and come up with startling new ideas came as a Christian philosopher rather than from any position of mathematical authority. For considerations of infinity, Bolzano’s most significant work was Paradoxien des Unendlichen, written in retirement and translated into English as Paradoxes of the Infinite. Here Bolzano considered directly the points that had concerned Galileo: the conflicting results that seem to emerge when infinity is studied. ‘Certainly most of the paradoxical statements encountered in the mathematical domain . . . are propositions which either immediately contain the idea of the infinite, or at least in some way or other depend upon that idea for their attempted proof.’
Continuing, Bolzano looks at two possible approaches to infinity. One is simply the case of setting up a sequence of numbers, such as the whole numbers, and saying that as it cannot conceivably be said to have a last term, it is inherently infinite, not finite. It is easy enough to show that the whole numbers do not have a point at which they stop. Give a name to that last number, whatever it might be, and call it ‘ultimate’. Then what is wrong with ultimate + 1? Why is that not a whole number?
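Put in modern notation (my rendering, not Bolzano's), the argument is simply that the successor operation never leaves the whole numbers, so nothing can be the last of them:

```latex
\forall n \in \mathbb{N}:\quad n + 1 \in \mathbb{N} \ \text{ and } \ n + 1 > n ,
```

so any candidate ‘ultimate’ is immediately exceeded by ultimate + 1.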
The second approach to infinity Bolzano ascribes in Paradoxes of the Infinite to ‘some philosophers’, notably the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831), who took the first conception of infinity to be merely a ‘bad infinity’: a potential infinity that reaches towards the absolute but never attains it. In Paradoxes of the Infinite, Bolzano describes this form of potential infinity as ‘a variable quantity knowing no limit to its growth (a definition adopted, even by many mathematicians) . . . always growing into the infinite and never reaching it’. As far as Hegel and his colleagues were concerned, using this approach there was no need for a real infinity beyond some unreachable absolute. Instead we deal with a variable quantity that is as big as we need it to be, or often in calculus as small as we need it to be, without ever reaching the absolute, ultimate, truly infinite.
Bolzano argues, though, that there is something else: an infinity that does not have this ‘whatever you need it to be’ elasticity. In fact, ‘a truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning: the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the adduced example it is in fact not variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any pre-assigned (finite) quantity, nevertheless to remain at all times merely finite, which holds in particular of every numerical quantity 1, 2, 3, 4, 5.’
In other words, for Bolzano there could be a true infinity that was not a variable ‘something’ that was merely bigger than anything you might specify. Such a true infinity was the result of joining two points together and extending that line in both directions without stopping. And what is more, he could separate off the demands of calculus, using a finite quantity without ever bothering with the slippery potential infinity. Here was both a deeper understanding of the nature of infinity and the basis on which he could build a ‘safe’, infinity-free calculus.
This use of the inexhaustible follows on directly from Bolzano’s criticism of the way that ∞ was used: as a variable something that would be bigger than anything you could specify, but never quite reached the true, absolute infinity. In Paradoxes of the Infinite Bolzano points out that it is possible for a quantity merely capable of becoming larger than any pre-assigned (finite) quantity nevertheless to remain at all times merely finite.
Bolzano intended this as a criticism of the way infinity was treated, but Professor Jacquette sees it instead as a way of making use of practical applications like calculus without the need for weasel words about infinity.
By replacing ∞ with ¤ we do away with one of the most common requirements for infinity, but is there anything left that maps onto the real world? Can we confine infinity to that pure mathematical other world, where anything, however unreal, can be constructed, and forget about it elsewhere? Surprisingly, this seems to have been the view, at least at one point in time, even of the German mathematician and founder of set theory Georg Cantor (1845-1918) himself, who commented in 1883 that ‘only the finite numbers are real’.
Both the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30) and the Italian mathematician Giuseppe Peano (1858-1932) distinguished the logical paradoxes from those that depend upon notions of reference or truth (semantic notions). Related to the logical foundations of number are the postulates justifying mathematical induction, which ensure that a numerical series is closed, in the sense that nothing but zero and its successors can be numbers; any series satisfying the axioms can be conceived as the sequence of the natural numbers. Candidates from set theory include the Zermelo numbers, in which the empty set is zero and the successor of each number is its unit set, and the von Neumann numbers, in which each number is the set of all smaller numbers. A similar and equally fundamental complementarity exists in the relation between zero and infinity. Although the fullness of infinity is logically antithetical to the emptiness of zero, infinity can be obtained from zero with a simple mathematical operation: the division of any number by zero is infinity, while the multiplication of any number by zero is zero.
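Both constructions can be made concrete. The sketch below (my illustration, not from the source) models pure sets as Python frozensets, builds the two kinds of numerals, and checks the structural property that tells them apart.

```python
def zermelo(n):
    """Zermelo numeral: 0 is the empty set; successor(x) = {x}."""
    num = frozenset()               # 0 = {}
    for _ in range(n):
        num = frozenset({num})      # n + 1 = {n}
    return num

def von_neumann(n):
    """Von Neumann numeral: 0 is the empty set; successor(x) = x U {x}."""
    num = frozenset()                   # 0 = {}
    for _ in range(n):
        num = num | frozenset({num})    # n + 1 = n U {n}
    return num

# A von Neumann numeral contains all smaller numbers, so it has n elements;
# a Zermelo numeral contains only its predecessor, so it has one element.
assert len(von_neumann(3)) == 3
assert len(zermelo(3)) == 1
```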
Set theory was developed by the German mathematician and logician Georg Cantor. From 1878 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline. A set, as he defined it, is a collection of definite and distinguishable objects in thought or perception conceived as a whole.
Cantor attempted to prove that the process of counting and the definition of integers could be placed on a solid mathematical foundation. His method was to repeatedly place the elements in one set into ‘one-to-one’ correspondence with those in another. In the case of integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and, therefore, that the set of all integers was equal in size to the set of all even numbers.
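A minimal sketch of the pairing (my illustration): the rule f(n) = 2n matches each integer with exactly one even number, with nothing left over on either side.

```python
# Pair the first few positive integers with the evens via f(n) = 2n.
pairing = {n: 2 * n for n in range(1, 11)}
print(pairing)    # {1: 2, 2: 4, 3: 6, ...}: a one-to-one correspondence

# No even number is used twice, so the pairing is genuinely one-to-one.
assert len(set(pairing.values())) == len(pairing)
```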
Amazingly, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. With this discovery the attempt to preserve the classical view of the logical foundations and internal consistency of mathematical systems failed, and it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.
In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a ‘redundancy theory of truth’, which he combined with radical views of the function of many kinds of propositions. Neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy.
A Ramsey sentence is generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of terms, the sentence gives the ‘topic-neutral’ structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then by the Löwenheim-Skolem theorem, the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
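Schematically, writing a theory as θ(τ₁, . . . , τₙ), with the τᵢ its theoretical terms, the Ramsey sentence replaces each term with an existentially bound variable:

```latex
\theta(\tau_1, \ldots, \tau_n)
\;\longrightarrow\;
\exists x_1 \cdots \exists x_n\, \theta(x_1, \ldots, x_n)
```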
The most celebrated of the paradoxes in the foundations of set theory was discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is an abstract object; others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not, and if it is not, then it is.
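In symbols, the contradiction is immediate: let R be the class of all classes that are not members of themselves; then

```latex
R = \{\, x : x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R .
```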
The paradox is structurally similar to easier examples, such as the paradox of the barber: a village has a barber in it who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not, but if he does not shave himself, then he does. The barber paradox is actually just a proof that there is no such barber, or in other words, that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.
The French mathematician and philosopher Jules Henri Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily accepted. The vicious circle principle, as put forward by Poincaré and Russell, holds that in order to solve the logical and semantic paradoxes it is necessary to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole; a definition is impredicative when it involves just such a failure. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. The cosmological argument, for example, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.
The investigation of questions that arise from reflection upon science and scientific inquiry is called the philosophy of science. Such questions include: What is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics, or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that in a general logic of scientific discovery a justification of scientific method might be found. However, many now take an interest in a more historical, contextual, and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the then-available methods and paradigms, as well as the social context.
In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of such subjects as biology, mathematics, and physics.
Intuition is immediate awareness, either of the truth of some proposition or of an object of apprehension, such as a concept. Awareness of this kind has an important place in philosophical accounts of the sources of our knowledge, covering both the sensible apprehension of things and the pure intuition that structures sensation into the experience of things ordered in space and time.
Natural law is the view of the status of law and morality especially associated with St Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers; it constitutes an objective set of principles that can be seen to be true by ‘natural light’ or reason, and (in religious versions of the theory) that express God’s will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God’s will. The Dutch philosopher Hugo Grotius (1583-1645), for instance, takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view, thereby facing one horn of the Euthyphro dilemma: do we care about the general good because it is good, or do we merely call good the things that we care about? The dilemma may be pressed in a strong form, in which it is claimed that various facts entail values, or in a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are binding on all human beings regardless of their desires.
Although ‘morality’ and ‘ethics’ often amount to the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of ‘moral’ considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, seeing them as frequently disguising the great complexity of practical reasoning. For Kant the moral law is a binding requirement of the categorical imperative, and the question is whether the two notions are equivalent at some deep level. Kant’s own applications of the notion are not always convincing. One cause of confusion in relating Kant’s ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something ‘unconditional’ or ‘necessary’, such as the voice of reason.
Duty is that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties arise in virtue of specific relations, such as being the child of someone, or having made someone a promise. Duty or obligation is the primary concept of ‘deontological’ approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. In another usage, perfect duties are those that are correlative with rights in others; imperfect duties are not. Problems with the concept include the way in which duty needs to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life, in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.
The most generally accepted account of the externalism/internalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective, and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.
The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.
The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would require the strong interpretation.
Perhaps the clearest example of an internalist position would be a Foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.
It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).
The most prominent recent externalist views have been versions of reliabilism, whose requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
The main objection to externalism rests on the intuitive conviction that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that an externalist account of justification must be mistaken.
Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, holds that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious issue this reply raises is whether there is an adequate rationale for this construal of reliabilism, or whether the reply is merely an ad hoc device for avoiding the counter-example.
The second way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Considering the point in application, once again, to reliabilism: the claim is that a person who possesses a reliable clairvoyant power, but who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible and therefore not epistemically justified in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to ‘bite the bullet’ and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping well short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains unclear whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, it must be objectively true that beliefs for which such a factor is available are likely to be true, though this further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be wholly external. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is still not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., being the result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification, rather than knowledge?
A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. A view that appeals to both internal and external elements is standardly classified as an externalist view.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc., not just on what is going on internally in his mind or brain.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts ‘from the inside’, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors, which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification, in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Foundationalism, in addition, is the view in epistemology that knowledge must be regarded as a structure raised upon secure, certain foundations. These are found in some combination of experience and reason, with different schools (empiricism, rationalism) emphasizing the role of one over that of the other. Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes, who discovered his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty.
Truth, along with coherence, is itself an object of such study; the study of truth in philosophy treats both the meaning of the word ‘true’ and the criteria by which we judge the truth or falsity of spoken and written statements. Philosophers have attempted to answer the question “What is truth?” for thousands of years. The four main theories they have proposed to answer this question are the correspondence, pragmatic, coherence, and deflationary theories of truth.
There are various ways of distinguishing types of foundationalist epistemology by use of the variations that have been enumerated. Plantinga has put forward an influential conception of ‘classical foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval foundationalism’, which takes foundations to comprise what is ‘self-evident’ and ‘evident to the senses’, and ‘modern foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’, a notion that in practice was taken to apply only to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called ‘strong’ or ‘extreme’ foundationalism and ‘moderate’, ‘modest’ or ‘minimal’ foundationalism, with the distinction depending on whether epistemic immunities are required of foundations, or whether it is required of a foundation only that it be immediately justified. Proponents of the weaker requirement suggest that the plausibility of the stronger one stems from a ‘level confusion’ between beliefs on different levels.
Emerging sceptical tendencies come forth in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverance of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguishes between Pyrrhonistic or excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism that accepts everyday or commonsense beliefs (not as the delivery of reason, but as due more to custom and habit), while remaining duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic, but in the method of doubt uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of ‘clear and distinct’ ideas, not far removed from the phantasia kataleptiké of the Stoics.
Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.
Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis alone of which progress is possible. This is eventually found in the celebrated ‘Cogito ergo sum’: I think, therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses invokes a ‘clear and distinct perception’ of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit’.
In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems of the nature of the causal connection between the two substances. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought, reflected later in Leibniz, was that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension there can be no empty space or ‘void’, and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
Although the structure of Descartes’s epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility, all contrive to make him the central point of reference for modern philosophy.
The self, as Descartes presents it in the first two Meditations, is aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self of ‘I-ness’ that we are tempted to imagine as a simple unique thing that makes up our essential identity. Descartes’s view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and most subsequent philosophers of mind.
Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects that we normally think affect our senses.
He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and ‘it is prudent never to trust entirely those who have deceived us even once’; he cited such instances as the straight stick that looks bent in water, and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes’s contemporaries pointed out that since such errors become known as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would ‘lead the mind away from the senses’. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g., the belief that ‘I am sitting here by the fire, wearing a winter dressing gown’.
Descartes was to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
The theory of knowledge takes as its central questions the origin of knowledge; the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.
Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who discovered his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
Still, in spite of these concerns, the problem remains of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts, a project that began with Plato’s view in the “Theaetetus” that knowledge is true belief plus some logos. Naturalized epistemology, by contrast, is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, or proof against scepticism, or even apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for ‘external’ or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.
The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too much confidence in the possibility of a purely a priori ‘first philosophy’, or viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. Such a standpoint now seems to many philosophers to be a fantasy. The more modest task actually adopted is to investigate the methods used at various historical stages of inquiry into different areas, with the aim not so much of criticizing but more of systematizing the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within a community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often come to seem more like political bids for ascendancy within a discipline.
Evolutionary epistemology is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin’s theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some point, for example, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
Chance can influence the outcome at each stage: first, in the creation of genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual’s actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, if the process could be run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean ‘Does natural selection always take the best path for the long-term welfare of a species?’, the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean ‘Does natural selection create every adaptation that would be valuable?’, the answer again is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not guarantee that it will evolve.
The three major components of the model of natural selection are variation, selection and retention. According to Darwin’s theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not; the selection is what accounts for the appearance that variations occur by design. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have; the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism. The environment provides the filter of selection, and reproduction provides the retention. Fitness is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features that are better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes overall.
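As a minimal sketch of the blind-variation-and-selective-retention model just described, the following toy simulation may help; every name and parameter in it (the numeric trait, the Gaussian mutation, the survival fraction) is an illustrative assumption of ours, not part of the theory itself:

```python
# A toy model of blind variation and selective retention.
# Variation is "blind": mutations are generated without regard to
# whether they help; direction enters only through selection.
import random

TARGET = 0.0  # an environmental "optimum" the variants are measured against

def fitness(trait: float) -> float:
    """Higher when the trait is closer to the environmental optimum."""
    return -abs(trait - TARGET)

def evolve(generations: int = 100, pop_size: int = 50) -> float:
    # Start from an arbitrary population of trait values.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: blind perturbation, drawn without regard to its effect.
        variants = [t + random.gauss(0, 1) for t in population]
        # Selection: the environment filters; fitter variants survive.
        variants.sort(key=fitness, reverse=True)
        survivors = variants[: pop_size // 2]
        # Retention: reproduction copies the surviving traits forward.
        population = survivors + survivors
    return max(population, key=fitness)

print(evolve())  # tends toward the optimum despite blind variation
```

The point of the sketch is that the loop never consults the effect a mutation will have before generating it; any appearance of direction is produced entirely by the selection and retention steps.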
The parallel between biological evolution and conceptual or ‘epistemic’ evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology holds that biological evolution is the main cause of the growth of knowledge. On this view, called the ‘evolution of cognitive mechanisms’ program by Bradie (1986) and the ‘Darwinian approach to epistemology’ by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (see also Rescher, 1990).
On the analogical version of evolutionary epistemology, called the ‘evolution of theories’ program by Bradie (1986) and the ‘Spencerian approach’ (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if Creationism is the correct theory of the origin of species.
Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that ‘if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom’, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one’s knowledge beyond what one knows, one cannot proceed solely on the basis of what is already known; but, more interestingly, it also makes the synthetic claim that when expanding one’s knowledge beyond what one knows, one proceeds by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic. If it were analytic, rival epistemologies would be self-contradictory, which they are not. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).
Two important issues animate the literature: questions about ‘realism’ (what metaphysical commitment does an evolutionary epistemologist have to make?) and about progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called ‘hypothetical realism’, a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the ‘truth-tropic’ sense of progress because a natural selection model is in essence non-teleological; as an alternative, following Kuhn (1970), a non-teleological sense of progress can be embraced in company with evolutionary epistemology.
Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978; Ruse, 1986). Stein and Lipton (1990) have argued, nonetheless, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves, for the most part, the products of blind variation and selective retention. Further, Stein and Lipton argue that heuristics are analogous to biological pre-adaptations, evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The guidedness of epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.
Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs are innate or if our non-innate beliefs are not the result of blind variation. An appeal to the hybrid is therefore not a legitimate way to rescue the analogical version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).
Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.
What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.
For example, Armstrong (1973) proposed that a belief of the form ‘this perceived object is F’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘χ’ and perceived object ‘y’, if ‘χ’ has those properties and believes that ‘y’ is F, then ‘y’ is F. (Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is F.)
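Read as a law-like universally quantified conditional, the condition might be rendered as follows (a reconstruction; the symbolization, with H for the relevant properties of the believer and B_x for x’s belief, is ours rather than Armstrong’s):

\forall x\,\forall y\,\big[\,(Hx \land B_x(Fy)) \rightarrow Fy\,\big]

where the conditional is required to hold as a matter of natural law, not merely as a material implication that happens to be true.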
Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is ‘globally’ and ‘locally’ reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981 and Cohen, 1988). That is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition; rather, it need eliminate only the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards are such that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic’s alternative does not lead to a sceptical result. For knowledge requires only the elimination of the relevant alternatives, so the relevant alternatives view preserves both strands in our thinking about knowledge. Knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.
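Schematically, and purely as a reconstruction (the set-theoretic notation is ours, not Dretske’s or Cohen’s), the view can be put like this:

K(S, p) \iff S\text{’s evidence eliminates every } q \in R(p, c), \qquad R(p, c) \subsetneq A(p)

where A(p) is the set of all alternatives to p and R(p, c) is the subset picked out as relevant by the operative standard c; the sceptic’s demon hypothesis lies in A(p) but, on this view, outside R(p, c).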
The interesting thesis that counts as a causal theory of justification (in the meaning of ‘causal theory’ intended here) is that a belief is justified in case it was produced by a type of process that is ‘globally’ reliable, that is, whose propensity to produce true beliefs, which can be defined (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
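In symbols, and only as an approximating reconstruction (the threshold θ is our device for ‘sufficiently great’, not Goldman’s):

R(T) \;=\; \frac{\#\{\text{true beliefs that } T \text{ produces or would produce}\}}{\#\{\text{beliefs that } T \text{ produces or would produce}\}}, \qquad \text{a belief is justified iff } R(T) \ge \theta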
This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else? Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.
(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when recently I believed that the telephone was ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, and other concurrent brain states on which the production of the belief depended; it does not include any events in the telephone, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. Why should the processes on which a belief depends be restricted to internal ones proximate to the belief? Goldman does not tell us. One answer that some philosophers might give is that it is because a belief’s being justified at a given time can depend only on facts directly accessible to the believer’s awareness at that time (for, if a believer ought to hold only beliefs that are justified, she must be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman’s answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.
(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your current belief that you see a book before you. One very broad type to which that process belongs would be specified by ‘coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one’s sense-organs’. A narrower type to which that same process belongs would be specified by ‘coming to a belief as to what one sees as a result of activation of the nerve endings in one’s retinas’. A still narrower type would be given by inserting in the last specification a description of a particular pattern of activation of the retina’s particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?
If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs where the object is far away and glimpsed only briefly, which are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief is produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough details about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type likely to have only that one instance, and it is therefore 100 per cent reliable. Goldman conjectures (1986) that the relevant process type is ‘the narrowest type that is causally operative’. Presumably, a feature of the process producing a belief was causally operative in producing it just in case, had some alternative feature been present instead, the process would not have led to that belief. (We need to say ‘some’ here rather than ‘any’, because, for example, when I see an oak or pine tree, the particular tree-like shape of my retinal image is clearly causally operative in producing my belief that what I see is a tree, even though there are alternative shapes, for example ‘pinish’ or ‘birchish’ ones, that would have produced the same belief.)
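The worry about narrow types can be made vivid with a toy computation; the data below are invented purely for illustration, and the string-matching stands in for whatever criterion individuates process types:

```python
# Sketch of the "generality problem": the reliability score assigned to a
# single belief depends entirely on which type its producing process is
# assigned to. The records are fabricated for illustration only.
beliefs = [
    # (process description, belief_was_true)
    ("vision, clear daylight, close range", True),
    ("vision, clear daylight, close range", True),
    ("vision, fog, long range", False),
    ("vision, fog, long range", False),
    ("vision, fog, long range, blurred-sheep-shaped retinal image", True),
]

def reliability(type_filter: str) -> float:
    """Proportion of true beliefs among those whose process fits the type."""
    matching = [truth for desc, truth in beliefs if type_filter in desc]
    return sum(matching) / len(matching)

print(reliability("vision"))         # broad type: 0.6
print(reliability("fog"))            # narrower type: ~0.33
print(reliability("blurred-sheep"))  # one-instance type: 1.0, "perfectly reliable"
```

The single-instance type comes out perfectly reliable, which is exactly the verdict the reliabilist needs some principled restriction, such as Goldman’s ‘narrowest causally operative type’, to block.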
(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon (a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory) the perceptual and memory beliefs of the other inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that if the actual world is a demon world then our perceptual and memory beliefs are all unjustified.
Goldman’s solution (1986) is that the reliability of the process types is to be gauged by their performance in ‘normal’ worlds, that is, worlds consistent with ‘our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it’. This gives the intuitively right results for the problem cases just considered, but it implies an implausible relativity of justification. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.
However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief’s being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state ‘B’ always causes one to believe that one is in brain-state ‘B’. Here the reliability of the belief-producing process is perfect, but ‘we can readily imagine circumstances in which a person goes into brain-state ‘B’ and therefore has the belief in question, though this belief is by no means justified’ (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau’s forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow, but I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau’s prediction and of its evidential force: it cannot be said that I ought not to be holding the belief. Indeed, given my justification, and given that there is nothing untoward about the weather bureau’s prediction, my belief, if true, can be counted knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.
Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.
One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles; the simplicity of a theory depends on essentially the same considerations, though it is not obvious that parsimony and simplicity come to the same thing. What makes one theory simpler or more parsimonious than another demands clarification before the justification of these methodological maxims can be addressed.
If we set this descriptive problem to one side, the major normative problem is as follows: what reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In the “Principia,” Newton laid down as his first Rule of Reasoning in Philosophy that ‘nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes’. Leibniz hypothesized that the actual world obeys simple laws because God’s taste for simplicity influenced his decision about which world to actualize.
The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the ‘certain principles of physical reality’, said Descartes, ‘not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth’. Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.
Nicolaus Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of the mathematical ideal as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and it accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.
At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by observing its epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.
Laplace is recognized for eliminating not only the theological component of classical physics but the entire metaphysical component as well. The epistemology of science requires, he said, that we proceed by inductive generalizations from observed facts to hypotheses that are ‘tested by observed conformity of the phenomena’. What was unique about Laplace’s view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace’s view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.
As this view of hypotheses and the truths of nature as quantities was extended in the nineteenth century to a mathematical description of phenomena like heat, light, electricity, and magnetism, Laplace’s assumptions about the actual character of scientific truths seemed correct. This progress suggested that if we could remove all thoughts about the ‘nature of’ or the ‘source of’ phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.
The seventeenth-century view of physics as a philosophy of nature or as natural philosophy was displaced by the view of physics as an autonomous science that was ‘the science of nature’. This view, which was premised on the doctrine of positivism, promised to subsume all of nature with a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.
Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call ‘scientific’ and makes no substantive assumption about the way the world is.
A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position. Two examples must suffice here (see Hesse, 1969, for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories; he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable; he argues for a general connection between simplicity and high probability.
Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper’s or Quine’s arguments.
Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, the two pairs have something in common. They assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connection between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem. The same view applies to the idea of prior probability (or prior plausibility). If one hypothesis is chosen over another even though they are equally supported by current observations, this must be due to an empirical background assumption.
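One way to make the role of prior plausibility explicit, offered here only as a gloss on the point (the Bayesian schema is not cited by the passage itself), is:

P(H \mid O) \;=\; \frac{P(O \mid H)\,P(H)}{P(O)}

If two hypotheses fit the observations equally well, so that P(O \mid H_1) = P(O \mid H_2), any preference for one over the other must be carried by the priors P(H_1) and P(H_2), and that is precisely where an empirical background assumption, perhaps dressed up as a preference for simplicity, would enter.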
Principles of parsimony and simplicity mediate the epistemic connection between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once this theory is brought out into the open, the principle of parsimony becomes entirely dispensable (Sober, 1988).
This ‘local’ approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.
An inference is a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a proposition or statement, and (2) the latter appears to be true if the former is or are. This psychological characterization has occurred throughout the literature under only inessential variations. It is natural to desire a better characterization of inference. Yet attempts to do so by constructing a fuller psychological explanation fail to capture the grounds on which an inference is objectively valid, a point elaborately made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.
Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization here, is by no means a resolved philosophical problem.
Traditionally, a proposition that is not a conditional is called ‘categorical’, and is classified as ‘affirmative’ or ‘negative’. Modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?) is equivalent to ‘if X is given a range of tasks, she does them better than many people’ (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
If ‘p’ is a necessary condition of ‘q’, then ‘q’ cannot be true unless ‘p’ is true. If ‘p’ is a sufficient condition of ‘q’, then the truth of ‘p’ guarantees the truth of ‘q’. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that ‘A’ causes ‘B’ may be interpreted to mean that ‘A’ is itself a sufficient condition for ‘B’, or that it is only a necessary condition for ‘B’, or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
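Schematically, in standard logical notation, the two conditions come apart cleanly:

```latex
\text{`p' is necessary for `q':}\quad q \rightarrow p
\qquad
\text{`p' is sufficient for `q':}\quad p \rightarrow q
\qquad
\text{`p' is necessary and sufficient for `q':}\quad p \leftrightarrow q
```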
In any proposition of the form ‘if p then q’, the condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is ‘material implication’, which asserts merely that either ‘not-p’ or ‘q’. Stronger conditionals include elements of ‘modality’, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning with surface differences arising from other implicatures.
It follows from the definition of ‘strict implication’ that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to ‘q follows from p’, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
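The contrast, and the so-called paradoxes of strict implication just described, can be displayed schematically in standard modal notation (with the box for necessity and the diamond for possibility):

```latex
p \rightarrow q \;\equiv\; \neg p \lor q \quad\text{(material conditional)}
\qquad
\Box(p \rightarrow q) \quad\text{(strict implication)}

\Box q \;\Rightarrow\; \Box(p \rightarrow q)\ \text{for any } p
\qquad
\neg\Diamond p \;\Rightarrow\; \Box(p \rightarrow q)\ \text{for any } q
```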
The Humean problem of induction can be set up as follows. Suppose that there is some property ‘A’ pertaining to an observational or experimental situation, and that out of a large number of observed instances of ‘A’, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property ‘B’. Suppose further that the background circumstances not specified in these descriptions have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of ‘B’s among ‘A’s or concerning causal or nomological connections between instances of ‘A’ and instances of ‘B’.
In this situation, an ‘enumerative’ or ‘instantial’ inductive inference would move directly from the premise that m/n of observed ‘A’s are ‘B’s to the conclusion that approximately m/n of all ‘A’s are ‘B’s. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of ‘A’s should be taken to include not only unobserved ‘A’s and future ‘A’s, but also possible or hypothetical ‘A’s. (An alternative conclusion would concern the probability or likelihood of the next observed ‘A’ being a ‘B’.)
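The inference pattern in question may be displayed as a schema:

```latex
\frac{\;m/n \text{ of observed } A\text{'s have been } B\text{'s}\;}
     {\;\therefore\ \text{approximately } m/n \text{ of all } A\text{'s are } B\text{'s}\;}
```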
The traditional or Humean problem of induction, often referred to simply as ‘the problem of induction’, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premisses are true, or even that their chances of truth are significantly enhanced?
Hume’s discussion of this issue deals explicitly only with cases where all observed ‘A’s have been ‘B’s, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: Inductive inferences are not rationally justified, but are instead the result of an essentially arational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma (sometimes referred to as ‘Hume’s fork’).
Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas or ‘experimental’, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that ‘the course of nature may change’, that an order observed in the past will not continue into the future. But it cannot be the latter either, since any empirical argument would appeal to the past success of such reasoning, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes that there can be no such reasoning (1748).
An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume’s argument is then that no such justification is possible: The principle cannot be justified a priori, because it is not contradictory to deny it; and it cannot be justified by appeal to its having been true in past experience without obviously begging the question.
The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume’s argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (i) pragmatic justifications or ‘vindications’ of induction, mainly developed by Hans Reichenbach (1891-1953), and (ii) ordinary language justifications of induction, whose most important proponent is P. F. Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume’s dilemma by arguing either (iii) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (iv) that an a priori justification of induction is possible after all. In more detail:
(1) Reichenbach’s view is that induction is best regarded, not as a form of inference, but rather as a ‘method’ for arriving at posits regarding, e.g., the proportion of ‘A’s that are also ‘B’s. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion, and then continually correct that initial posit as new information comes in.
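A minimal simulation may make this ‘posit and correct’ picture concrete. The simulated true proportion, the checkpoints, and the names below are hypothetical choices made solely for the illustration; the sketch is not Reichenbach’s own formalism.

```python
import random

def straight_rule_posits(observations):
    """Yield the running posit after each observation: the proportion
    of A's observed so far that have been B's."""
    b_count = 0
    for n, is_b in enumerate(observations, start=1):
        b_count += is_b
        yield b_count / n

# Hypothetical setup: a limiting proportion the method itself is never told.
true_proportion = 0.7
observations = [random.random() < true_proportion for _ in range(1000)]

for n, posit in enumerate(straight_rule_posits(observations), start=1):
    if n in (10, 100, 1000):
        print(f"after {n:4d} observations, posit = {posit:.3f}")
```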
The gambler’s bet is normally an ‘appraised posit’, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a ‘blind posit’: We do not know the chances that it will succeed, or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing whether the proportion of ‘A’s that are ‘B’s converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.
What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach’s account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of ‘A’s that are ‘B’s. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach’s claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.
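Spelled out, letting f(n) be the observed proportion after n cases, the point is just the standard definition of a limit:

```latex
\lim_{n \to \infty} f_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\exists N \;\forall n > N :\; |f_n - L| < \varepsilon
```

So if the limit L exists at all, the posits must eventually come, and stay, within any desired distance of it; if it does not exist, no method could find it.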
This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other ‘methods’ for arriving at posits for which the same sort of defence can be given: methods that yield the same result as the inductive method in the long run but differ arbitrarily in the short run. Despite various efforts, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run. Any actual application of inductive results, however, takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach’s response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach’s own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.
An approach to induction resembling Reichenbach’s, in claiming that particular inductive conclusions are posits or conjectures rather than the conclusions of cogent inferences, is offered by Popper. However, Popper’s view is even more overtly sceptical: It amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.
(2) The ordinary language response to the problem of induction has been advocated by many philosophers, but Strawson’s version is the best known. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that inductive conclusions be shown to follow deductively from the inductive premisses. Such a demand cannot, of course, be met, but only because it is illegitimate: Inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.
The problem here is to understand to what this allegedly obvious justification of induction amounts. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and embodied in ordinary usage.
Understood in this way, Strawson’s response to the problem of induction does not speak to the central issue raised by Humean scepticism: the issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves ‘reasonable’ and our evidence ‘strong’, according to our accepted community standards. Nevertheless, on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.
(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level in which induction is applied to things other than arguments; a second level in which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level in which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the argument at each level by appeal to an argument at the next level.
One problem with this sort of move is that even if circularity is avoided, the ascent to higher and higher levels will eventually fail simply for lack of evidence: A level will be reached at which there have not been enough successful inductive arguments to provide a basis for inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.
(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: It alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1912, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.
Many of the reasons for this pessimistic view depend on general epistemological theses about the possible nature of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists claim, a priori justification is confined to what is analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true, then the conclusion is likely to be true does not fit the standard conceptions of ‘analyticity’. A consideration of these matters is beyond the scope of the present discussion.
There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating in Hume but since adopted by very many others, that an a priori defence of induction would have to involve ‘turning induction into deduction’, i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori unlikely that such a premise should be true and the corresponding conclusion false.
Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of ‘A’s that are ‘B’s, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that there can therefore be no a priori reason for thinking that such a conclusion is true. Nevertheless, there is a subtle mistake here: That a chaotic world is a priori neither impossible nor unlikely in the absence of any further evidence does not show that such a world is not a priori unlikely, and a world containing such-and-such regularity a priori somewhat likely, relative to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed ‘A’s are ‘B’s: an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).
Goodman’s ‘new riddle of induction’ asks us to suppose that before some specific time ‘t’ (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term ‘grue’ to mean ‘green if examined before t and blue if examined after t’, then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.
The obvious alternative suggestion is that ‘grue’ and similar predicates do not correspond to genuine, purely qualitative properties in the way that ‘green’ and ‘blue’ do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: ‘Grue’ may be defined in terms of ‘green’ and ‘blue’, but ‘green’ can equally well be defined in terms of ‘grue’ and ‘bleen’ (blue if examined before ‘t’ and green if examined after ‘t’).
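A toy rendering in code may make this definitional symmetry vivid. The cutoff year and the little Emerald record below are hypothetical scaffolding for the example, not part of Goodman’s own presentation:

```python
from dataclasses import dataclass

T = 2000  # the cutoff time 't' of Goodman's example

@dataclass
class Emerald:
    colour: str          # 'green' or 'blue'
    examined_year: int

def is_grue(e: Emerald) -> bool:
    """Grue: green if examined before t, blue if examined after."""
    return e.colour == 'green' if e.examined_year < T else e.colour == 'blue'

def is_bleen(e: Emerald) -> bool:
    """Bleen: blue if examined before t, green if examined after."""
    return e.colour == 'blue' if e.examined_year < T else e.colour == 'green'

def is_green(e: Emerald) -> bool:
    """'Green' defined from 'grue'/'bleen', mirroring the definition of
    'grue' from 'green'/'blue': the symmetry Goodman stresses."""
    return is_grue(e) if e.examined_year < T else is_bleen(e)

# Every green emerald examined before the cutoff is also grue:
sample = [Emerald('green', year) for year in (1995, 1996, 1997)]
print(all(is_grue(e) for e in sample))   # True
print(all(is_green(e) for e in sample))  # True
```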
The ‘grue’ paradox demonstrates the importance of categorization. Something is grue if it is examined before future time ‘t’ and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not infer that all emeralds are grue, for ‘grue’ is unprojectible and cannot transmit credibility from known to unknown cases. Only projectible predicates are fit for induction. Goodman considers entrenchment the key to projectibility: Having a long history of successful projection, ‘green’ is entrenched; lacking such a history, ‘grue’ is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than those of its rivals. Past successes do not assure future ones, so induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: Of the possible projections from our evidence class, the one that fits with past practices enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors’, and its cognitive utility is greater.
For a better understanding of induction: The term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling that Fa, Fb, Fc . . ., where ‘a’, ‘b’, ‘c’ are all of some kind ‘G’, it is inferred that G’s from outside the sample, such as future G’s, will be ‘F’, or perhaps that all G’s are ‘F’. In this way, from the fact that this and other persons have deceived them, children may infer that everyone is a deceiver. Similar are inferences from an object’s past possession of a property to the same object’s future possession of that property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: All objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.
The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason, and merely reflects a habit or custom of the mind. Hume was therefore sceptical about the power of reason to justify induction; trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties for which the inference is plausible (often called projectible properties) from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.
Nevertheless, the fundamental problem remains that experience shows us only events occurring within a very restricted part of a vast spatial and temporal order, about which we then come to believe things.
Confirmation theory concerns the measure of the degree to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the twentieth century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his “Logical Foundations of Probability” (1950). Carnap’s idea was that the required measure would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared with the number in which the evidence itself holds: The probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left open by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement; it demands that we can put a measure on the ‘range’ of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
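For a finite propositional language the range measure can be computed directly, which may help fix the idea. The three atomic sentences and the particular hypothesis and evidence below are assumptions made solely for the illustration, not Carnap’s own examples:

```python
from itertools import product

ATOMS = ('p', 'q', 'r')

def states():
    """All logically possible states of affairs over the atoms."""
    for values in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def confirmation(h, e):
    """Carnap-style degree of confirmation c(h, e): the proportion of
    states satisfying the evidence e in which the hypothesis h also
    holds. Assumes e is consistent (true in at least one state)."""
    e_states = [s for s in states() if e(s)]
    return sum(1 for s in e_states if h(s)) / len(e_states)

evidence = lambda s: s['p'] or s['q']     # e: p or q
hypothesis = lambda s: s['p'] and s['q']  # h: p and q
print(confirmation(hypothesis, evidence))  # 2/6 = 0.333...
```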
Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what counts as a plausible addition to scientific knowledge at a given time.
A paradox arises when a set of apparently incontrovertible premises yields unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. What is more, and somewhat loosely, a paradox is a compelling argument from unacceptable premises to an unacceptable conclusion; more strictly speaking, a paradox is specified to be a sentence that is true if and only if it is false. A characteristic example would be: “The displayed sentence is false.”
It is easy to see that this sentence is false if true, and true if false. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premisses about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the ‘surprise examination paradox’: A teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible. ‘The test cannot be on Friday, the last day of the week, because then it would not be a surprise: We would know the day of the test on Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday by the previous reasoning. The remaining days can be eliminated in the same manner.’
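The student’s backward elimination can be mechanized in a few lines. The five named days are illustrative scaffolding only, and the code merely reproduces the (unsound) argument rather than endorsing it:

```python
DAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']

def eliminate(days):
    """Reproduce the student's reasoning: the last day still in play is
    ruled out, since a test held then could be predicted the evening
    before and so would not be a surprise; then repeat."""
    remaining = list(days)
    order = []
    while remaining:
        order.append(remaining.pop())  # rule out the current last day
    return order

print(eliminate(DAYS))  # ['Fri', 'Thu', 'Wed', 'Tue', 'Mon'] -- no day left
```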
This puzzle has over a dozen variants. The first was probably invented by the Swedish mathematician Lennart Ekbom in 1943. Although the first few commentators regarded the backward elimination argument as cogent, every writer on the subject since 1950 agrees that the argument is unsound. The controversy has been over the proper diagnosis of the flaw.
Initial analyses of the student’s argument tried to lay the blame on a simple equivocation. Their failure led to more sophisticated diagnoses. The general format has been an assimilation to better-known paradoxes. One tradition casts the surprise examination paradox as a self-referential problem, fundamentally akin to the Liar, the paradox of the Knower, or Gödel’s incompleteness theorem. Along these lines, Kaplan and Montague (1960) distilled the following ‘self-referential’ paradox, the Knower. Consider the sentence: (S) The negation of this sentence is known (to be true).
Suppose that (S) is true. Then its negation is known and hence true. However, if its negation is true, then (S) must be false. Therefore (S) is false, or what comes to the same thing, the negation of (S) is true. But this conclusion has just been demonstrated, and what is demonstrated is known; so the negation of (S) is known, which is exactly what (S) says, and (S) is true after all: a contradiction.
This paradox and its accompanying reasoning are strongly reminiscent of the Liar Paradox, which (in one version) begins by considering the sentence ‘This sentence is false’ and derives a contradiction. Versions of both arguments using axiomatic formulations of arithmetic and Gödel numbering to achieve the effect of self-reference yield important meta-theorems about what can be expressed in such systems. Roughly, these are to the effect that no predicate definable in formalized arithmetic can have the properties we demand of truth (Tarski’s Theorem) or of knowledge (Montague, 1963).
These meta-theorems still leave us with a problem: If we add to these formalized languages predicates intended to express the concepts of knowledge (or truth) and inference, as one might do if a logic of these concepts is desired, then the sentences expressing the leading principles of the Knower Paradox will be true.
Explicitly, the assumptions about knowledge and inference are:
(1) If a sentence ‘A’ is known, then ‘A’ is true.
(2) (1) is known.
(3) If ‘B’ is correctly inferred from ‘A’, and ‘A’ is known, then ‘B’ is known.
To give an absolutely explicit derivation of the paradox by applying these principles to (S), we must add (contingent) assumptions to the effect that certain inferences have been performed. Still, as we go through the argument of the Knower, these inferences are performed. Even if we can somehow restrict such principles and construct a consistent formal logic of knowledge and inference, the paradoxical argument as expressed in natural language still demands some explanation.
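In the notation of epistemic logic, with K read as ‘is known’ and corner quotes marking names of sentences, the three principles may be rendered schematically as follows:

```latex
(1)\quad K(\ulcorner A \urcorner) \rightarrow A
\qquad
(2)\quad K\bigl(\ulcorner K(\ulcorner A \urcorner) \rightarrow A \urcorner\bigr)
\qquad
(3)\quad \bigl(K(\ulcorner A \urcorner) \land (A \vdash B)\bigr) \rightarrow K(\ulcorner B \urcorner)
```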
The usual proposals for dealing with the Liar often have their analogues for the Knower, e.g., that there is something wrong with self-reference, or that knowledge (or truth) is properly a predicate of propositions and not of sentences. The replies that show that some of these proposals are inadequate are often parallel to those for the Liar paradox. In addition, one can try here what seems to be an adequate solution for the Surprise Examination Paradox, namely the observation that ‘new knowledge can drive out old knowledge’, but this does not seem to work on the Knower (Anderson, 1983).
There are a number of paradoxes of the Liar family. The simplest example is the sentence ‘This sentence is false’, which must be false if it is true, and true if it is false. One suggestion is that the sentence fails to say anything; but sentences that fail to say anything are at least not true. In that case, we consider the sentence ‘This sentence is not true’, which, if it fails to say anything, is not true, and hence is true after all (this kind of reasoning is sometimes called the strengthened Liar). Other versions of the Liar introduce pairs of sentences, as in a slogan on the front of a T-shirt saying ‘The sentence on the back of this T-shirt is false’, and one on the back saying ‘The sentence on the front of this T-shirt is true’. It is clear that each sentence individually is well formed, and were it not for the other, might have said something true. So any attempt to dismiss the paradox by saying that the sentences involved are meaningless will face problems.
Even so, the two approaches that have some hope of adequately dealing with this paradox are ‘hierarchy’ solutions and ‘truth-value gap’ solutions. According to the first, knowledge is structured into ‘levels’. It is argued that there is not one coherent notion expressed by the verb ‘knows’, but rather a whole series of notions: knows-0, knows-1, and so on (perhaps into the transfinite). Stated in terms of predicates expressing such ‘ramified’ concepts and properly restricted, (1)-(3) lead to no contradictions. The main objections to this procedure are that the meaning of these levels has not been adequately explained, and that the idea of such subscripts, even implicit, in a natural language is highly counterintuitive. The ‘truth-value gap’ solution takes sentences such as (S) to lack truth-value; they are neither true nor false, because they do not express propositions. This defeats a crucial step in the reasoning used in the derivation of the paradoxes. Kripke (1986) has developed this approach in connection with the Liar, and Asher and Kamp (1986) have worked out some details of a parallel solution to the Knower. The principal objection is that ‘strengthened’ or ‘super’ versions of the paradoxes tend to reappear when the solution itself is stated.
Since the paradoxical deduction uses only the properties (1)-(3), and since the argument is formally valid, any notion that satisfies these conditions will lead to a paradox. Thus, Grim (1988) notes that (1)-(3) may be read with ‘known’ as ‘known by an omniscient God’ and concludes that there is no coherent single notion of omniscience. Thomason (1980) observes that with some different conditions, analogous reasoning about belief can lead to paradoxical consequences.
Overall, it looks as if we should conclude that knowledge and truth are ultimately intrinsically ‘stratified’ concepts. It would seem that we must simply accept the fact that these (and similar) concepts cannot be assigned any one fixed level, finite or infinite. Still, the meaning of this idea certainly needs further clarification.
Famous families of paradoxes include the ‘semantic paradoxes’ and Zeno’s paradoxes. At the beginning of the twentieth century, Russell’s paradox and other set-theoretical paradoxes led to the complete overhaul of the foundations of set theory, while the ‘Sorites paradox’ has led to investigations of the semantics of vagueness and of fuzzy logics.
To what extent, however, can analysis be informative? This is the question that gives rise to what philosophers have traditionally called ‘the paradox of analysis’. Thus, consider the following proposition:
(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood.
(1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, I will assume (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept, and hence that
(2) To be an instance of knowledge is to be an instance of knowledge
would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what is called the first paradox of analysis. Classical writings on analysis suggest a second paradox of analysis (Moore, 1942).
(3) An analysis of the concept of being a brother is that to be a brother is to be a male sibling.
If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling, and that
(4) An analysis of the concept of being a brother is that to be a brother is to be a brother
would also have to be true and, in fact, would have to be the same proposition as (3). Yet (3) is true and (4) is false.
Both these paradoxes rest upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore, but some of Moore’s remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expressions used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expression (Moore, 1942).
One way of implementing this suggestion, as a solution to the second paradox, is to explicate (3) as:
(5) An analysis is given by saying that the verbal expression ‘x is a brother’ expresses the same concept as is expressed by the conjunction of the verbal expressions ‘x is male’ when used to express the concept of being male and ‘x is a sibling’ when used to express the concept of being a sibling (Ackerman, 1990).
An important point about (5) is as follows. Stripped of its philosophical jargon (‘analysis’, ‘concept’, ‘x is a . . .’), (5) seems to state the sort of information generally stated in a definition of the verbal expression ‘brother’ in terms of the verbal expressions ‘male’ and ‘sibling’, where this definition is designed to draw upon listeners’ antecedent understanding of the verbal expressions ‘male’ and ‘sibling’, and thus to tell listeners what the verbal expression ‘brother’ really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox makes the sort of analysis that gives rise to the paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and of saying how the meanings of these separate, already-understood verbal expressions are combined. This corresponds to Moore’s intuitive requirement that an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?
To answer this question, we must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two types of analysis concerning us here is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition, which is the condition that two expressions have the same sense if and only if they can be interchanged ‘salva veritate’ whenever used in propositional attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable salva veritate in sentences involving such contexts as ‘an analysis is given thereof’. Thus, a solution (such as the one offered above) that is aimed only at such contexts can solve the second paradox. This is clearly not so for the first paradox, however, which will apply to all pairs of propositions expressed by sentences in which expressions for pairs of analysantia and analysanda raising the first paradox are interchanged. For example, consider the following proposition:
(6) Mary knows that some cats lack tails.
It is possible for John to believe (6) without believing:
(7) Mary has justified true belief, not essentially grounded in any falsehood, that some cats lack tails.
Yet this possibility clearly does not mean that the proposition that Mary knows that some cats lack tails is partly about language.
One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge (Sosa, 1983). Another approach is to argue that in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. One development of this approach suggests that the analysans-analysandum relation has the following facets:
(i) The analysans and analysandum are necessarily coextensive, i.e., necessarily every instance of one is an instance of the other.
(ii) The analysans and analysandum are knowable a priori to be coextensive.
(iii) The analysandum is simpler than the analysans, a condition whose necessity is recognized in classical writings on analysis (e.g., Langford, 1942).
(iv) The analysans does not have the analysandum as a constituent.
Condition (iv) rules out circularity. But since many valuable quasi-analyses are partly circular, e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood, it seems best to distinguish between full analysis, for which (iv) is a necessary condition, and partial analysis, for which it is not.
These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being 6 and the concept of being the fourth root of 1296. Accordingly, a solution must turn on what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified: by the philosophical example-and-counterexample method, which in general terms goes as follows. ‘J’ investigates the analysis of K’s concept ‘Q’ (where ‘K’ can but need not be identical to ‘J’) by setting ‘K’ a series of armchair thought experiments, i.e., presenting ‘K’ with a series of simple described hypothetical test cases and asking ‘K’ questions of the form ‘If such-and-such were the case, would this count as a case of Q?’ ‘J’ then contrasts the descriptions of the cases to which ‘K’ answers affirmatively with the descriptions of the cases to which ‘K’ does not, and ‘J’ generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of K’s concept ‘Q’. Since ‘J’ need not be identical with ‘K’, there is no requirement that ‘K’ himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that is its result. This is reminiscent of Walton’s observation that one can simply recognize a bird as a swallow without realizing just what features of the bird (beak, wing configuration, etc.) form the basis of this recognition. (The philosophical significance of this way of recognizing is discussed in Walton, 1972.) ‘K’ answers the questions based solely on whether the described hypothetical cases strike him as cases of ‘Q’. ‘J’ observes certain strictures in formulating the cases and questions. He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that ‘K’ will draw upon his philosophical theories (or quasi-philosophical, rudimentary notions if he is unsophisticated philosophically) in answering the questions. If simpler and more complicated cases yield conflicting results, the conflict should, other things being equal, be resolved in favour of the simpler case. ‘J’ makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no case that is omitted is such that, if included, it would change the analysis arrived at. ‘J’ does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables ‘J’ to frame the questions in such a way as to rule out extraneous background assumptions to a degree. Thus, even if ‘K’ correctly believes that all and only P’s are R’s, the question of whether the concepts of P, R, or both enter the analysans of his concept ‘Q’ can be investigated by asking him such questions as ‘Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?’
Taking all this into account, the fifth necessary condition for this sort of analysans-analysandum relation is as follows: If ‘S’ is the analysans of ‘Q’, the proposition that necessarily all and only instances of ‘S’ are instances of ‘Q’ can be justified by generalizing from intuitions about the correct answers to questions of the sort indicated about a varied and wide-ranging series of simple described hypothetical situations.

An antinomy occurs when we are able to argue for, or demonstrate, both a proposition and its contradictory. Roughly speaking, a contradictory of a proposition ‘p’ is one that can be expressed in the form ‘not-p’, or, if ‘p’ can be expressed in the form ‘not-q’, then a contradictory is one that can be expressed in the form ‘q’. Thus, e.g., if ‘p’ is 2 + 1 = 4, then 2 + 1 ≠ 4 is the contradictory of ‘p’, for 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). If ‘p’ is 2 + 1 ≠ 4, then 2 + 1 = 4 is a contradictory of ‘p’, since 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). Mutually contradictory propositions, then, can be expressed in the forms ‘r’ and ‘not-r’. The Principle of Contradiction says that mutually contradictory propositions cannot both be true and cannot both be false. Thus, by this principle, since if ‘p’ is true, ‘not-p’ is false, no proposition ‘p’ can be at once true and false (otherwise ‘p’ and its contradictory would both be true, and both be false). In particular, for any predicate ‘P’ and object ‘x’, it cannot be that ‘P’ is at once true of ‘x’ and false of ‘x’. This is the classical formulation of the principle of contradiction. When we cannot at present fault either demonstration, we are left with the antinomy, and we hope to be able ‘to solve the antinomy’ by managing, through careful thinking and analysis, eventually to fault one or both demonstrations.
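In symbols, the principle as stated covers both halves (mutually contradictory propositions can neither both be true nor both be false), so it combines the law of non-contradiction with the law of excluded middle:

```latex
\neg(r \land \neg r) \qquad\qquad r \lor \neg r
```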
Many paradoxes are an easy source of antinomies. For example, Zeno gave some famous logical-cum-mathematical arguments that might be interpreted as demonstrating that motion is impossible. But our eyes, as it were, demonstrate motion (exhibit moving things) all the time. Where did Zeno go wrong? Where do our eyes go wrong? If we cannot readily answer at least one of these questions, then we are in an antinomy. In the “Critique of Pure Reason,” Kant gave demonstrations of both members of such pairs, e.g., that the world has a beginning in time and space, and that the world has no beginning in time or space. He argues that both demonstrations are at fault because they proceed on the basis of ‘pure reason’ unconditioned by sense experience.
At this point, we turn to the theory of experience. Experience cannot be defined in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface, rough or smooth, or might be part of a dream, or the product of a vivid sensory imagination). The essential feature of experience is that it feels a certain way, that there is something it is like to have it. We may refer to this feature of an experience as its ‘character’.
Another core feature of the sorts of experiences that concern us here is that they have representational ‘content’. (Unless otherwise indicated, ‘experience’ will be reserved for experiences with such representational content.) The most obvious cases of experiences with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in ‘Macbeth saw a dagger’. This is, however, ambiguous between the perceptual claim ‘There was a (material) dagger in the world that Macbeth perceived visually’ and ‘Macbeth had a visual experience of a dagger’ (the reading with which we are concerned, as when the experience is afforded by the imagination, or is a hallucination).
As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience ‘represents’ and the properties that it ‘possesses’. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself either pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, and it may even do so in virtue of possessing that property, as in the case of a rapidly changing (complex) experience representing something as changing rapidly. However, this is the exception and not the rule.
Which properties can be [directly] represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and apparent shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. The immediate objects of perceptual awareness are then taken to be sense-data, items such as colour patches and shapes, usually supposed distinct from the surfaces of physical objects. Qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual field when conditions of perception change; physical objects remain constant.
Others, who do not think that this wish can be satisfied and who are more impressed with the role of experience in providing animals with ecologically significant information about the world around them, claim that sense experiences represent properties, characteristics and kinds that are much richer and more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to a choice between these alternatives. Yet this suggests that character and content are not really distinct, and that there is a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations upon its possible content, e.g., a tactile experience of something touching one’s left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Moreover, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless it was normally caused by chocolate. Granting a contingent tie between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.
Character and content are none the less irreducibly different, for the following reasons. (1) There are experiences that completely lack content, e.g., certain bodily pleasures. (2) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (3) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. (4) The content of an experience with a given character may vary according to the background of the subject, e.g., an experience of a given character may come to carry the content ‘singing bird’ only after the subject has learned something about birds.
According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one ‘phenomenological’ and the other ‘semantic’.
In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us, be it an individual thing, an event, or a state of affairs.
The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (i) Simple attributions of experience, e.g., ‘Rod is experiencing a square that is not really there’, seem to be relational. (ii) We appear to refer to objects of experience and to attribute properties to them, e.g., ‘The after-image that John experienced was certainly odd’. (iii) We appear to quantify over objects of experience, e.g., ‘Macbeth saw something that his wife did not see’.
The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data: private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum would have to possess a determinable property without possessing any determinate property subordinate to it, which seems impossible. Even more disturbing is that sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock’s moving upward while it remains in the same place. The sense-datum theorist must either deny that there are such experiences or admit contradictory objects.
These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems to present us not with bare properties but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences that constitute perception.
According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences none the less appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G. E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. Sense-datum theorists hold that objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may treat objects of perception as existing objects of experience. Still, most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.
A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)
In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but is none the less answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connection with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. Thus, ‘The after-image that John experienced was colourful’ becomes ‘John’s after-image experience was an experience of colour’, and ‘Macbeth saw something that his wife did not see’ becomes ‘Macbeth had a visual experience that his wife did not have’.
Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.
This position has attractions. It does full justice to the cognitive contents of experience, and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is completely undermined by its failure to accommodate the fact, noted above, that experiences have a felt character that cannot be reduced to their content.
The adverbial theory is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound intuitions, and there is reason to believe that an effective development of the theory (which can only be hinted at here) is possible.
The relevant intuitions are (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing that the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and perhaps about the normal causes of like experiences), and (3) that there is no good reason to suppose that it involves describing an object of which the experience is an experience. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.
Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,
(1) Frank has an experience of a brown triangle
and:
(2) Frank has an experience of brown and an experience of a triangle.
(2) is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while the truth of (2) allows for the possibility of two objects of experience, one brown and the other triangular. The adverbialist may reply, however, that (1) is equivalent to:
(1*) Frank has an experience of something’s being both brown and triangular.
And (2) is equivalent to:
(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular,
and the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. Adverbialists may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there be something both brown and triangular before Frank).
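The scope point can be made explicit. In a minimal formalization, assuming a hypothetical operator O_F read as ‘Frank has an experience of . . .’, with B for brown and T for triangular, the two paraphrases come out as:

    \[
    (1^{*})\quad O_{F}\,\exists x\,(Bx \land Tx)
    \qquad\qquad
    (2^{*})\quad O_{F}\,\exists x\,Bx \;\land\; O_{F}\,\exists x\,Tx
    \]

In (1*) the conjunction falls within the scope of a single experience operator, while in (2*) each conjunct has its own operator; that is why (1*) entails (2*) but not conversely, and no object of experience need be invoked to mark the difference.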
A final position that should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind that the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt the adverbial account as a means of developing their intuitions.
Sense-data, taken literally, are whatever is given by the senses. But in response to the question of what exactly is so given, sense-data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture show which itself only indirectly represents aspects of the external world. The view has been widely rejected as implying that we really only see extremely thin coloured pictures interposed between our mind’s eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens simply responsible for producing private images, and stress the active life of the subject in the world as the determinant of experience.
Nevertheless, the argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument that must be distinguished carefully. Some of these distinctions centre on the content of the premises (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). Let us begin by distinguishing the importantly different versions of direct realism which one might take to be vulnerable to familiar facts about the possibility of perceptual illusion.
A crude statement of direct realism might go as follows. In perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something ‘else’, e.g., a sense-datum. There are, however, difficulties with this formulation of the view, for a great many philosophers who are ‘not’ direct realists would admit that it is a mistake to describe people as actually ‘perceiving’ something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. Many of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical (and philosophically controversial) concept such as ‘acquaintance’. Using such a notion, we could define direct realism this way: in ‘veridical’ experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects. The expressions ‘knowledge by acquaintance’ and ‘knowledge by description’, and the distinction they mark between knowing ‘things’ and knowing ‘about’ things, are generally associated with Bertrand Russell (1872-1970), who held that scientific philosophy required analysing many objects of belief as ‘logical constructions’ or ‘logical fictions’; the programme of analysis that this inaugurated dominated the subsequent philosophy of logical atomism and influenced other philosophers. In Russell’s “The Analysis of Mind,” the mind itself is treated, in a fashion reminiscent of Hume, as no more than the collection of neutral perceptions or sense-data that make up the flux of conscious experience and that, looked at another way, also make up the external world (neutral monism); “An Inquiry into Meaning and Truth” (1940) represents a more empirical approach to the problem. Philosophers have perennially investigated this and related distinctions using varying terminology.
This is a distinction in our ways of knowing things, highlighted by Russell and forming a central element in his philosophy after his discovery of the theory of ‘definite descriptions’. A thing is known by acquaintance when there is direct experience of it. It is known by description if it can only be described as a thing with such-and-such properties. In everyday parlance, I might know my spouse and children by acquaintance, but know someone as ‘the first person born at sea’ only by description. However, for a variety of reasons Russell shrinks the area of things that can be known by acquaintance until eventually only current experience, perhaps my own self, and certain universals or meanings qualify; anything else is known only as the thing that has such-and-such qualities.
Because one can interpret the relation of acquaintance or awareness as one that is not ‘epistemic’, i.e., not a kind of propositional knowledge, it is important to distinguish the views above, read as ontological theses, from a view one might call ‘epistemological direct realism’: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. The realism of this position lies in the claim that these objects exist independently of any mind that might perceive them, thereby ruling out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its directness rules out those views, defended under the rubric of ‘critical realism’ or ‘representational realism’, on which there is some non-physical intermediary, usually called a ‘sense-datum’ or a ‘sense impression’, that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’ rather than ‘mediately’ perceived. What relevance does illusion have for these two forms of direct realism?
The fundamental premise of the argument from illusion is the thesis that things can appear to be other than they are. Thus, for example, a straight stick immersed in water looks bent; a penny viewed from a certain perspective appears elliptical; something yellow, placed under red fluorescent light, looks red. In all of these cases, one version of the argument goes, it is implausible to maintain that what we are directly acquainted with is the real nature of the object in question. Indeed, it is hard to see how we can be said to be aware of the real physical object at all. In the above illusions the things we were aware of actually were bent, elliptical and red, respectively. But, by hypothesis, the real physical objects lacked these properties. Thus, we were not aware of the physical objects themselves.
So far, if the argument is relevant to any of the direct realisms distinguished above, it seems relevant only to the claim that in all sense experience we are directly acquainted with parts or constituents of physical objects. After all, even if in illusion we are not acquainted with physical objects, their surfaces, or their constituents, why should we conclude anything about the nature of our relations to the physical world in veridical experience?
We are supposed to discover the answer to this question by noticing the similarities between illusory experience and veridical experience and by reflecting on what makes illusion possible at all. Illusion can occur because the nature of the illusory experience is determined not just by the nature of the object perceived, but also by other conditions, both external and internal. But all of our sensations are subject to these causal influences, and it would be gratuitous and arbitrary to select, from the indefinitely many and subtly different perceptual experiences, some special ones as those that get us in touch with the ‘real’ nature of the physical world. Red fluorescent light affects the way things look, but so does sunlight. Water refracts light, but so does air. We have no unmediated access to the external world.
The philosophy of science and scientific epistemology are not the only areas where philosophers have lately urged the relevance of neuroscientific discoveries. Kathleen Akins argues that a "traditional" view of the senses underlies the variety of sophisticated "naturalistic" programs about intentionality. Current neuroscientific understanding of the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are "veridical" in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure of the relations between the external properties the receptors are sensitive to is preserved in the structure of the relations between the resulting sensory states. (3) The sensory system reconstructs the external events faithfully, without fictive additions or embellishments. Using recent neurobiological discoveries about response properties of thermal receptors in the skin as an illustration, Akins shows that sensory systems are "narcissistic" rather than "veridical": all three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, for example, our "philosophy of perception" or of "perceptual intentionality" will no longer focus on the search for correlations between states of sensory systems and "veridically detected" external properties. This traditional philosophical (and scientific) project rests upon a mistaken "veridical" view of the senses. Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a "simple paradigm case" of an intentional relation between representation and world. Once again, available scientific detail shows the naivety of some traditional philosophical projects.
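The contrast can be put in a minimal sketch, with hypothetical response rules and numbers rather than Akins' data, between a sensor satisfying condition (1) and a "narcissistic" one keyed to what the stimulus does to its own patch of tissue:

    # A toy contrast between the 'veridical' picture and a 'narcissistic'
    # sensor; the response rules and numbers are illustrative assumptions.
    def veridical_signal(stimulus_temp_c: float) -> float:
        # Traditional picture: the signal tracks the external property
        # itself, so equal temperatures always yield equal signals.
        return stimulus_temp_c

    def narcissistic_signal(stimulus_temp_c: float, skin_temp_c: float) -> float:
        # Akins-style picture: the receptor reports what the stimulus is
        # doing to this patch of skin (here, the local change), so equal
        # external temperatures can yield different signals.
        return stimulus_temp_c - skin_temp_c

    # The same 30 C water 'reads' differently to a cold and a warm hand:
    print(narcissistic_signal(30.0, 10.0))   # +20.0
    print(narcissistic_signal(30.0, 40.0))   # -10.0

On the narcissistic picture, the signal fails to correlate with the external temperature alone, so condition (1) fails, and the ordering of external temperatures need not be preserved in the ordering of signals, so condition (2) fails as well.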
Focussing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urges a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers' favorite cases for analysis and theorizing about conscious experience generally. Nevertheless, every position about pain experiences has been defended recently: eliminativist views, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experience is the place to start an analysis or theory of consciousness? Hardcastle urges two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories. But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposes a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argues that this dual system is consistent with recent neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concludes from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analyzing the general type.
Developing and defending theories of content is a central topic in current philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. We will describe here a few contributions neurophilosophers have made to this literature.
When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or "aboutness." The percept or memory is about one's being out of coffee, and it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the functional role approach and the informational approach.
Functional role semantics holds that a representation has its content in virtue of relations it bears to other representations. Its paradigm application is to concepts of truth-functional logic, like the conjunctive ‘and’ or disjunctive ‘or’. A physical event instantiates the ‘and’ function just in case it maps two true inputs onto a single true output. Thus an expression bears the relations to others that give it the semantic content of ‘and’. Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1986). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics assigns content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information. Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is, by definition, always veridical while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of ‘function’: a brain state represents X by virtue of having the function of carrying information about being caused by X (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but they are the ones to which neurophilosophers have contributed.
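As a toy illustration of the two approaches (a sketch under stated assumptions, not anyone's actual proposal), consider the following Python fragment. The function instantiates_and tests the functional-role condition for ‘and’ quoted above; represents is a bare causal-covariation test in the informational spirit, with caused_by a hypothetical record of what produced each state:

    # Functional role semantics: a two-input device realizes 'and' just in
    # case it maps two true inputs onto a true output (and only those).
    def instantiates_and(gate) -> bool:
        table = {(True, True): True, (True, False): False,
                 (False, True): False, (False, False): False}
        return all(gate(a, b) == out for (a, b), out in table.items())

    assert instantiates_and(lambda a, b: a and b)       # realizes 'and'
    assert not instantiates_and(lambda a, b: a or b)    # fails the role

    # Informational semantics: a state represents a kind just in case an
    # appropriate causal relation obtains between the state and that kind.
    def represents(state, caused_by: dict, kind: str) -> bool:
        # Toy test: the state is of a type caused by 'kind'.
        return caused_by.get(state) == kind

    assert represents("red spots", {"red spots": "measles"}, "measles")

Note how misrepresentation is invisible to the bare covariation test: whatever actually caused the state fixes what it "carries information about", which is just the criticism rehearsed above.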
Jerry Fodor and Ernest LePore raise an important challenge to Churchland's psychosemantics. Location in a state space alone seems insufficient to fix a state's representational content. Churchland never explains why a point in a three-dimensional state space represents a colour, as opposed to any other quality, object, or event that varies along three dimensions. Churchland's account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore allege that Churchland never specifies how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the ‘external’ inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colours are that opponent-processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to ‘external’ stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?
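A minimal sketch of the state-space idea at issue, with made-up axes and values: a represented quality is a point in a low-dimensional activation space, and represented similarity is distance. Nothing in the geometry itself labels the dimensions, which is exactly the gap Fodor and LePore press:

    import math

    # A point in a hypothetical 3-D activation space. The axes might be
    # opponent colour channels, or degrees of saltiness: the geometry
    # alone is silent, so the interpretation must come from elsewhere
    # (e.g., from the 'external' stimuli feeding the network).
    def state(d1: float, d2: float, d3: float) -> tuple:
        return (d1, d2, d3)

    def similarity(p: tuple, q: tuple) -> float:
        # Represented qualities count as more similar as distance shrinks.
        return -math.dist(p, q)

    red = state(0.9, 0.1, 0.5)
    orange = state(0.8, 0.2, 0.5)
    green = state(0.1, 0.9, 0.5)
    assert similarity(red, orange) > similarity(red, green)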
The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus-types for visual feature detectors include high-contrast edges, motion direction, and colours. A favorite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells' activity functioned to detect flies rested upon knowledge of the frogs' diet. Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists have recently discovered a host of neurons that are maximally responsive to a variety of stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel's (1962) Nobel Prize-winning work establishing the receptive fields of neurons in striate cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) have challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection.
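Condition (i) is easy to operationalize; condition (ii) is not, and that asymmetry shows up even in a toy sketch (the stimuli and response profile here are hypothetical placeholders):

    # Condition (i): a unit is maximally responsive to the stimulus type
    # that elicits its peak response across a stimulus set.
    def preferred_stimulus(respond, stimuli):
        return max(stimuli, key=respond)

    # A made-up response profile for a frog retinal cell.
    responses = {"small moving shape": 0.9,
                 "large shadow": 0.4,
                 "static edge": 0.2}
    print(preferred_stimulus(responses.get, responses))  # small moving shape

    # Condition (ii), that the unit has the *function* of indicating that
    # stimulus type, cannot be computed from responses alone; it depends
    # on further facts such as the frog's diet, as the text notes.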
Kathleen Akins (1996) offers a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section how Akins argues that the physiology of thermoreceptors violates three necessary conditions on ‘veridical’ representation. From this fact she draws doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological refinements. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog's feature-detection repertoire. Akins' critique casts doubt on whether details of sensory transduction will scale up to an adequately unified psychosemantics. It also raises new questions for human intentionality. How do we get from activity patterns in "narcissistic" sensory receptors, keyed not to "objective" environmental features but rather only to effects of the stimuli on the patch of tissue innervated, to the human ontology replete with enduring objects with stable configurations of properties and relations, types and their tokens (as the "fly-thought" example presented above reveals), and the rest? And how did the development of a stable and rich ontology confer survival advantages on our human ancestors?
Consciousness has reemerged as a topic in philosophy of mind and the cognitive and brain sciences over the past three decades. Instead of ignoring it, many physicalists now seek to explain it (Dennett, 1991). Here we focus exclusively on ways that neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invites us to ponder ‘what it is like to be a bat’ and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific) supplies a complete answer. Nagel's intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about bat subjectivity that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.
More recently, David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an ‘explanatory gap’ between the brain process and the properties of conscious experience. This is because no brain-process theory can answer the "hard" question: why should that particular brain process give rise to conscious experience? We can always imagine ("conceive of") a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanisms. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience (and a literature is beginning to emerge; e.g., Gazzaniga, 1995), the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. These are not just bare assertions. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly "diffusely projecting" nuclei like the intralaminar nuclei) and the cortex. Churchland argues that this thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid eye movement) sleep, and for other "core" features of conscious experience. In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . . ")
A second focus of sceptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher's favorite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically informed philosophy has addressed this question. A related area where neurophilosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers or are rather identifiable as, or dependent upon, minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent-processing neurons does. Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
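The structural point can be put in a short sketch, assuming the standard two-channel opponent coding of hue. A hue is an angle fixed by red-green and blue-yellow channel activity, so the induced similarity ordering is circular; wavelength, a single real number, lies on a line and cannot preserve that circular structure:

    import math

    def hue_angle(red_green: float, blue_yellow: float) -> float:
        # Map opponent-channel activity to a point on the hue circle.
        return math.atan2(blue_yellow, red_green) % (2 * math.pi)

    def hue_distance(h1: float, h2: float) -> float:
        # Circular distance: hues near 0 and near 2*pi come out close,
        # matching the similarity orderings subjects actually produce.
        d = abs(h1 - h2) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    # Two hues on opposite sides of the 0/2*pi 'seam' are near neighbours:
    print(hue_distance(hue_angle(1.0, -0.1), hue_angle(1.0, 0.1)))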
We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He notes tensions between neurophysiological discoveries and common-sense intuitions about pain experience. He suspects that the incommensurability between scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But, foreshadowing the Churchlands' reply to Chalmers, Dennett favours scientific investigations over conceivability-based philosophical arguments.
Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts philosophical implications of a variety of neurological deficits. One deficit is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions ‘phenomenal consciousness’ (‘P-consciousness’) and ‘access consciousness’ (‘A-consciousness’). The former is the ‘what it is like’-ness of experience; the latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational whereas A-consciousness is. Dennett and Michael Tye are sceptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block's distinction.
Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain have begun to attract neurophilosophical attention, as has self-consciousness. One of the first issues to arise in the ‘philosophy of neuroscience’ (before there was a recognized area) was the localization of cognitive functions to specific neural regions. Although the ‘localization’ approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it reemerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by brain autopsies post mortem. Broca's initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech-production deficits. Although the anatomical coordinates Broca postulated for the ‘speech production centre’ do not correlate exactly with the damage that produces such deficits, both this area of frontal cortex and the associated deficits still bear his name (‘Broca's area’ and ‘Broca's aphasia’). Less than two decades later Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca's area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name (‘Wernicke's area’) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (‘Wernicke's aphasia’) involves deficits in language comprehension. Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of the behavioural deficits followed by post mortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of aphasia research to this day.
Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence isn't available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins' early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2, . . ., cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities c1, c2, . . ., cn). A functional-localization hypothesis has the form: brain structure S in an organism (type) O has constituent capacity ci, where ci is a function of some part of O. For example: Broca's area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
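Von Eckardt's schema lends itself to a compact representation. The following sketch (hypothetical field names; the example strings come from the text above) records a structurally adequate functional analysis and a localization hypothesis of the form ‘brain structure S in organism O has constituent capacity ci’:

    from dataclasses import dataclass

    @dataclass
    class FunctionalAnalysis:
        complex_capacity: str        # C, e.g., human speech production
        constituents: list           # c1, c2, ..., cn

    @dataclass
    class LocalizationHypothesis:
        structure: str               # S
        organism: str                # O (type)
        constituent: str             # ci, a function of some part of O

    speech = FunctionalAnalysis(
        "human speech production",
        ["formulate a speech intention",
         "select appropriate linguistic representations",
         "formulate motor commands for the appropriate sounds",
         "communicate motor commands to motor pathways"])

    broca = LocalizationHypothesis(
        "Broca's area", "humans",
        "formulate motor commands for the appropriate sounds")

    # A well-formed hypothesis pairs a structure with one constituent
    # capacity drawn from the structurally adequate analysis.
    assert broca.constituent in speech.constituents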
Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional-deficit hypothesis plus the evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional-deficit hypothesis. First, the pathological behaviour P (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis, minus the operation of the component performing ci (Broca's area), must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional-deficit hypothesis on the basis of pathological behaviour is thus an instance of argument to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.
Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments either argue against the localization of a particular type of functional capacity, or argue against generalizing from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic; but they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence making explicit the ‘logic’ of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the ‘co-evolutionary research methodology’, which remains a centerpiece of neurophilosophy to the present day.
Over the last two decades, evidence for localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI. Although these measure different biological markers of functional activity, both now have a resolution down to around one millimetre. As these techniques increase the spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.
What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationships between the levels, the ‘glue’ that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between ‘cognitivist’ psychological theories, postulating information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more like a distant dream than an operative methodology.
It is here that some neuroscientists appeal to ‘computational’ methods. If we examine the way that computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of ‘local’ lower-level physical phenomena, but only by dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, ‘program-writing’ and ‘connectionist’ artificial intelligence, and philosophy of science.
However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance-versus-time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculus correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted the direction and amplitude of hand and arm movements based on the averaged activity of 224 cells in motor cortex. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could easily multiply examples of the fruitful interaction of computational and experimental methods in neuroscience a hundredfold. Many of these extend back before ‘computational neuroscience’ was a recognized research endeavour.
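For the flavour of the Hodgkin-Huxley-Katz calculation, here is a minimal sketch of a first-order kinetic gating variable. The rate constants are illustrative stand-ins, not their fitted values; the point is only that the gating variable n relaxes exponentially while the modelled conductance, taken as g_max times n to the fourth power, rises along the S-shaped curve their data suggested:

    import math

    ALPHA, BETA = 0.9, 0.1   # hypothetical opening/closing rates (1/ms)
    G_MAX = 36.0             # assumed maximal conductance (mS/cm^2)

    def n_gate(t: float, n0: float = 0.0) -> float:
        # Closed-form solution of dn/dt = ALPHA*(1 - n) - BETA*n.
        n_inf = ALPHA / (ALPHA + BETA)   # steady-state activation
        tau = 1.0 / (ALPHA + BETA)       # relaxation time constant (ms)
        return n_inf + (n0 - n_inf) * math.exp(-t / tau)

    def g_K(t: float) -> float:
        # Modelled potassium conductance: sigmoidal rise from rest.
        return G_MAX * n_gate(t) ** 4

    for t in (0.0, 0.5, 1.0, 2.0, 4.0):
        print(f"t = {t:3.1f} ms   g_K = {g_K(t):6.2f} mS/cm^2")

Raising the exponentially relaxing n to the fourth power is what converts a simple exponential approach into the sigmoidal onset seen in the measured conductance curves.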
We've already seen one example, the vector transformation account of neural representation and computation, under active development in cognitive neuroscience. Other approaches using ‘cognitivist’ resources are also being pursued. Many of these projects draw upon ‘cognitivist’ characterizations of the phenomena to be explained. Many exploit ‘cognitivist’ experimental techniques and methodologies. Some even attempt to derive ‘cognitivist’ explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn puts it, cognitive neuroscientists employ the ‘information processing’ view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavour calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines with interested nonspecialists: not just people willing to confer with those working at related levels, but researchers trained in the methods and factual details of a variety of levels. This is a daunting requirement, but it does offer some hope for philosophers wishing to contribute to future neuroscience. Thinkers trained in both the ‘synoptic vision’ afforded by philosophy and the factual and experimental basis of genuine graduate-level science would be ideally equipped for this task. Recognition of this potential niche has been slow among graduate programs in philosophy, but there is some hope that a few programs are taking steps to fill it.
In the final analysis there will be philosophers unprepared to accept the principle that, if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual in the course of human development to acquire it, or to accept that anything like that principle can have a role to play in philosophical accounts of concepts and conceptual abilities. The most obvious basis for such a view would be a Fregean distrust of ‘psychology’ that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is or what a given conceptual ability consists in. This, it is frequently maintained, is something that can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring on concept possession and psychological questions centring on concept acquisition. Nevertheless, however strictly one adheres to this distinction, it provides no support for rejecting the principle as applied to any cognitive capacity that is psychologically real. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But this view need not be disputed by a supporter of the principle; all the supporter is committed to is the claim that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. The principle has nothing to say about the further question of whether psychological explanation has a role to play in a constitutive explanation of the concept, and hence it is not in conflict with the neo-Fregean distinction.
A full account of the structure of consciousness will need to attend to those higher, conceptual forms of consciousness that have received little attention, and to how they might emerge from more basic forms. One guiding thought is that an explanation of everything distinctive about consciousness will emerge out of an account of what it is for a subject to be capable of thinking about himself. But this falls short of a proper understanding of the complex phenomenon of consciousness, for there are no facts about linguistic mastery that will by themselves determine or explain the cognitive dynamics of individual thought processes. What a theory of consciousness needs, rather, is to chart the characteristic features individuating the various distinct conceptual forms of consciousness, in a way that provides a taxonomy of them, and to show how these forms function at the level of content. What now seems clear is that these higher forms of consciousness emerge from a rich foundation of non-conceptual representations, and that these forms of conscious thought hold the key not just to an account of how such mastery is achieved, but to a proper understanding of the complexity of self-consciousness and of consciousness as a whole, set against the contrasting state of unconsciousness.
‘True’, in its ordinary sense, means consistent with fact or reality: not false or incorrect, but truthful. It is also applied to what is sincerely felt or expressed, and to what is essential and exact in meeting governing requirements or definitive criteria, taking its value from accord with reality. To true something up is to make it balanced, level or square, so that we may think of truth as proper alignment with how things are; the word is etymologically akin to ‘trust’. Truth, then, is conformity to fact or actuality, or faithfulness to an original or standard; some theories treat it as the supreme reality, carrying the ultimate meaning and value of existence. In a compound proposition, such as a conjunction or negation, the truth-value is determined by the truth-values of the component theses.
'Reality', likewise, denotes the quality or state of being actual or true: a person, an entity, or an event that is actual; the totality of things possessing actuality, existence, or essence; that which exists objectively and in fact. In a psychoanalytic usage, the 'reality principle' names the satisfaction of instinctual needs through awareness of, and adjustment to, environmental demands. 'Realization', in turn, is the act of realizing or the condition of being realized.
'Reason', however, has several related senses: a declaration made to explain or justify an action or belief; the underlying fact or cause that provides logical support for a premise or occurrence; the capacity for logical, rational, and analytic thought; a premise, usually the minor premise, of an argument; to use the faculty of reason, as in talking or arguing logically; to determine or conclude by logical thinking; to persuade or dissuade someone with reasons. Good sense, or sound judgement, is the justifiable exercise of this faculty by which humans seek or attain knowledge or truth. Mere reason, though, is often insufficient to convince us of a claim's veracity. Intuition welcomes a given certainty as truth or fact without the use of rational processes, as when one sizes up someone's character at a glance; judgement is the capacity to assess situations or circumstances and draw sound conclusions.
'Rational', correspondingly, means governed by or in accordance with reason or sound thinking: a rational solution to a problem is one within the bounds of common sense, arrived at by a fair use of reason, especially in forming conclusions, inferences, or judgements. To weigh the evidential alternatives of a confronting argument, and to think out one's responses to it, is to exercise the intellectual faculties in which human understanding consists; and the greatest dangers to that understanding lie, as has been said, in encroachment by men of zeal, well-meaning but without understanding.
'Real' means being or occurring in fact or actuality; having verifiable existence, as in 'real objects' or 'a real illness'; true and actual, not imaginary, alleged, or ideal, as in 'real people, not ghosts'; practical, as in the 'real world' of everyday matters and concerns; genuine, not pretended or affected, as when one encounters real trouble. In philosophy, 'real' projects an objectivity that the world has despite subjectivity or conventions of thought or language. In optics, a 'real' image is one formed by light rays that actually converge in space; in law, 'real' property is fixed property such as land. All of these senses concern what has actual existence, as opposed to what is brought to us only by the efforts of our own imaginations.
An 'idea', ideally, is in one usage a concept of reason that is transcendent but non-empirical; in another, whatever potentially or actually exists in the mind as a product of mental activity. In the philosophy of Plato, an idea is an archetype of which a corresponding being in phenomenal reality is an imperfect replica; in Hegel, it is absolute truth, the conception and ultimate product of reason; in ordinary speech it can be simply a mental image of something remembered.
Imagination, conceivably, is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed enables us to confront and deal with reality by using the creative powers of the mind. Fantasy is imagination characteristically well removed from reality: all power of fantasy over reason, it has been said, is a degree of insanity. Fancy gives the products of the imagination free rein; yet one remains in command of one's fancy, while it is precisely the mark of the neurotic that his fantasy possesses him.
A 'fact' is something that exists objectively and in fact: a real occurrence or event, something known to have existed or happened, as when one must prove the facts of a case; also something believed to be true or real, to be determined by evidence. The usage in the sense 'allegation of fact', as in talk of 'the facts of the case' that may turn out wrong, occasions qualms among critics who insist that facts can only be true, but it is often useful for emphasis. 'Fact-finding' is the discovery or determination of facts or of accurate information. Related words carry related senses: 'faction', in one sense, is literature that treats real people or events as if they were fictional, or uses real people or events as essential elements in an otherwise fictional rendition (in another sense it means internal dissension, or what promotes it); 'factitious' describes what is produced artificially rather than by a natural process, and so lacks authenticity or genuineness.
A 'theory', substantively, is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or confirmed by experiment and can be used to make predictions about natural phenomena. It is also the body of explanatory statements, accepted principles, and methods of analysis in some field, as the set of theorems that make up a systematic view of a branch of mathematics, or a paradigm of a science; also a belief or principle that guides action or assists comprehension or judgement; and also, where based on limited information or knowledge, a conjecture or speculative assumption. 'Theoretical' means of, relating to, or based on theory; restricted to theory rather than practice, as a theoretical physicist is distinguished from an experimental one; or given to speculative theorizing. A 'theorem' is an idea that is demonstrably true or assumed to be so; in mathematics, a proposition that has been or is to be proved from explicit assumptions. Whether such hypothetical theorizing, rather than practical measures, supplies the characteristics by which we measure quality and value is a further question.
Looking back, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More striking still is the apparent profundity and abstruseness of those concerns, which appear at first glance far removed from the familiar debates of previous centuries, between 'realists' and 'idealists', say, or 'rationalists' and 'empiricists'.
Thus, whatever the current debate or discussion, the central issue is often one of conceptual and contentual representation: without concepts one is without ideas, and one stands mute before the underlying paradox of why there is something rather than nothing. What is it that makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding? The philosophical problem is to demystify this power, and to relate it to what we know of ourselves, of subjective matters, and of our perception of the world and its surrounding surfaces.
Contributions to this study include the theory of 'speech acts' and the investigation of communication, especially the relationships between words and 'ideas' and between words and the 'world'. What an utterance or sentence expresses is the proposition or claim it makes about the world. By extension, the content of a predicate, an expression that combines with one or more singular terms to make a sentence, is the condition the entities referred to must satisfy if the resulting sentence is to be true. Consequently, we may think of a predicate as a function from things to sentences, or even to truth-values, and likewise of other sub-sentential components in terms of what they contribute to the sentences that contain them. The nature of content is the central concern of the philosophy of language.
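A minimal formal illustration of this last point, on the standard model-theoretic reading (the symbols $W$, $a$, and $D$ are introduced here only for the example): let $W$ abbreviate the predicate '. . . is white' and let $a$ name snow. The atomic sentence $Wa$ is true just in case the object named by $a$ satisfies $W$; treated extensionally, the predicate is the function

\[ W : D \to \{\mathrm{T}, \mathrm{F}\} \]

from the domain $D$ of objects to truth-values, assigning $\mathrm{T}$ to exactly the white things.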
What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I call a 'beech', will be fixed by criteria of which I may know next to nothing. This raises the possibility of imagining two persons in different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if their surrounding situations are appropriately different: 'situation' may here include the actual objects they perceive, the chemical or physical kinds of objects in the world they inhabit, the history of their words, or the decisions of authorities on what counts as an example of a term they use. The narrow content is that part of their thought that remains identical, through the identity of the way things appear, despite these differences of surroundings. Partisans of wide (or 'broad') content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being narrow content plus context.
All in all, people are commonly characterized by their rationality, and the most evident display of our rationality is the capacity to think. Thinking is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world and its surrounding surface structures. However, the model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or of sensations, intentions, or beliefs, actually play in our social lives, in order to undermine the Cartesian picture on which they function to describe the goings-on in an inner theatre of which the subject is the lone spectator. Passages that have subsequently become known as the 'rule-following' considerations and the 'private language argument' are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.
Effectively, the hypothesis especially associated with Jerry Fodor (1935-), known for his resolute realism about the nature of mental functioning, is that mental processing occurs in a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is a development of the notion of an innate universal grammar (Noam Chomsky, 1928-): just as a computer program is a linguistically complex set of instructions whose execution explains the surface behaviour of the machine, so an inner language of thought is posited to underlie and explain the surface competence of the speaker.
As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour: it invokes the image of the learner translating, by ordinary representational powers, into an innate language whose own powers are mysteriously a biological given. An alternative is the view that everyday attributions of intentionality, belief, and meaning to other persons proceed by means of a tacit use of a theory that enables one to construct these interpretations as explanations of their doings. This view is commonly held along with 'functionalism', according to which psychological states are theoretical entities, identified by the network of their causes and effects. The 'theory-theory' has different implications, depending upon which feature of theories is stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.
The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which this theory can be couched, since the child learns the minds of others simultaneously with the meanings of terms in its native language. On an alternative view, understanding others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation 'in their shoes', or from their point of view, and by that means understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the 'Verstehen' tradition associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).
We may call any process of drawing a conclusion from a set of premises a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that 'go beyond' our premises, in the way the conclusions of logically valid arguments do not: this is the process of using evidence to reach a wider conclusion, as in induction and abduction. Pessimism about the prospects of confirmation theory denies that we can assess the results of abduction in terms of probability. An inference, by contrast, is a cognitive process in which a conclusion is drawn from a set of premises and is supposed to follow from them: it is logically valid when the conclusion is deducible from the premises in a logically defined syntactic sense, without reference to the intended interpretation of the theory. Furthermore, as we reason we use an indefinite store of traditional or commonsense presuppositions about what is likely or not; one task of an automated reasoning project is to mimic this casual use of knowledge of the ways of the world in computer programs.
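A worked instance of the deductive case may make the contrast vivid. The rule of modus ponens licenses the pattern

\[ p \rightarrow q, \; p \;\vdash\; q, \]

so that from 'If it is raining, the streets are wet' and 'It is raining', the conclusion 'The streets are wet' is validly drawn: if the premises are true, the conclusion cannot be false. Inductive and abductive reasoning, by contrast, 'go beyond' their premises in just the way described above, and so can lead from true premises to a false conclusion.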
Some 'theories' emerge as an unorganized mass of supposed truths, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, those few truths contain all the rest. In a theory so organized, the few truths from which all others are deductively implied are called 'axioms'. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made objects of mathematical investigation, so axiomatic theories, which are means of representing physical processes and mathematical structures, could themselves become objects of mathematical investigation.
A theory, in the philosophy of science, is a generalization or set of generalizations purportedly referring to unobservable entities, i.e., atoms, genes, quarks, unconscious wishes. The ideal gas law, by contrast, refers only to such observables as pressure, temperature, and volume; the molecular-kinetic theory refers to molecules and their properties. Although an older usage of 'theory' suggests a lack of adequate evidence in support ('merely a theory'), current philosophical usage does not carry that connotation. In the rationalist tradition (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few governing principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is 'caused' by them. When the principles were taken as epistemologically prior, that is, as 'axioms', they were taken to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or (inclusive 'or') such that all truths follow from them by deductive inference. Gödel (1984) showed, in the spirit of Hilbert's treatment of axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.
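Stated a little more exactly, in the form now standard (incorporating Rosser's refinement of Gödel's original result): if $T$ is a consistent, effectively axiomatized theory containing elementary number theory, then there is a sentence $G_T$ of its language such that

\[ T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T, \]

so that no effectively decidable class of axioms captures all the truths of arithmetic.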
There are two main views on the nature of theories. According to the 'received view', theories are partially interpreted axiomatic systems; according to the 'semantic view', a theory is a collection of models (Suppe, 1974). Einstein's special and general theories of relativity, for example, are on either view taken to be extremely well founded.
The notion of truth occurs with remarkable frequency in our reflections on language, thought, and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to a conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus there can be little prospect of understanding our most important faculties without a good theory of truth.
Such a theory, however, has been notoriously elusive. The ancient idea that truth is some sort of 'correspondence with reality' has still never been articulated satisfactorily: the nature of the alleged 'correspondence' and the alleged 'reality' remain objectionably enigmatic. Yet the familiar alternative suggestions, that true beliefs are those that are 'mutually coherent', or 'pragmatically useful', or 'verifiable in suitable conditions', have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all: that the syntactic form of the predicate '. . . is true' distorts its real semantic character, which is not to describe propositions but to endorse them. Still, this radical approach is also faced with difficulties, and suggests, counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology, and elsewhere that we are naturally inclined to give it. Thus truth threatens to remain one of the most enigmatic of notions: an explicit account of it can seem essential yet beyond our reach. However, recent work provides some grounds for optimism.
The belief that snow is white owes its truth to a certain feature of the external world, namely, to the fact that snow is white. Similarly, the belief that dogs bark is true because of the fact that dogs bark. This trivial observation leads to what is perhaps the most natural and popular account of truth, the 'correspondence theory', according to which a belief (statement, sentence, proposition, etc.) is true just in case there exists a fact corresponding to it (Wittgenstein, 1922; Austin, 1950). This thesis is unexceptionable; however, if it is to provide a rigorous, substantial and complete theory of truth, if it is to be more than merely a picturesque way of asserting all equivalences of the form 'the belief that p is true if and only if p', then more must be said.
In particular, it must be supplemented with accounts of what facts are, and of what it is for a belief to correspond to a fact; and these are the problems on which the correspondence theory of truth has foundered. For one thing, it is far from clear that any significant gain in understanding is achieved by reducing 'the belief that snow is white is true' to 'the fact that snow is white exists': these expressions seem equally resistant to analysis and too close in meaning for one to provide an illuminating account of the other. In addition, the general relationship that holds in particular between the belief that snow is white and the fact that snow is white, between the belief that dogs bark and the fact that dogs bark, and so on, is very hard to identify. The best attempt to date is Wittgenstein's (1922) so-called 'picture theory', on which an elementary proposition is a configuration of terms and an atomic fact is a configuration of simple objects; an atomic fact corresponds to an elementary proposition, and makes it true, when their configurations are identical and the terms in the proposition refer to the similarly-placed objects in the fact; and the truth-values of each complex proposition are entailed by the truth-values of the elementary ones. However, even if this account is correct as far as it goes, it would need to be completed with plausible theories of 'logical configuration', 'elementary proposition', 'reference' and 'entailment', none of which has been forthcoming.
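A standard illustration of the picture theory, sketched here with schematic names: the elementary proposition $aRb$ is true just in case the object named by $a$ stands in the relation expressed by $R$ to the object named by $b$; the arrangement of the names in the proposition mirrors the arrangement of the objects in the atomic fact. The truth-value of a complex proposition such as $\neg aRb$ or $aRb \vee cRd$ is then fixed truth-functionally by the truth-values of its elementary constituents.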
The central characteristic of truth, one that any adequate theory must explain, is that when a proposition satisfies its 'conditions of proof or verification', then it is regarded as true. To the extent that the property of corresponding with reality is mysterious, we will find it impossible to see why what we take to verify a proposition should indicate the possession of that property. Therefore, a tempting alternative to the correspondence theory, an alternative that eschews obscure metaphysical concepts and explains quite straightforwardly why verifiability implies truth, is simply to identify truth with verifiability (Peirce, 1932). This idea can take various forms. One version involves the further assumption that verification is 'holistic': that a belief is justified (i.e., verified) when it is part of an entire system of beliefs that is consistent and mutually supporting (Bradley, 1914, and Hempel, 1935). This is known as the 'coherence theory of truth'. Another version involves the assumption that there is, associated with each proposition, some specific procedure for finding out whether one should believe it or not. On this account, to say that a proposition is true is to say that the appropriate procedure would verify it (Dummett, 1979, and Putnam, 1981). Within mathematics this amounts to the identification of truth with provability.
The attractions of the verificationist account of truth are that it is refreshingly clear compared with the correspondence theory, and that it succeeds in connecting truth with verification. The trouble is that the bond it postulates between these notions is implausibly strong. We do indeed take verification to indicate truth, but we also recognize the possibility that a proposition may be false in spite of there being impeccable reasons to believe it, and that a proposition may be true although we are not able to discover that it is. Verifiability and truth are no doubt highly correlated, but surely not the same thing.
A third well-known account of truth is 'pragmatism' (James, 1909, and Papineau, 1987). As we have just seen, the verificationist selects a prominent property of truth and takes it to be the essence of truth. Similarly, the pragmatist focuses on another important characteristic, namely that true beliefs are a good basis for action, and takes this to be the very nature of truth. True assumptions are said to be, by definition, those that provoke actions with desirable results. Again, we have an account with a single attractive explanatory feature; but again the bond it postulates between truth and its alleged analysans, in this case utility, is implausibly close. Granted, true beliefs tend to foster success; but it happens regularly that actions based on true beliefs lead to disaster, while false assumptions, by pure chance, produce wonderful results.
One of the few uncontroversial facts about truth is that the proposition that snow is white is true if and only if snow is white, the proposition that lying is wrong is true if and only if lying is wrong, and so on. Traditional theories acknowledge this fact but regard it as insufficient and, as we have seen, inflate it with some further principle of the form 'X is true if and only if X has property P' (such as corresponding to reality, verifiability, or being suitable as a basis for action), which is supposed to specify what truth is. Some radical alternatives to the traditional theories result from denying the need for any such further specification (Ramsey, 1927; Strawson, 1950; Quine, 1990). For example, one might suppose that the basic theory of truth contains nothing more than equivalences of the form 'The proposition that p is true if and only if p' (Horwich, 1990).
Consider the utility of the truth predicate in such a setting: what is needed is a proposition 'K' with the property that from 'K' and any further premise of the form 'Einstein's claim is the proposition that p' one can infer 'p', whatever proposition p may be. Now suppose, as the deflationist says, that our understanding of the truth predicate consists in the stipulative decision to accept any instance of the schema 'The proposition that p is true if and only if p'. Then the problem is solved, for if 'K' is the proposition 'Einstein's claim is true', it will have precisely the inferential power needed: from it and 'Einstein's claim is the proposition that quantum mechanics is wrong', one can use Leibniz's law to infer 'The proposition that quantum mechanics is wrong is true', which, given the relevant axiom of the deflationary theory, allows one to derive 'Quantum mechanics is wrong'. Thus one point in favour of the deflationary theory is that it squares with a plausible story about the function of our notion of truth: its axioms explain that function without the need for any further analysis of 'what truth is'.
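The derivation just gestured at can be set out step by step; this is a sketch of the deflationist's reasoning, using the example in the text:

(1) Einstein's claim is true. (premise)
(2) Einstein's claim = the proposition that quantum mechanics is wrong. (premise)
(3) The proposition that quantum mechanics is wrong is true. (1, 2, Leibniz's law)
(4) The proposition that quantum mechanics is wrong is true if and only if quantum mechanics is wrong. (instance of the equivalence schema)
(5) Quantum mechanics is wrong. (3, 4)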
Not all variants of deflationism have this virtue. According to the redundancy/performative theory of truth, the pair of sentences 'The proposition that p is true' and plain 'p' have the same meaning and express the same statement as one another, so that it is a syntactic illusion to think that 'p is true' attributes any sort of property to a proposition (Ramsey, 1927, and Strawson, 1950). Yet in that case it becomes hard to explain why we are entitled to infer 'The proposition that quantum mechanics is wrong is true' from 'Einstein's claim is the proposition that quantum mechanics is wrong' and 'Einstein's claim is true'. For if truth is not a property, then we can no longer account for the inference by invoking the law that if 'X' is identical with 'Y' then any property of 'X' is a property of 'Y', and vice versa. Thus the redundancy/performative theory, by identifying rather than merely correlating the contents of 'The proposition that p is true' and 'p', precludes the prospect of a good explanation of one of truth's most significant and useful characteristics. So it is better to restrict our claim to the weaker, material equivalence expressed by the schema: the proposition that p is true if and only if p.
Support for deflationism depends upon showing that its axioms, the instances of the equivalence schema, unsupplemented by any further analysis, suffice to explain all the central facts about truth: for example, that the verification of a proposition indicates its truth, and that true beliefs have a practical value. The first of these facts follows trivially from the deflationary axioms: given our a priori knowledge of the equivalence of 'p' and 'The proposition that p is true', any reason to believe that p becomes an equally good reason to believe that the proposition that p is true. The second fact can also be explained in terms of the deflationary axioms, but not quite so easily. Consider, to begin with, beliefs of the form: if I perform the act A, then my desires will be fulfilled. Notice that the psychological role of such a belief is, roughly, to cause the performance of A. In other words, given that I do have the belief, then typically:
I will perform the act ‘A’
Notice also that when the belief is true then, given the deflationary axioms, the performance of A will in fact lead to the fulfilment of one's desires: if the belief is true, then if I perform A, my desires will be fulfilled. Therefore, if the belief is true, my desires will be fulfilled. So valuing the truth of beliefs of that form is quite reasonable. Moreover, such beliefs are typically derived by inference from other beliefs, and can be expected to be true if those other beliefs are true. So assigning a value to the truth of any belief that might be used in such an inference is reasonable.
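Set out schematically, the deflationary explanation of the last few paragraphs runs as follows:

(1) I believe that if I perform A, my desires will be fulfilled. (premise)
(2) I perform A. (from 1, by the psychological role of such beliefs)
(3) The belief in (1) is true. (supposition)
(4) If I perform A, my desires will be fulfilled. (3, equivalence schema)
(5) My desires will be fulfilled. (2, 4)

Nothing in the explanation requires any analysis of truth beyond the equivalence schema itself.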
To the extent that such deflationary accounts can be given of all the facts involving truth, the explanatory demands on a theory of truth will be met by the collection of all statements like 'The proposition that snow is white is true if and only if snow is white', and the sense that some deep analysis of truth is needed will be undermined.
Nonetheless, there are several strongly felt objections to deflationism. One reason for dissatisfaction is that the theory has an infinite number of axioms, and therefore cannot be completely written down. It can be described as the theory whose axioms are the propositions of the form 'p if and only if it is true that p', but it cannot be explicitly formulated. This alleged defect has led some philosophers to develop theories that show, first, how the truth of any proposition derives from the referential properties of its constituents, and, second, how the referential properties of primitive constituents are determined (Tarski, 1943, and Davidson, 1969). However, the assumption that all propositions, including belief attributions, laws of nature, and counterfactual conditionals, depend for their truth-values on what their constituents refer to remains controversial. In addition, there is no immediate prospect of a presentable, finite theory of reference, so it is far from clear that the infinite, list-like character of deflationism can be avoided.
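The Tarskian alternative mentioned here derives the truth of a sentence from the referential properties of its parts. A minimal sketch of two such compositional clauses, in the usual style (the schematic letters are for illustration only):

'Fa' is true if and only if the object denoted by 'a' satisfies 'F';
'A and B' is true if and only if 'A' is true and 'B' is true.

The deflationist's rejoinder, as the paragraph notes, is that the base clauses for denotation and satisfaction threaten to be just as infinite and list-like as the equivalence schema itself.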
Additionally, it is commonly supposed that problems about the nature of truth are intimately bound up with questions as to the accessibility and autonomy of facts in various domains: questions about whether the facts can be known, and whether they can exist independently of our capacity to discover them (Dummett, 1978, and Putnam, 1981). One might reason, for example, that if 'T is true' means nothing more than 'T will be verified', then certain forms of scepticism, specifically those that doubt the correctness of our methods of verification, will be precluded, and the facts will have been revealed as dependent on human practices. Alternatively, it might be said that if truth were an inexplicable, primitive, non-epistemic property, then the fact that T is true would be completely independent of us; moreover, we could, in that case, have no reason to assume that the propositions we believe actually have this property, so scepticism would be unavoidable. In a similar vein, it might be thought that a special, and perhaps undesirable, feature of the deflationary approach is that it deprives truth of such metaphysical or epistemological implications.
Upon closer scrutiny, however, it is far from clear that there exists any account of truth with consequences regarding the accessibility or autonomy of non-semantic matters. For although an account of truth may be expected to have such implications for facts of the form 'T is true', it cannot be assumed without further argument that the same conclusions will apply to the fact T. For it cannot be assumed that 'T' and 'T is true' are equivalent to one another, given the account of 'true' that is being employed. Of course, if truth is defined in the way the deflationist proposes, then the equivalence holds by definition. But if truth is defined by reference to some metaphysical or epistemological characteristic, then the equivalence schema is thrown into doubt, pending some demonstration that the truth predicate, in the sense assumed, will be satisfied; insofar as there are thought to be epistemological problems hanging over 'T' that do not threaten 'T is true', giving the needed demonstration will be difficult. Similarly, if 'truth' is so defined that the fact T is felt to be more, or less, independent of human practices than the fact that 'T is true', then again it is unclear that the equivalence schema will hold. It would seem, therefore, that the attempt to base epistemological or metaphysical conclusions on a theory of truth must fail, because in any such attempt the equivalence schema will be simultaneously relied on and undermined.
The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein (1889-1951), and is a leading idea of Davidson (1917-). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.
The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. The truth-condition of a statement is, most basically, the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth-condition can only be defined by repeating the very same statement: the truth-condition of 'snow is white' is that snow is white; the truth-condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth-conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
Particular problems in this territory include the indeterminacy of translation, the inscrutability of reference, language, predication, reference, rule-following, semantics, translation, and the topics falling under subordinate headings associated with 'logic'. The loss of confidence in determinate meaning ('every decoding is another encoding') is an element common both to postmodern uncertainties in the theory of criticism and to the analytic tradition that follows writers such as Quine (1908-). Still, it may be asked why we should suppose that fundamental epistemic notions are to be accounted for in behavioural terms: what grounds are there for supposing that 'S knows p' concerns a relation between some subject and some object, between nature and its mirror? The answer is that the only alternative seems to be to take knowledge of inner states as premises from which our knowledge of other things is normally inferred, and without which that knowledge would be ungrounded. However, it is not really coherent, and does not in the last analysis make sense, to suggest that human knowledge has foundations or grounds of this kind. It should be remembered that to say that truth and knowledge 'can only be judged by the standards of our own day' is not to say that knowledge is less meaningful, or more 'cut off from the world', than we had supposed. It is just to say that nothing counts as justification unless by reference to what we already accept, and that there is no way to get outside our beliefs and our language so as to find some test other than coherence. The fact is that only professional philosophers have thought it might be otherwise, since only they are haunted by the clouds of epistemological scepticism.
What Quine opposes as 'residual Platonism' is not so much the hypostasising of non-physical entities as the notion of 'correspondence' with things as the final court of appeal for evaluating present practices. Unfortunately, Quine, in a way that is incompatible with his basic insights, substitutes for this a correspondence to physical entities, and especially to the basic entities, whatever they turn out to be, of physical science. Nevertheless, when such doctrines are purified, they converge on a single claim: that no account of knowledge can depend on the assumption of some privileged relation to reality. Their work brings out why an account of knowledge can amount only to a description of human behaviour.
What, then, is to be said of these 'inner states', and of the direct reports of them that have played so important a role in traditional epistemology? For a person to feel is nothing else than for him to have an ability to make a certain type of non-inferential report; to attribute feelings to infants is to acknowledge in them latent abilities of this kind. Non-conceptual, non-linguistic 'knowledge' of what feelings or sensations are like is attributed to beings on the basis of potential membership of our community. Infants and the more attractive animals are credited with having feelings on the basis of that spontaneous sympathy we extend to anything humanoid, in contrast with the mere 'response to stimuli' attributed to photoelectric cells and to animals about which no one feels sentimental. It is not that the moral prohibitions against hurting infants and the better-looking animals are 'grounded' in their possession of feelings; the relation of dependence runs the other way round. Similarly, we could not be mistaken in supposing that a four-year-old child has knowledge but a one-year-old does not, any more than we could be mistaken in taking the word of a statute that eighteen-year-olds can marry freely but seventeen-year-olds cannot. (There is no more 'ontological ground' for the distinction it may suit us to make in the former case than in the latter.) Again, a question such as 'Are robots conscious?' calls for a decision on our part whether or not to treat robots as members of our linguistic community. All this is of a piece with the insight brought into philosophy by Hegel (1770-1831): that the individual apart from his society is just another animal.
Willard Van Orman Quine was the most influential American philosopher of the latter half of the twentieth century, spending his career at Harvard apart from a wartime period in naval intelligence, and punctuating it with extensive foreign lecturing and travel. Quine's early work was on mathematical logic, and issued in 'A System of Logistic' (1934), 'Mathematical Logic' (1940), and 'Methods of Logic' (1950); but it was with the collection of papers 'From a Logical Point of View' (1953) that his philosophical importance became widely recognized. Quine's work on the problems of convention, meaning, and synonymy was cemented by 'Word and Object' (1960), in which the indeterminacy of radical translation first takes centre-stage. In this and many subsequent writings Quine takes a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. These 'intentional idioms' resist smooth incorporation into the scientific world-view, and Quine responds with scepticism toward them: not quite endorsing 'eliminativism', but regarding them as second-rate idioms, unsuitable for describing strict and literal facts. For similar reasons he has consistently expressed suspicion of the logical and philosophical propriety of appeal to logical possibilities and possible worlds. The languages that are properly behaved, and suitable for literal and true descriptions of the world, are those of mathematics and science. The entities to which our best theories refer must be taken with full seriousness in our ontologies; although an empiricist, Quine thus supposes that the abstract objects of set theory are required by science, and therefore exist. In the theory of knowledge Quine is associated with a 'holistic' view of verification, conceiving of a body of knowledge as a web touching experience at the periphery, but with each point connected by a network of relations to other points.
Quine is also known for the view that epistemology should be naturalized, or conducted in a scientific spirit, with the object of investigation being the relationship, in human beings, between the input of experience and the output of belief. Although Quine's approaches to the major problems of philosophy have been attacked as betraying undue 'scientism' and sometimes 'behaviourism', the clarity of his vision and the scope of his writing made him the major focus of Anglo-American work of the past forty years in logic, semantics, and epistemology. As well as the works cited, his writings include 'The Ways of Paradox and Other Essays' (1966), 'Ontological Relativity and Other Essays' (1969), 'Philosophy of Logic' (1970), 'The Roots of Reference' (1974) and 'The Time of My Life: An Autobiography' (1985).
Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth, and justification, and these can be combined in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a monster in the garden?
One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing that you have a monster in the garden. Belief in turn has an influence on action: you will act differently if you believe that you are reading a page than if you believe something about a monster. Perception and action, however, underdetermine the content of belief: the same stimuli may produce various beliefs, and various beliefs may produce the same action. What gives the belief the content it has is the role it plays within a network of relations to other beliefs, some more causal than others, notably its role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs, just as I infer that belief from different things than I infer other beliefs from.
The input of perception and the output of action supplement the central role of the systematic relations a belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in: a belief has the representational content it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of belief from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief; strong coherence theories affirm that coherence is the sole determinant of the content of belief.
When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? Again, there is a distinction between weak and strong coherence theories. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory, and intuition; strong theories hold that justification is solely a matter of how a belief coheres with a background system of beliefs. There is, moreover, another distinction, cutting across the distinction between weak and strong coherence theories, between positive and negative coherence theories (Pollock, 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
A strong coherence theory of justification is a combination of a positive and a negative theory: it tells us that a belief is justified if and only if it coheres with a background system of beliefs. Coherence theories of justification and knowledge have most often been rejected as unable to deal with perceptual justification and perceptual knowledge (Audi, 1988, and Pollock, 1986), and so a perceptual example will serve as a kind of crucial test. Suppose that a person, call her Julie, works with a scientific instrument that gauges the temperature of liquids in a container. The gauge is marked in degrees; she looks at the gauge and sees that the reading is 105 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 105 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that, though her belief that she sees the shape '105' is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 105 degrees results from coherence with a background system of beliefs affirming that the shape she sees reads '105' and that the gauge measures the temperature of the liquid in the container. This weak coherence view, which combines coherence with direct perceptual evidence as the foundation of justification, is one way to account for the justification of our beliefs.
A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees the shape 105, or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line of argument appeals to the coherence theory of content: if the content of the perceptual belief results from the relations of the belief to other beliefs in a network of beliefs, then one may argue that the justification of the perceptual belief likewise results from its relations to other beliefs in that network. Another argument is that the background system supplies the reasons without which we could not take our perceptions at face value. Consider the very cautious belief that I see a shape. How can the justification of that perceptual belief result from its coherence with a background system of beliefs? What might the background system tell us that would justify the belief? Our background system contains a simple and primal theory about our relationship to the world and the surfaces we perceive. To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are to be trusted about such simple matters as whether we see a shape before us or not, and that we have acquired this competence from past experience without being subject to deception. Moreover, when Julie forms her belief about the reading, her background system tells her that her circumstances are not ones in which people are deceived about whether they see such a shape: the light is good, the numeral shapes are large and readily discernible, and so forth. These background beliefs supply her reasons for justification. Given her trustworthy sensory access to the data involved, together with those beliefs, her belief is justified and creditable.
The philosophical problems include discovering whether belief differs from other varieties of assent, such as acceptance; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals are properly said to have beliefs.
Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, the inferences must be interpreted as unconscious inferences, as information processing, based on the background system. One might object to such an account on the grounds that not all justification is inferential; more generally, the account of coherence as inference to the best explanation may, at best, be recast as the capacity to meet competing claims on the basis of a background system (BonJour, 1985; Lehrer, 1990). The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one is trustworthy and enables one to meet the objections. A belief coheres with a background system just in case the system enables one to meet the sceptical objections, and in that way justifies one in the belief. This is a standard strong coherence theory of justification (Lehrer, 1990).
Illustrating the relationship between positive and negative coherence theories in terms of the standard coherence theory is easy. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Julie, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on and that, after years of working with the gauge, Julie, who has always placed her trust in it, believes what the gauge tells her: that the liquid in the container is at 105 degrees. Her belief that the liquid is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and her background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because her belief coheres with her background system.
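To make the logical shape of the example explicit, here is a minimal sketch in Python; all names are hypothetical illustrations, not any standard formalism. A negative check rejects a belief that some background belief defeats, while a positive check demands explicit support from the background system:

```python
# Minimal sketch of negative vs. positive coherence checks.
# Beliefs are plain dicts; 'defeats'/'supports' are hypothetical fields.

def negatively_coheres(belief, background):
    """Negative theory: justification survives unless defeated."""
    return not any(b.get("defeats") == belief["id"] for b in background)

def positively_coheres(belief, background):
    """Positive theory: justification requires explicit support."""
    return any(b.get("supports") == belief["id"] for b in background)

reading = {"id": "liquid-at-105"}
red_light = {"id": "red-light-on", "defeats": "liquid-at-105"}
trust = {"id": "gauge-trustworthy", "supports": "liquid-at-105"}

print(negatively_coheres(reading, [red_light]))  # False: defeated
print(positively_coheres(reading, [trust]))      # True: supported
```

On the strong theory, Julie's belief would be justified just in case both checks pass against her full background system.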
The foregoing sketch and illustration of coherence theories of justification have a common feature: they are internalist theories of justification. Externalist theories, by contrast, are marked by the absence of any requirement that the person for whom the belief is justified have cognitive access to the relation of reliability in question. Lacking such access, such a person will usually have no reason for thinking the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
Coherence theories are internalist because they affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If justification is solely a matter of internal relations between beliefs, we are left with the possibility that the internal relations might fail to correspond with any external reality. How, one might object, can such a purely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?
The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that one's justification must be undefeated by errors in the background system of beliefs. Justification is undefeated by errors just in case any correction of such errors in the background system would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of coherence theory, is true belief that coheres with the background system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error (Lehrer, 1990). The connection between internal subjective conditions and external reality results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julie, she believes that her sensory experience and perceptual beliefs are connected in a trustworthy manner with the external reality, the temperature of the liquid in the container. This background belief is essential to the justification of her belief that the temperature of the liquid in the container is 105 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error that the coherence is sustained in corrected versions of our background system of beliefs. The correctness of the simple background theory provides the connection between the internal condition and external reality.
The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are mute until they are represented in the form of some perceptual belief. Beliefs are the engines that pull the train of justification. Nevertheless, what assurance do we have that our justification is based on true beliefs? What assurance do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifacts of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification (Rescher, 1973; Rosenberg, 1980). That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is justifiable for some person. For such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward a consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. There is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.
Coherence theories of the content and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but may believe that our capacities suffice to close the gap and yield knowledge. That view is, at any rate, a coherent one.
What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that p is knowledge just in case it has the right causal connection to the fact that p. Such a criterion can be applied only to cases where the fact that p is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.
For example, Armstrong (1973, ch. 12) proposed that a belief of the form 'This (perceived) object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject x and perceived object y, if x has those properties and believes that y is F, then y is F. (Dretske (1981) offers a similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.)
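Armstrong's condition can be set out schematically. The following is only a reconstruction of the garbled formula above, where H(x) collects the relevant properties of the believer and B_x[Fy] says that x believes the perceived object y to be F:

\[
\forall x\,\forall y\;\bigl(\,H(x)\;\wedge\;B_{x}[Fy]\,\bigr)\;\rightarrow\;Fy
\]

A law of nature of this form is what makes the belief a completely reliable sign that the perceived object is F.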
This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise, to think, say, that magenta things look chartreuse to you and chartreuse things look magenta. If you fail to heed these reasons you have for thinking that your colour perception is awry, and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing's being magenta in such a way as to be a completely reliable sign, or to carry the information, that the thing is magenta.
One could fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified, but this enriched condition would still be insufficient. Suppose, for example, that in an experiment you have been given a drug that in nearly all people, but not in you, as it happens, causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, 'No, hold on a minute, the pill you took was just a placebo.' Suppose, further, that this last thing the experimenter tells you is false. Her telling you this gives you justification for believing of a thing that looks magenta to you that it is magenta, but the fact that the experimenter's last statement was false makes it the case that your true belief is not knowledge, even though it satisfies the causal condition.
Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both 'globally' and 'locally' reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
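Global reliability, so defined, is just a proportion, as the following minimal Python sketch illustrates (the data and names are invented for illustration):

```python
# Sketch: global reliability as the proportion of true beliefs a
# belief-forming process produces (illustrative data only).

def global_reliability(outcomes):
    """outcomes: booleans, True where the belief produced was true."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Five gauge readings, one of them mistaken:
print(global_reliability([True, True, True, False, True]))  # 0.8
```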
Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but not for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The relevant-alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept 'flat' and the concept 'empty' (Dretske, 1981). Both appear to be absolute concepts: a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of 'flat', there is a standard for what counts as a bump, and in the case of 'empty', there is a standard for what counts as a thing. To be flat is to be free of any relevant bumps, and to be empty is to be devoid of all relevant things.
Nevertheless, the human mind abhors a vacuum. When an explicit, coherent world-view is absent, it functions on the basis of a tacit one. A tacit world-view is not subject to critical evaluation, and it can easily harbour inconsistencies. Indeed, our tacit set of beliefs about the nature of reality is made of contradictory bits and pieces. The dominant component is a leftover from another period: the Newtonian 'clock universe' still lingers, and we cling to this old and tired model because we know of nothing else that can take its place. Our condition is that of a culture in the throes of a paradigm shift. A major paradigm shift is complex and difficult because a paradigm holds us captive: we see reality through it, as through coloured glasses, but we do not know that; we are convinced that we see reality as it is. Hence the appearance of a new and different paradigm is often incomprehensible. To someone raised believing that the Earth is flat, the suggestion that the Earth is spherical would seem preposterous: if the Earth were spherical, would not the poor antipodes fall 'down' into the sky?
Yet, as we face a new millennium, we are forced to face this challenge. The fate of the planet is in question, and it was brought to its present precarious condition largely because of our trust in the Newtonian paradigm. The Newtonian world-view has to go, and, if one looks carefully, the main features of the new, emergent paradigm can be discerned. The search for these features must contend with the influence of the fading paradigm, for all paradigms include subterranean realms of tacit assumptions, the influence of which outlasts adherence to the paradigm itself.
The first line of exploration concerns the 'weird' aspects of quantum theory. These aspects feel weird because they are inconsistent with the prevailing world-view, and the feeling should disappear when that world-view is replaced by a new one. If one believes that the Earth is flat, the story of Magellan's voyage is quite puzzling: how is it possible for a ship, travelling due west without changing direction, to arrive back at its place of departure? Once the flat-Earth paradigm is replaced by the belief that the Earth is spherical, the puzzle is instantly resolved.
The founders of relativity and quantum mechanics were deeply engaged with these questions, but none of them attempted to construct a philosophical system, even though the mystery at the heart of quantum theory called for a revolution in philosophical outlook. During the 1920s, when quantum mechanics reached maturity, Alfred North Whitehead began the construction of a full-blooded philosophical system based not only on science but on nonscientific modes of knowledge as well. The fading influence of the old paradigm goes well beyond its explicit claims: we tend to believe that when we wish to find out the truth about the universe, nonscientific modes of processing human experience can be ignored; poetry, literature, art, and music are all wonderful, but, in relation to the quest for knowledge of the universe, they are irrelevant. It was Whitehead who pointed out the fallacy of this assumption; in his system the building blocks of reality are not material atoms but 'throbs of experience'. Whitehead formulated his system in the late 1920s, and yet, as far as I know, the founders of quantum mechanics were unaware of it. It was not until 1963 that J. M. Burgers pointed out that Whitehead's philosophy accounts very well for the main features of the quanta, especially the 'weird' ones. Further questions follow: are some aspects of reality 'higher' or 'deeper' than others, and if so, what is the structure of such hierarchical divisions? What is our place in the universe? Finally, what is the relationship between our great aspirations and the lost realms of nature? An attempt to endow us with cosmological meaning in the old universe seems totally absurd; and yet that very universe is just a paradigm, not the truth, and when you reach its end you may be willing to join an alternative view which, surprisingly, restores such meaning, although in a post-postmodern context.
The philosophical implications of quantum mechanics are, inevitably, partly a subjective matter, and investigations of the interconnections between physics and belief have often been treated with hesitation within the Western tradition, from Plato to Plotinus onward. Some aspects of the presentation that follows express a consensus of the physical community; other aspects are shared by some and objected to (sometimes vehemently) by others; still other aspects express my own views and convictions. Writing this turned out to be more difficult than anticipated, and I found that a conversational mode would be helpful; I hope the conversations will prove not only illuminating but engaging to those who read them.
These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about local reliability and knowledge, it will not be simple.
The interesting thesis that counts as a causal theory of justification, in the meaning of 'causal theory' intended here, is the thesis that a belief is justified just in case it was produced by a type of process that is 'globally' reliable, that is, one whose propensity to produce true beliefs (which can be defined, to a good approximation, as the proportion of the beliefs it produces, or would produce were it used as much as opportunity allows, that are true) is sufficiently high. The guiding idea is that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F. P. Ramsey (1903-30), who made important contributions to mathematical logic, probability theory, the philosophy of science and economics. The Ramsey sentence of a theory is generated by replacing its theoretical terms with variables and existentially quantifying into the result: instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of the theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, while removing any implication that we know what the terms so treated mean. It leaves open the possibility of identifying the theoretical item with whatever best fits the description provided. Ramsey was also one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts; each has a different specific function in our intellectual economy. Ramsey was one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with the latter led to Wittgenstein's return to Cambridge and to philosophy in 1929.
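The Ramsey-sentence construction can be displayed schematically. If the theory's claims involving the term 'quark' are conjoined into a single sentence T(quark), the construction replaces the term with a variable and existentially quantifies over it:

\[
T(\mathrm{quark}) \;\leadsto\; \exists x\, T(x)
\]

The result says only that something plays the quark role, which is precisely the 'topic-neutral' structure described above.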
The most sustained and influential application of these ideas was in the philosophy of mind. Ludwig Wittgenstein (1889-1951), whom Ramsey persuaded that there remained philosophical work for him to do, was undoubtedly the most charismatic figure of 20th-century philosophy, living and writing with a power and intensity that frequently overwhelmed his contemporaries and readers. His early period is centred on the 'picture theory of meaning', according to which a sentence represents a state of affairs by being a kind of picture or model of it, containing elements corresponding to those of the state of affairs and a structure or form that mirrors the structure of the state of affairs it represents. All logical complexity is reduced to that of the propositional calculus, and all propositions are truth-functions of atomic or basic propositions.
In the later period the emphasis shifts dramatically to the actions of people and the role linguistic activities play in their lives. Thus, whereas in the “Tractatus” language is placed in a static, formal relationship with the world, in the later work Wittgenstein emphasizes its use in the context of standardized social activities of ordering, advising, requesting, measuring, counting, expressing concern for each other, and so on. These different activities are thought of as so many 'language games' that together make up a form of life. Philosophy typically ignores this diversity, and in generalizing and abstracting distorts the real nature of its subject-matter. In addition to the “Tractatus” and the “Investigations”, collections of Wittgenstein's work published posthumously include “Remarks on the Foundations of Mathematics” (1956), “Notebooks 1914-1916” (1961), “Philosophische Bemerkungen” (1964), “Zettel” (1967), and “On Certainty” (1969).
Clearly, there are many forms of reliabilism, just as there are many forms of foundationalism and coherentism. How is reliabilism related to these other two theories of justification? It is usually regarded as a rival, and this is apt in so far as foundationalism and coherentism traditionally focused on purely evidential relations rather than psychological processes. But reliabilism might also be offered as a deeper-level theory, subsuming some of the precepts of either foundationalism or coherentism. Foundationalism says that there are 'basic' beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that the basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Consequently, reliabilism could complement foundationalism and coherentism rather than compete with them.
Two further aspects of Ramsey's work deserve mention. In the theory of probability, he was the first to show how a 'personalist' theory could be developed, based on a precise behavioural notion of preference and expectation. And much of his work in the philosophy of mathematics was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'.
Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of nomic, counterfactual or other such 'external' relations between belief and truth. Closely allied is the nomic sufficiency account of knowledge, due primarily to Dretske (1971, 1981), A. I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that X's belief that 'p' qualifies as knowledge just in case X believes 'p' because of reasons that would not obtain unless 'p' were true, or because of a process or method that would not yield belief in 'p' if 'p' were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it did, unless there was a telephone before it; thus there is a reason or process that reliably guarantees the belief's being true. A related counterfactual approach says that X knows that 'p' only if there is no 'relevant alternative' situation in which 'p' is false but X would still believe that 'p'. One's justification or evidence for 'p' must be sufficient to eliminate all the alternatives to 'p', where an alternative to a proposition 'p' is a proposition incompatible with 'p'; that is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false. This element of our thinking about knowledge is exploited by sceptical arguments, which call our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement of eliminating every alternative is seldom, if ever, satisfied.
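The counterfactual core of this approach can be put schematically. The following is a reconstruction in the spirit of Nozick's 'sensitivity' condition, using the box-arrow for the counterfactual conditional ('if it were the case that . . ., it would be the case that . . .'):

\[
K_X(p) \;\text{ only if }\; \bigl(\neg p \;\Box\!\!\rightarrow\; \neg B_X(p)\bigr)
\]

That is, X knows that p only if, had p been false, X would not have believed that p. The belief that one is not seeing a cleverly disguised mule is exactly a belief for which this condition fails, which is how the sceptic exploits it.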
This conclusion conflicts with another strand in our thinking about knowledge, namely that we know many things. Thus, there is a tension in our ordinary thinking about knowledge: we believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.
If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
As for space, the classical questions include: Is space real, or is it some kind of mental construct or artefact of our ways of perceiving and thinking? Is it 'substantival' or purely 'relational'? According to substantivalism, space is an objective thing consisting of points or regions at which, or in which, things are located. Opposed to this is relationalism, according to which the only things that are real about space are the spatial (and temporal) relations between physical objects. Substantivalism was advocated by Clarke, speaking for Newton, and relationalism by Leibniz, in their famous correspondence, and the debate continues today. There is also the issue of whether the measures of space and time are objective, or whether an element of convention enters them. Here the influential analysis of David Lewis suggests that a regularity holds as a matter of convention when it solves a problem of co-ordination in a group: it is to the benefit of each member to conform to the regularity, provided the others do so. Any number of solutions to such a problem may exist; for example, it is to the advantage of each of us to drive on the same side of the road as others, but indifferent whether we all drive on the right or the left. One solution or another may emerge for a variety of reasons. It is notable that on this account conventions may arise naturally; they do not have to be the result of specific agreement. This frees the notion for use in thinking about such things as the origin of language or of political society.
Conventionalism, more generally, is any theory that magnifies the role of decisions, or free selection from among equally possible alternatives, in order to show that what appears to be objective or fixed by nature is in fact an artefact of human convention, similar to conventions of etiquette, grammar, or law. Thus one might suppose that moral rules owe more to social convention than to anything imposed from outside, or that supposedly inexorable necessities are in fact the shadow of our linguistic conventions. The disadvantage of conventionalism is that it must show that alternative, equally workable conventions could have been adopted, and this is often hard to establish: if we hold that some ethical norm such as respect for promises or property is conventional, we ought to be able to show that human needs would have been equally well satisfied by a system involving a different norm, and this may be difficult.
Paul Grice (1913-88) also suggested a convention directing participants in conversation to pay heed to an accepted purpose or direction of the exchange. Contributions made without paying this attention are liable to be rejected for reasons other than straightforward falsity: something true but unhelpful or inappropriate may meet with puzzlement or rejection. We can thus never infer from the fact that it would be inappropriate to say something in some circumstance that what would be said, were we to say it, would be false. This inference was frequently made in ordinary language philosophy, it being argued, for example, that since we do not normally say 'there seems to be a barn there' when there is unmistakably a barn there, it is false that on such occasions there seems to be a barn there.
There are two main views on the nature of theories. According to the 'received view', theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). A natural language, however, comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . .) and their meanings. An influential proposal is that this relationship is best understood by attempting to provide a 'truth definition' for the language, which will involve giving the contribution that terms and structures of different kinds make to the truth-conditions of sentences containing them.
An axiom is a proposition laid down as one from which we may begin, an assertion that we have taken as fundamental, at least for the branch of enquiry in hand. The axiomatic method is that of defining a theory by a set of such propositions together with 'proof procedures'. A classic puzzle concerns how a proof ever gets started. Suppose I have as premises (1) p and (2) p → q. Can I infer q? Only, it seems, if I am sure of (3) (p & (p → q)) → q. Can I then infer q? Only, it seems, if I am sure of (4) (p & (p → q) & ((p & (p → q)) → q)) → q. For each new axiom (N) I need a further axiom (N + 1) telling me that the set so far implies q, and the regress never stops. The usual solution is to treat a system as containing not only axioms but also rules of inference, allowing movement from the axioms. The rule 'modus ponens' allows us to pass from the first two premises to q. This puzzle, due to Charles Lutwidge Dodgson (1832-98), better known as Lewis Carroll, shows that it is essential to distinguish the two theoretical categories, although there may be choice about which principles to put in which category.
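Carroll's point is that modus ponens must live at a different level from the premises: it is a rule operating on formulas, not a further formula. A minimal Python sketch, with an ad hoc tuple representation of formulas, makes the distinction concrete:

```python
# Sketch: modus ponens as a rule of inference, not a further premise.
# Formulas are ad hoc nested tuples: ('->', A, B) reads 'A implies B'.

def modus_ponens(premise, conditional):
    """From A and ('->', A, B), return B. The rule licenses the move
    rather than appearing as another axiom, which is what stops
    Carroll's regress."""
    op, antecedent, consequent = conditional
    if op == "->" and antecedent == premise:
        return consequent
    raise ValueError("modus ponens does not apply")

print(modus_ponens("p", ("->", "p", "q")))  # 'q'
```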
A theory usually emerges as a body of (supposed) truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory (Hilbert, 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable, since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called axioms. Just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.
Traditionally (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few principles. These principles were taken to be either metaphysically prior or epistemologically prior. In the first sense, they were taken to be entities of such a nature that what exists is 'caused' by them. When the principles were taken as epistemologically prior, that is, as axioms, they were taken either to be epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or (an inclusive 'or') to be such that all truths do follow from them by deductive inference. Gödel (1984) showed, by treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in the class, would be too small to capture all of the truths.
The use of a model to test for the consistency of an axiomatized system is older than modern logic. Descartes's algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar mappings were used by mathematicians in the 19th century, for example, to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: the study of interpretations of formal systems. Proof theory studies relations of deducibility between formulae of a system as defined purely syntactically, that is, without reference to the intended interpretation of the calculus. Once the notion of an interpretation is in place, we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to ones that are false under the same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (written {A1 . . . An} ⊨ B: B is a semantic consequence of A1 . . . An if B is true in every interpretation in which they are all true). The central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only tautologies. There are many axiomatizations of the propositional calculus that are consistent and complete. Gödel proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
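For the propositional calculus, semantic consequence is decidable by brute force over truth tables, as in this minimal Python sketch (formulas are represented as functions from valuations to truth-values; the names are illustrative only):

```python
# Sketch: checking {A1, ..., An} |= B by enumerating all valuations.
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff every valuation making all the premises true also
    makes the conclusion true (no counter-interpretation exists)."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(a(v) for a in premises) and not conclusion(v):
            return False
    return True

# {p, p -> q} |= q : modus ponens is semantically valid.
p = lambda v: v["p"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
q = lambda v: v["q"]
print(entails([p, p_implies_q], q, ["p", "q"]))  # True
```

Soundness and completeness then say that this semantic test and the proof-theoretic relation ⊢ pick out exactly the same arguments.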
The propositional calculus is the logical calculus whose expressions are letters representing sentences or propositions, together with constants representing operations on those propositions to produce others of higher complexity. The operations include conjunction, disjunction, material implication and negation (although these need not all be primitive). Propositional logic was partially anticipated by the Stoics but reached maturity only with the work of Frege, Russell, and Wittgenstein.
Frege introduced the concept of a propositional function: a function taking a number of names as arguments and delivering one proposition as the value. The idea is that 'x loves y' is a propositional function, which yields the proposition 'John loves Mary' from those two arguments (in that order). A propositional function is therefore roughly equivalent to a property or relation. In Principia Mathematica, Russell and Whitehead take propositional functions to be the fundamental kind of function, since the theory of descriptions could be taken as showing that other expressions denoting functions are incomplete symbols.
Keep in mind the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains the statement is true, and otherwise false. Statements may be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central norm governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, and there remains the issue of whether falsity is the only way of failing to be true.
A presupposition, informally, is any suppressed premise or background framework of thought necessary to make an argument valid or a position tenable. More formally, a presupposition has been defined as a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus, if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge of Robin George Collingwood (1889-1943), any proposition capable of truth or falsity stands on a bed of 'absolute presuppositions' which are not themselves capable of truth or falsity, since a system of thought will contain no way of approaching such a question. It was suggested by Peter Strawson (1919-), in opposition to Russell's theory of 'definite' descriptions, that 'there exists a King of France' is a presupposition of 'the King of France is bald', the latter being neither true nor false if there is no King of France. It is, however, a little unclear whether the idea is that no statement at all is made in such a case, or whether a statement is made but fails to be either true or false. The former option preserves classical logic, since we can still say that every statement is either true or false, but the latter does not, since in classical logic the law of 'bivalence' holds: every statement is either true or false. The introduction of presupposition therefore means either that a third truth-value is found, 'intermediate' between truth and falsity, or that classical logic is preserved but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth and falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, examples like the one given are equally well handled by regarding the overall sentence as false when the existence claim fails.
If a proposition is true it is said to take the truth-value true, and if false the truth-value false. The idea behind the term is the analogy between assigning a propositional variable one or other of these values, as in a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called many-valued logics. A truth-function of a number of propositions or sentences is a function of them that has a definite truth-value depending only on the truth-values of the constituents. Thus (p & q) is a combination whose truth-value is true when 'p' is true and 'q' is true, and false otherwise; ¬p is a truth-function of 'p', false when 'p' is true and true when 'p' is false. The way in which the value of the whole is determined by the combinations of values of constituents is presented in a truth table.
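A truth table can be generated mechanically. This short Python sketch prints the table for the conjunction (p & q) and the negation ¬p described above:

```python
# Sketch: the truth table for (p & q) and (not p).
from itertools import product

print(f"{'p':8}{'q':8}{'p & q':8}{'not p'}")
for p, q in product([True, False], repeat=2):
    print(f"{str(p):8}{str(q):8}{str(p and q):8}{str(not p)}")
```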
Truths of fact, by contrast, cannot be reduced to any identity, and our only way of knowing them is empirically, by reference to the facts of the actual world.
A proposition is knowable a priori if it can be known without experience of the specific course of events in the actual world. It may, however, be allowed that some experience is required to acquire the concepts involved in an a priori proposition. Something is knowable only empirically if it cannot be known a priori. The distinction marks one of the fundamental problem areas of epistemology. The category of a priori propositions is highly controversial, since it is not clear how pure thought, unaided by experience, can give rise to any knowledge at all, and it has always been a concern of empiricism to deny that it can. The two great areas in which it seems to do so are logic and mathematics, so empiricists have commonly tried to show either that these are not areas of real, substantive knowledge, or that, in spite of appearances, the knowledge we have in these areas is dependent on experience. The former line tries to show that all a priori propositions are in some sense trivial, or analytic, or matters of notation and conventions of language. The latter approach is particularly associated with Quine, who denies any significant split between propositions traditionally thought of as a priori and other deeply entrenched beliefs that occur in our overall view of the world.
Another contested category is that of a priori concepts, supposed to be concepts that cannot be 'derived' from experience but which are presupposed in any mode of thought about the world: time, substance, causation, number, and the self are candidates. The need for such concepts, and the nature of the substantive a priori knowledge to which they give rise, is the central concern of Kant's Critique of Pure Reason.
Likewise, since their denial does not involve a contradiction, truths of fact are merely contingent: they hold of the actual world, but not of every possible one. Some examples are 'Caesar crossed the Rubicon' and 'Leibniz was born in Leipzig', as well as propositions expressing correct scientific generalizations. In Leibniz's view, truths of fact rest on the principle of sufficient reason: for each there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God. The foundation of his thought is the conviction that to each individual there corresponds a complete notion, knowable only to God, from which is deducible all the properties possessed by the individual at each moment in its history. It is contingent that God actualizes the individual that meets such a concept, but his doing so is explicable by the principle of 'sufficient reason', whereby God had to actualize just that possibility in order for this to be the best of all possible worlds. This thesis was subsequently lampooned by Voltaire (1694-1778), although Leibniz himself was prepared to take refuge in ignorance over such matters as the nature of the soul, or the way to reconcile evil with divine providence.
The principle of sufficient reason is sometimes described as the principle that nothing can be so without there being a reason why it is so. But the reason has to be of a particularly potent kind: eventually it has to ground contingent facts in necessities, and in particular in the reason an omnipotent and perfect being would have for actualizing one possibility rather than another. Among the consequences of the principle is Leibniz's relational doctrine of space, since if space were an infinite box there could be no reason for the world to be at one point in it rather than another, and God's placing it at any point would violate the principle. In Abelard (1079-1142), as in Leibniz, the principle eventually forces the recognition that the actual world is the best of all possibilities, since anything else would be inconsistent with the creative power that actualizes possibilities.
If truth consists in concept containment, then it seems that all truths are analytic and hence necessary; and if they are all necessary, surely they are all truths of reason. Leibniz's answer is that not every truth can be reduced to an identity in a finite number of steps; in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. While this may entail that we cannot prove such propositions a priori, it does not appear to show that they could have been false; intuitively, it seems a better ground for supposing that they are necessary truths of a special sort. A related question arises from the idea that truths of fact depend on God's decision to create the best world: if it is part of the concept of this world that it is best, how could its existence be other than necessary? One answer is that its existence is only hypothetically necessary, i.e., it follows from God's decision to create the world; but God is necessary, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.
Eliminativism is the view that the terms in which we think of some area are sufficiently infected with error for it to be better to abandon them than to continue to try to give coherent theories of their use. Eliminativism should be distinguished from scepticism, which claims that we cannot know the truth about some area; eliminativism claims rather that there is no truth there to be known, in the terms in which we currently think. An eliminativist about theology simply counsels abandoning the terms or discourse of theology, and that will include abandoning worries about the extent of theological knowledge.
Eliminativists in the philosophy of mind counsel abandoning the whole network of terms (mind, consciousness, self, qualia) that usher in the problems of mind and body. Sometimes the argument for doing this is that we should wait for a supposed future understanding of ourselves, based on cognitive science and better than any our current mental descriptions provide; sometimes it is supposed that physicalism shows that no mental description of ourselves could possibly be true.
Sceptical tendencies emerged in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverances of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguished between Pyrrhonistic and excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism that accepts everyday or commonsense beliefs (not as the deliverances of reason, but as due more to custom and habit), while remaining duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase 'Cartesian scepticism' is sometimes used, Descartes himself was not a sceptic; rather, in the method of doubt, he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of 'clear and distinct' ideas, not far removed from the phantasia kataleptiké of the Stoics.
Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.
Descartes's theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is eventually found in the celebrated 'Cogito ergo sum': I think, therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes highly dubious proofs of the existence of a benevolent deity, certified by 'clear and distinct perception'. This has not met general acceptance: as Hume drily put it, 'to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit'.
In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of the causal connection between the two substances. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought, as reflected in Leibniz, is that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
The self, conceived as Descartes presents it in the first two Meditations, is aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self of ‘I-ness’ that we are tempted to imagine as a simple unique thing that makes up our essential identity. Descartes’s view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and by most subsequent philosophers of mind.
Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects that we normally think affect our senses.
He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and ‘it is prudent never to trust entirely those who have deceived us even once’; he cited such instances as the straight stick that looks bent in water, and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes’s contemporaries pointed out that since such errors become known as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would ‘lead the mind away from the senses’. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g., the belief that I am sitting here by the fire, wearing a winter dressing gown.
Descartes came to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton’s Principia in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
Epistemology is the theory of knowledge. Its central questions include the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.
Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who discovered his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, rather as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
Still, in spite of these concerns, there remains the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts, a problem that began with Plato’s view in the “Theaetetus” that knowledge is true belief plus some logos. Naturalized epistemology, by contrast, is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, or proof against scepticism, or even apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for ‘external’ or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.
The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too much confidence in the possibility of a purely a priori ‘first philosophy’, a viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. Such a standpoint now seems to many philosophers to be a fantasy. The more modest task actually adopted at various historical stages of investigation into different areas has been not so much to criticize but to systematize the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often come to seem more like political bids for ascendancy within a discipline.
This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin’s theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some point, for example, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
Chance can influence the outcome at each stage: first, in the creation of genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual’s actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, if we could rerun the process again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean “Does natural selection always take the best path for the long-term welfare of a species?”, the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean “Does natural selection create every adaptation that would be valuable?”, the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not guarantee that it will evolve.
The three major components of the model of natural selection are variation, selection, and retention. According to Darwin’s theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that perform useful functions are selected, while those that do not are not selected. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have (the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism); the environment provides the filter of selection; and reproduction provides the retention. Fit is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features that are better adapted. Evolutionary epistemology applies this blind-variation-and-selective-retention model to the growth of scientific knowledge and to human thought processes overall.
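The scheme can be made concrete with a minimal illustrative sketch, not drawn from any of the authors cited here; the bit-string ‘traits’, the fitness function, and the mutation rate are all hypothetical choices made only for illustration:

```python
import random

random.seed(0)

def fitness(trait):
    # Hypothetical environment: the more 1-bits, the better adapted.
    return sum(trait)

def mutate(trait, rate=0.05):
    # Blind variation: each bit may flip regardless of its effect on fitness.
    return [bit ^ (random.random() < rate) for bit in trait]

# Start with a random population of 50 twenty-bit "traits".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Selection: the environment filters out the less adapted half.
    survivors = sorted(population, key=fitness, reverse=True)[:25]
    # Retention: survivors reproduce, passing on their traits with blind mutation.
    population = [mutate(random.choice(survivors)) for _ in range(50)]

# Mean fitness rises even though no variation was aimed at any goal.
print(sum(fitness(t) for t in population) / len(population))
```

The point of the sketch is only that mutation is applied without regard to its consequences; all the apparent ‘direction’ comes from the filtering and retention steps.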
The parallel between biological evolution and conceptual or ‘epistemic’ evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology treats biological evolution as the main cause of the growth of knowledge. On this view, called the ‘evolution of cognitive mechanisms program’ by Bradie (1986) and the ‘Darwinian approach to epistemology’ by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).
On the analogical version of evolutionary epistemology, called the ‘evolution of theories program’ by Bradie (1986) and the ‘Spencerian approach’ (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of that mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the partial fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the metaphorical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism is the correct theory of the origin of species.
Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come, if only implicitly, from psychology and cognitive science rather than evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that ‘if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom’, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one’s knowledge beyond what one knows, one cannot proceed on the basis of what one already knows, but, more interestingly, it also makes the synthetic claim that when expanding one’s knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is synthetic, not analytic; if it were analytic, rival epistemologies would be self-contradictory, which they are not. Campbell is right that evolutionary epistemology does have the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).
Two main issues dominate the literature: realism (what metaphysical commitment does an evolutionary epistemologist have to make?) and progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called ‘hypothetical realism’, a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Many others have argued that evolutionary epistemologists must give up the ‘truth-tropic’ sense of progress, because a natural selection model is in essence non-teleological; as an alternative, following Kuhn (1970), they embrace a non-teleological sense of progress in conjunction with evolutionary epistemology.
Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978, pp. 613-16, and Ruse, 1986, ch. 2). Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves the product of blind variation and selective retention. Further, Stein and Lipton argue that heuristics are analogous to biological pre-adaptations, evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The fact that heuristics guide epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.
Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986, and Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those which are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs are innate or if our non-innate beliefs are not the result of blind variation. An appeal to innateness is thus not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).
Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.
What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.
For example, Armstrong (1973) proposed that a belief of the form ‘This perceived object is F’ is non-inferential knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’; that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘χ’ and perceived object ‘y’, if ‘χ’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. (Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.)
Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is ‘globally’ and ‘locally’ reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981 and Cohen, 1988). That is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition; rather, our evidence need only eliminate all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards are determined in such a way that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic’s alternative does not lead to a sceptical result, for knowledge requires only the elimination of the relevant alternatives. The relevant alternatives view thus preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.
The interesting thesis that counts as a causal theory of justification (in the sense of ‘causal theory’ intended here) is that a belief is justified just in case it was produced by a type of process that is ‘globally’ reliable, that is, whose propensity to produce true beliefs, definable (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
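Schematically, and in our own notation rather than Goldman’s, the proposal can be put as follows, where T is the relevant type of belief-producing process and θ is some suitably high threshold:

```latex
\[
  R(T) \;=\; \frac{\#\{\text{true beliefs that } T \text{ produces or would produce}\}}
                  {\#\{\text{beliefs that } T \text{ produces or would produce}\}},
  \qquad
  \text{a belief is justified iff } R(T) \ge \theta .
\]
```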
This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else. Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.
(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when recently I believed that the telephone was ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, and the other concurrent brain states on which the production of the belief depended; it does not include any events in the telephone, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. It does seem intuitively plausible that the process on which the justification of a belief depends should be restricted to internal ones proximate to the belief. Why? Goldman does not tell us. One answer that some philosophers might give is that it is because a belief’s being justified at a given time can depend only on facts directly accessible to the believer’s awareness at that time (for, if a believer ought to hold only beliefs that are justified, she must be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman’s answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.
(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your current belief that you see a book before you. One very broad type to which that process belongs would be specified by ‘coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one’s sense-organs’. A narrower type to which that same process belongs would be specified by ‘coming to a belief as to what one sees as a result of activation of the nerve endings in one’s retinas’. A still narrower type would be given by inserting in the last specification a description of a particular pattern of activation of the retina’s particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?
If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs where the object seen is far away and seen only briefly, which are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it turn out that an obviously unjustified but true belief was produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough details about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type likely to have only that one instance, and which is therefore 100 percent reliable. Goldman conjectures (1986) that the relevant process type is ‘the narrowest type that is causally operative’. Presumably, a feature of the process producing a belief was causally operative in producing it just in case, had some alternative feature been present instead, it would not have led to that belief. (We need to say ‘some’ here rather than ‘any’, because, for example, when I see an oak or pine tree, the particular tree-like shape of my retinal image is clearly causally operative in producing my belief that what I see is a tree, even though there are alternative shapes, for example ‘pinish’ or ‘birchish’ ones, that would have produced the same belief.)
(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon (a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory) the perceptual and memory beliefs of those inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that if the actual world is a demon world then our perceptual and memory beliefs are all unjustified.
Goldman’s solution (1986) is that the reliability of the process types is to be gauged by their performance in ‘normal’ worlds, that is, worlds consistent with ‘our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it’. This gives the intuitively right results for the problem cases just considered, but it yields an implausibly relativistic account of justification: if there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.
However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief’s being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state ‘B’ always causes one to believe that one is in brain-state ‘B’. Here the reliability of the belief-producing process is perfect, but ‘we can readily imagine circumstances in which a person goes into brain-state ‘B’ and therefore has the belief in question, though this belief is by no means justified’ (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau’s forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow, but I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau’s prediction and of its evidential force, and I could cite it to rebut any charge that I ought not to be holding the belief. Indeed, given my justification, and given that there is nothing untoward about the weather bureau’s prediction, my belief, if true, can be counted knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.
Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.
One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles; the simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are not obviously the same. It is plausible to demand clarification of what makes one theory simpler or more parsimonious than another before the justification of these methodological maxims can be addressed.
If we set this descriptive problem to one side, the major normative problem is as follows: what reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In “Principia,” Newton laid down as his first Rule of Reasoning in Philosophy that ‘nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes’. Leibniz hypothesized that the actual world obeys simple laws because God’s taste for simplicity influenced his decision about which world to actualize.
The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the ‘certain principles of physical reality’, said Descartes, ‘not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth’. Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.
Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of the mathematical ideal as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology and accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.
At the beginning of the nineteenth century, Pierre-Simon LaPlace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by observing its epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.
LaPlace is recognized for eliminating not only the theological component of classical physics but the ‘entire metaphysical component’ as well. The epistemology of science requires, he said, that we proceed by inductive generalizations from observed facts to hypotheses that are ‘tested by observed conformity of the phenomena’. What was unique about LaPlace’s view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in LaPlace’s view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.
As this view of hypotheses and the truths of nature as quantities was extended in the nineteenth century to a mathematical description of phenomena like heat, light, electricity, and magnetism, LaPlace’s assumptions about the actual character of scientific truths seemed correct. This progress suggested that if we could remove all thoughts about the ‘nature of’ or the ‘source of’ phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.
The seventeenth-century view of physics as a philosophy of nature or as natural philosophy was displaced by the view of physics as an autonomous science that was ‘the science of nature’. This view, which was premised on the doctrine of positivism, promised to subsume all of nature with a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.
Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call ‘scientific’ and makes no substantive assumption about the way the world is.
A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position. Two examples must suffice here; see Hesse (1969) for summaries of other proposals. Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories: he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable, and he argues for a general connection between simplicity and high probability.
Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper’s or Quine’s arguments.
Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, the two proposals have something in common. They assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connection between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem. The same view applies to the idea of prior probability (or prior plausibility). If one hypothesis is chosen as more plausible than another even though they are equally supported by current observations, this must be due to an empirical background assumption.
Principles of parsimony and simplicity mediate the epistemic connection between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once this theory is brought out into the open, the principle of parsimony becomes entirely dispensable (Sober, 1988).
This ‘local’ approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.
An inference is a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization has recurred throughout the literature with only inessential variations. Desiring a better characterization of inference is natural. Yet attempts to do so by constructing a fuller psychological explanation fail to capture the grounds on which inference is objectively valid, a point elaborately made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculi or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.
Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization here, is by no means a resolved philosophical problem.
The rules of inference give rise to a puzzle raised by Lewis Carroll: the Zeno-like problem of how a ‘proof’ ever gets started. Suppose I have as premises (i) ‘p’ and (ii) p ➝ q. Can I infer ‘q’? Only, it seems, if I am sure of (iii) (p & p ➝ q) ➝ q. Can I then infer ‘q’? Only, it seems, if I am sure that (iv) (p & p ➝ q & (p & p ➝ q) ➝ q) ➝ q. For each new axiom (N) I need a further axiom (N + 1) telling me that the set so far implies ‘q’, and the regress never stops. The usual solution is to treat a system as containing not only axioms, but also rules of inference, allowing movement from the axioms. The rule ‘modus ponens’ allows us to pass from premises (i) and (ii) to ‘q’. Carroll’s puzzle shows that distinguishing these two theoretical categories is essential, although there may be choice about which theses to put in which category.
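In schematic form (our notation, not Carroll’s), the regress runs:

```latex
\[
  \begin{aligned}
  &\text{(i)}\;\; p \qquad \text{(ii)}\;\; p \rightarrow q \\
  &\text{(iii)}\;\; \bigl(p \wedge (p \rightarrow q)\bigr) \rightarrow q \\
  &\text{(iv)}\;\; \Bigl(p \wedge (p \rightarrow q) \wedge \bigl((p \wedge (p \rightarrow q)) \rightarrow q\bigr)\Bigr) \rightarrow q \qquad \dots
  \end{aligned}
\]
```

whereas modus ponens is not a further premise at all, but a licence to move: from p and p ➝ q, infer q.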
Traditionally, a proposition that is not a ‘conditional’ is termed ‘categorical’. As with the ‘affirmative’ and ‘negative’, modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?) may be equivalent to ‘if X is given a range of tasks, she does them better than many people’ (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
If ‘p’ is a necessary condition of ‘q’, then ‘q’ cannot be true unless ‘p’ is true; if ‘p’ is a sufficient condition of ‘q’, then the truth of ‘p’ guarantees the truth of ‘q’. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that ‘A’ causes ‘B’ may be interpreted to mean that ‘A’ is itself a sufficient condition for ‘B’, or that it is only a necessary condition for ‘B’, or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
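The two conditions are conveniently captured by converse conditionals (a standard rendering, not specific to any author cited here):

```latex
\[
  p \text{ is sufficient for } q: \quad p \rightarrow q,
  \qquad
  p \text{ is necessary for } q: \quad q \rightarrow p
  \;\; (\text{equivalently } \neg p \rightarrow \neg q).
\]
```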
In any proposition of the form ‘if p then q’, the condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is that of ‘material implication’, which merely tells us that either ‘not-p’ or ‘q’ is the case. Stronger conditionals include elements of ‘modality’, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.
It follows from the definition of ‘strict implication’ that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to ‘q follows from p’, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
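In the usual notation (ours, not tied to any particular author here), material and strict implication, and the two ‘paradoxes’ just mentioned, can be rendered as:

```latex
\[
  \text{material: } p \supset q \;\equiv\; \neg p \vee q,
  \qquad
  \text{strict: } \Box(p \supset q),
\]
\[
  \Box q \;\Longrightarrow\; \Box(p \supset q)
  \quad\text{(a necessary proposition is strictly implied by anything)},
\]
\[
  \neg\Diamond p \;\Longrightarrow\; \Box(p \supset q)
  \quad\text{(an impossible proposition strictly implies anything)}.
\]
```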
The Humean problem of induction can be set up as follows. Suppose that there is some property ‘A’ pertaining to an observational or experimental situation, and that out of a large number of observed instances of ‘A’, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property ‘B’. Suppose further that the background circumstances not specified in these descriptions have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of ‘B’s among ‘A’s or concerning causal or nomological connections between instances of ‘A’ and instances of ‘B’.
In this situation, an ‘enumerative’ or ‘instantial’ inductive inference would move from the premise that m/n of observed ‘A’s are ‘B’s to the conclusion that approximately m/n of all ‘A’s are ‘B’s. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of ‘A’s should be taken to include not only unobserved ‘A’s and future ‘A’s, but also possible or hypothetical ‘A’s. (An alternative conclusion would concern the probability or likelihood of the next observed ‘A’ being a ‘B’.)
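Set out explicitly, in a standard premise-over-conclusion notation, the schema is:

```latex
\[
  \frac{\,m/n \text{ of the observed } A\text{'s have been } B\text{'s}\,}
       {\text{approximately } m/n \text{ of all } A\text{'s are } B\text{'s}}
\]
```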
The traditional or Humean problem of induction, often referred to simply as ‘the problem of induction’, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premisses are true, or even that their chances of truth are significantly enhanced?
Hume’s discussion of this issue deals explicitly only with cases where all observed ‘A’s are ‘B’s, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma, sometimes referred to as ‘Hume’s fork’.
Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas, or ‘experimental’, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that ‘the course of nature may change’, that an order that was observed in the past will not continue in the future. But it cannot be the latter either, since any empirical argument would appeal to the past success of such reasoning, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes, there can be no such reasoning (1748).
An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume’s argument is then that no such justification is possible: the principle cannot be justified a priori, because its denial involves no contradiction; nor can it be justified by appeal to its having been true in past experience without obviously begging the question.
The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume’s argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (i) pragmatic justifications or ‘vindications’ of induction, mainly developed by Hans Reichenbach (1891-1953), and (ii) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume’s dilemma by arguing either (iii) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (iv) that an a priori justification of induction is possible after all.
(1) Reichenbach’s view is that induction is best regarded, not as a form of inference, but rather as a ‘method’ for arriving at posits regarding, for instance, the proportion of ‘A’s that are ‘B’s. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some degree of approximation, the true proportion, and then continually correct that initial posit as new information comes in.
The gambler’s bet is normally an ‘appraised posit’, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a ‘blind posit’: we do not know the chances that it will succeed, or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing that the proportion of ‘A’s that are ‘B’s converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.
What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach’s account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of ‘A’s that are ‘B’s. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach’s claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.
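The analytic point can be made explicit in our own formulation. Let m_n/n be the observed proportion after n cases; the so-called straight rule posits this value at each stage. If the limit exists, the convergence of the posits to it is simply the definition of a limit:

```latex
\[
  \text{posit}_n \;=\; \frac{m_n}{n},
  \qquad
  \lim_{n \to \infty} \frac{m_n}{n} = L
  \;\;\Longrightarrow\;\;
  \forall \varepsilon > 0 \;\exists N \;\forall n > N :
  \left| \text{posit}_n - L \right| < \varepsilon .
\]
```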
This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other ‘methods’ for arriving at posits for which the same sort of defence can be given: methods that yield the same result as the inductive method in the long run but differ arbitrarily in the short run. Despite various efforts, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run. All the same, any actual application of inductive results always takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach’s response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach’s own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.
An approach to induction resembling Reichenbach’s, in claiming that particular inductive conclusions are posits or conjectures rather than the conclusions of cogent inferences, is offered by Popper. However, Popper’s view is even more overtly sceptical: it amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.
(2) The ordinary language response to the problem of induction has been advocated by many philosophers. Strawson, for example, claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that inductive conclusions be shown to follow deductively from the inductive premises. Such a demand cannot, of course, be met, but only because it is illegitimate: inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.
The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent and empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and are embodied in ordinary usage.
Understood in this way, Strawson’s response to the problem of inductive reasoning does not speak to the central issue raised by Humean scepticism: the issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves ‘reasonable’ and our evidence ‘strong’, according to our accepted community standards. Nevertheless, on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.
(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level in which induction is applied to things other than arguments; a second level in which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level in which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the argument at each level by appeal to an argument at the next level.
One problem with this sort of move is that even if circularity is avoided, the movement to higher and higher levels will clearly eventually fail simply for lack of evidence: a level will be reached at which there have not been enough successful inductive arguments below it to provide a basis for inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.
(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: it alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1912, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.
Many of the reasons for this pessimistic view depend on general epistemological theses about the possibility or nature of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists have claimed, a priori justification is limited to analytic claims, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true then the conclusion is likely to be true does not fit the standard conceptions of ‘analyticity’. A consideration of these matters is beyond the scope of the present discussion.
There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating in Hume but since adopted by very many others, that an a priori defence of induction would have to involve ‘turning induction into deduction’, i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori highly unlikely that such a premise should be true and the corresponding conclusion false.
Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of A’s that are B’s, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that there can therefore be no a priori reason for thinking that such a conclusion is true. Nevertheless, there is a subtle mistake here: that a chaotic world is a priori neither impossible nor unlikely in the absence of any evidence does not show that such a world is not a priori unlikely, and a world containing such-and-such a regularity a priori somewhat likely, relative to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed A’s are B’s: an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).
Goodman’s ‘new riddle of induction’ asks us to suppose that before some specific time ‘t’ (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term ‘grue’ to mean ‘green if examined before t and blue if examined after t’, then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so, and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.
The obvious alternative suggestion is that ‘grue’ and similar predicates do not correspond to genuine, purely qualitative properties in the way that ‘green’ and ‘blue’ do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: ‘grue’ may be defined in terms of ‘green’ and ‘blue’, but ‘green’ can equally well be defined in terms of ‘grue’ and ‘bleen’ (where ‘bleen’ means blue if examined before t and green if examined after t).
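This symmetry can be checked mechanically. The following is only a minimal sketch (the function names, the colour strings, and the choice of the year 2000 as t are illustrative assumptions, not anything in Goodman’s text):

T = 2000  # the critical time t; a hypothetical choice

def grue(colour, examined_at):
    # grue: green if examined before T, blue if examined after
    return colour == "green" if examined_at < T else colour == "blue"

def bleen(colour, examined_at):
    # bleen: blue if examined before T, green if examined after
    return colour == "blue" if examined_at < T else colour == "green"

def green_via_grue(colour, examined_at):
    # 'green' defined from 'grue' and 'bleen': grue if examined before T,
    # bleen if examined after -- mirroring how 'grue' was built from
    # 'green' and 'blue'
    return grue(colour, examined_at) if examined_at < T else bleen(colour, examined_at)

# The roundabout definition agrees with the direct test in every case:
for colour in ("green", "blue"):
    for year in (1999, 2001):
        assert green_via_grue(colour, year) == (colour == "green")

Formally, then, neither pair of predicates is privileged over the other; whatever distinguishes ‘green’ from ‘grue’ must come from somewhere other than the definitions themselves.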
The ‘grue’ paradox demonstrates the importance of categorization: something is grue if it is examined before future time ‘t’ and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not infer that all emeralds are grue, for ‘grue’ is unprojectible and cannot transmit credibility from known to unknown cases. Only projectible predicates are right for induction. Goodman considers entrenchment the key to projectibility: having a long history of successful projection, ‘green’ is entrenched; lacking such a history, ‘grue’ is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than its rivals’. Past successes do not guarantee future ones, and induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: of the possible projections from our evidence class, the one that fits with past practice enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors’, and its cognitive utility is greater.
For a better understanding of induction, note that the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc . . ., where a, b, c are all of some kind G, it is inferred that G’s from outside the sample, such as future G’s, will be F, or perhaps that all G’s are F. In this way, finding that this person and that person have deceived them, children may infer that everyone is a deceiver. Different but similar inferences run from the past possession of a property by some object to the same object’s future possession of the same property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: all objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.
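Written out schematically (a reconstruction, not notation from the text itself), simple enumeration has the form

$$ \frac{Ga \wedge Fa, \quad Gb \wedge Fb, \quad Gc \wedge Fc, \;\ldots}{\therefore\ \forall x\,(Gx \rightarrow Fx)} $$

with a weaker variant concluding only that the next G observed will be F. The premises can all be true while the conclusion is false, which is exactly what makes the inference ampliative rather than deductive.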
The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason and merely reflects a habit or custom of the mind. Hume was not therefore sceptical about the propriety of inductive inference itself, but about the role of reason in either explaining or justifying it. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties for which the inference is plausible (often called projectible properties) from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.
Nevertheless, the fundamental problem remains that any experience shows us only events occurring within a very restricted part of a vast spatial and temporal order, about which we then come to believe things.
Bound up with this is confirmation theory: the study of the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the 20th century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his “Logical Foundations of Probability” (1950). Carnap’s idea was that the required measure would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the proportion in which the evidence alone holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left open by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement: it demands that we can put a measure on the ‘range’ of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
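Carnap’s proposal can be put compactly: the degree of confirmation of a hypothesis h by evidence e is

$$ c(h, e) \;=\; \frac{m(h \wedge e)}{m(e)} , $$

where m is a measure defined over the logically possible state-descriptions of the language. As a toy illustration (the example is ours, not Carnap’s): with two coin tosses there are four state-descriptions, HH, HT, TH, TT; if each receives measure 1/4, then for the evidence e = ‘the first toss landed heads’ and the hypothesis h = ‘both tosses landed heads’, c(h, e) = (1/4)/(1/2) = 1/2.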
Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what counts as plausible science at a given time.
A paradox arises when a set of apparently incontrovertible premises leads to unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Somewhat loosely, a paradox is a compelling argument from apparently acceptable premises to an unacceptable conclusion: more strictly speaking, a paradox is specified to be a sentence that is true if and only if it is false. A characteristic example would be: ‘The displayed sentence is false.’
It is easy to see that this sentence is false if true, and true if false; so a paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premises about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the ‘surprise examination paradox’: a teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible: ‘The test cannot be on Friday, the last day of the week, because then it would not be a surprise; we would know the day of the test on Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday by the previous reasoning. The remaining days can be eliminated in the same manner.’
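The student’s argument is a mechanical backward elimination, and can be sketched as such. The code below is only an illustration of the reasoning’s structure, under the paradox-generating assumption that a test whose day could be deduced the evening before would be no surprise:

days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

possible = list(days)   # days on which the test could still fall
eliminated = []
# Work backwards: at each step the latest remaining candidate would be
# predictable once all earlier days had passed without a test, so the
# student strikes it from the list.
while possible:
    last = possible.pop()    # the latest day still in play
    eliminated.append(last)  # ...would be no surprise, so rule it out
print("Ruled out, in order:", ", ".join(eliminated))
# Every day gets ruled out -- and yet a test given on, say, Wednesday
# would in fact surprise the student. That mismatch is the paradox.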