Many, though not all, adherents of the received view allow for explanation by subsumption under statistical laws. Hempel (1965) offers as an example the case of a man who recovered quickly from a streptococcus infection as a result of treatment with penicillin. Although not all strep infections clear up quickly under this treatment, the probability of recovery in such cases is high, and this is sufficient for legitimate explanation, according to Hempel. This example conforms to the inductive-statistical (I-S) model. Such explanations are viewed as arguments, but they are inductive rather than deductive. In these instances the explanation confers high inductive probability on the explanandum. An explanation of a particular fact satisfying either the D-N or I-S model is an argument to the effect that the fact in question was to be expected by virtue of the explanans.
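Schematically, Hempel's example fits the I-S pattern as follows (a standard rendering of his schema, with 'j' for the particular patient, 'S' for having a streptococcus infection, 'P' for being treated with penicillin, and 'R' for quick recovery):

$$
\begin{array}{l}
p(R \mid S \wedge P) \text{ is close to } 1\\
S_j \wedge P_j\\
\hline
R_j
\end{array}
\qquad [\text{with high inductive probability}]
$$

The line marks an inductive, not deductive, step: the premisses confer high probability on the conclusion without entailing it.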
The received view has been subjected to strenuous criticism by adherents of the causal/mechanical approach to scientific explanation (Salmon 1990). Many objections to the received view were engendered by the absence of causal constraints (due largely to worries about Hume's critique) on the D-N and I-S models. Beginning in the late 1950s, Michael Scriven advanced serious counter-examples to Hempel's models; he was followed in the 1960s by Wesley Salmon and in the 1970s by Peter Railton. According to this causal/mechanical view, one explains phenomena by identifying causes (a death is explained as resulting from a massive cerebral haemorrhage) or by exposing underlying mechanisms (the behaviour of a gas is explained in terms of the motion of constituent molecules).
A unification approach to explanation carries with it the basic idea that we understand our world more adequately to the extent that we can reduce the number of independent assumptions we must introduce to account for what goes on in it. Accordingly, we understand phenomena to the degree that we can fit them into an overall world picture or Weltanschauung. In order to serve in scientific explanation, the world picture must be scientifically well founded.
During the past half-century much philosophical attention has been focussed on explanation in science and in history. Considerable controversy has surrounded the question of whether historical explanation must be scientific, or whether history requires explanations of different types. Many diverse views have been articulated; the foregoing brief survey does not exhaust the variety (Salmon, 1990).
In everyday life we encounter many types of explanation, in addition to those already mentioned, which appear not to raise philosophical difficulties. Prior to take-off a flight attendant explains how to use the safety equipment on the aeroplane. In a museum the guide explains the significance of a famous painting. A mathematics teacher explains a geometrical proof to a bewildered student. A newspaper story explains how a prisoner escaped. Additional examples come easily to mind; the main point is to remember the great variety of contexts in which explanations are sought and given.
Another item of importance to epistemology is the widely held notion that non-demonstrative inferences can be characterized as inference to the best explanation. Given the variety of views on the nature of explanation, this popular slogan can hardly provide a useful philosophical analysis.
Early versions of defeasibility theories had difficulty allowing for the existence of evidence that is "merely misleading," as in the case where one does know that h3: "Tom Grabit stole a book from the library," thanks to having seen him steal it, yet where, unbeknownst to one, Tom's mother has, out of dementia, testified that Tom was far away from the library at the time of the theft. One's justifiably believing that she gave the testimony would destroy one's justification for believing that h3 if added by itself to one's present evidence.
At least some defeasibility theories cannot deal with the knowledge one has while dying that h4: 'In this life there is no time at which I believe that d', where the proposition that 'd' expresses the details of some matter, e.g., the maximum number of blades of grass ever simultaneously growing on the earth. When it just so happens that it is true that 'd', defeasibility analyses typically treat the addition to one's dying thoughts of a belief that 'd' in such a way as improperly to rule out actual knowledge that h4.
A quite different approach to knowledge, and one able to deal with some Gettier-type cases, involves developing some type of causal theory of propositional knowledge. The interesting thesis that counts as a causal theory of justification (in the meaning of 'causal theory' intended here) is that a belief is justified just in case it was produced by a type of process that is 'globally' reliable, that is, its propensity to produce true beliefs (which can be defined, to a good enough approximation, as the proportion of the beliefs it produces, or would produce were it used as much as opportunity allows, that are true) is sufficiently great. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that 'S' knows that 'p' just in case it is not at all accidental that 'S' is right about its being the case that 'p'. D.M. Armstrong (1973) said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via the laws of nature.
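Put as a formula (a minimal formalization of the proportion just described, not Ramsey's or Armstrong's own notation), a belief-forming process type $\pi$ counts as globally reliable when

$$
r(\pi) \;=\; \frac{\#\{\text{true beliefs } \pi \text{ produces, or would produce}\}}{\#\{\text{beliefs } \pi \text{ produces, or would produce}\}}
$$

is sufficiently high.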
Such theories require that one or another specified relation hold between one's belief that 'h' (or one's acceptance of the proposition that 'h') and the state of affairs 'h*', a relation characterized in terms of some aspect of causation: e.g., 'h*' causes the belief; 'h*' is causally sufficient for the belief; or 'h*' and the belief have a common cause. Such simple versions of a causal theory are able to deal with the original Notgot case, since it involves no such causal relationship, but they cannot explain why there is ignorance in the variants where Notgot shams. Fred Dretske and Berent Enç (1984) have pointed out that sometimes one knows of 'x' that it is φ thanks to recognizing a feature merely correlated with the presence of φness. Without endorsing a causal theory themselves, they suggest that it would need to be elaborated so as to allow that one's belief that 'x' has φ has been caused by a factor whose correlation with the presence of φness has caused in oneself, e.g., by evolutionary adaptation in one's ancestors, the disposition that one manifests in acquiring the belief in response to the correlated factor. Not only does this strain the unity of a causal theory by complicating it, but no causal theory without other shortcomings has been able to cover instances of deductively reasoned knowledge.
Causal theories of propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirement that one's believing (accepting) that 'h' be justified. The same variation occurs regarding reliability theories, which present the knower as reliable concerning the issue of whether or not 'h', in the sense that some of one's cognitive or epistemic states, θ, are such that, given further characteristics of oneself (possibly including relations to factors external to one and of which one may not be aware), it is nomologically necessary (or at least probable) that 'h'. In some versions, the reliability is required to be 'global' insofar as it must concern a nomological (or probabilistic) relationship of states of type θ to the acquisition of true beliefs about a wider range of issues than merely whether or not 'h'. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does know thereby that someone in the office owns a Ford, is the relevant type something broad, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices partly upon the basis of their relevant testimony?)
One important variety of reliability theory is a conclusive reasons account, which includes a requirement that one's reasons for believing that 'h' be such that in one's circumstances, if h* were not to occur, then, e.g., one would not have the reasons one does for believing that 'h', or, e.g., one would not believe that 'h'. Roughly, the latter is demanded by theories that treat a knower as 'tracking the truth', theories that include the further demand that, roughly, if it were the case that 'h', then one would believe that 'h'. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a 'method' has been used to arrive at the belief that 'h', then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
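In the standard formulation (Nozick 1981), with '$\Box\!\!\rightarrow$' for the subjunctive conditional and '$B(h)$' for 'one believes that h', one knows that 'h' just in case:

$$
(1)\; h \qquad (2)\; B(h) \qquad (3)\; \neg h \;\Box\!\!\rightarrow\; \neg B(h) \qquad (4)\; h \;\Box\!\!\rightarrow\; B(h)
$$

where, on Nozick's amendment, the antecedents of (3) and (4) are understood to hold fixed the very method by which the belief is formed.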
But unless more conditions are added to Nozick's analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot's compulsion is not easily changed; (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one; and (c) one arrives at one's belief that 'h' not by reasoning through a false belief but by basing the belief that 'h' upon a true existential generalization of one's evidence.
Nozick's analysis is in addition too strong to permit anyone ever to know that h5: 'Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected one of them.' If I know that h5, then satisfaction of the antecedent of one of Nozick's conditionals would involve its being false that h5, thereby thwarting satisfaction of the consequent's requirement that I not then believe that h5. For the belief that h5 is itself one of my beliefs about beliefs (Shope, 1984).
Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of 'PK' that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often accommodated by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats 'PK' as merely the ability to provide a correct answer to a possible question. However, White may be equating 'producing' knowledge, in the sense of producing 'the correct answer to a possible question', with 'displaying' knowledge, in the sense of manifesting knowledge. The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that 'h' without believing or accepting that 'h' can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horse races. If the example is modified so that the hypothetical 'seer' never picks winners but only muses over whether those horses might win, or only reports that those horses will win, this behaviour would be as much of a candidate for the person's manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.
A different approach relates knowing to being a suitable source of information: Edward Craig's analysis (1990) treats the concept of knowing in terms of a person's being a satisfactory informant in relation to an inquirer who wants to find out whether or not 'h'. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried 'Wolf'). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one's having the power to proceed in a way that represents the relevant state of affairs as causally involved in one's proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.
Turning to knowledge and belief: according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).
The incompatibility thesis is sometimes traced to Plato (429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible ("Republic" 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.
A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say 'I do not believe she is guilty. I know she is' and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying 'I do not just believe she is guilty, I know she is', where 'just' makes it especially clear that the speaker is signalling that she has something more than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: 'You did not hurt him, you killed him.'
H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief are never ones involving complete confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.
A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it might be accompanied by confidence as well. Woozley remarks that the test of whether I know something is 'what I can do, where what I can do may include answering questions'. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, 'I am unsure whether my answer is true; still, I know it is correct.' But Woozley explains this tension using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While 'I know such and such' might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.
Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years prior and yet he is able to give several correct responses to questions such as 'When did the Battle of Hastings occur?' Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066; he would deny being sure (or having the right to be sure) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is 'intentionally misleading'.
Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.
D.M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066; Armstrong will grant Radford that point. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently 'guessed' that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So after all, Radford does not have a counterexample to the claim that knowledge entails belief.
Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, DC. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is in a similar position. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended by belief. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.
Little has so far been said of perception, a fundamental philosophical topic both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) it gives us knowledge of the world around us; (2) we are conscious of that world by being aware of 'sensible qualities': colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment; (3) such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound; (4) there ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like 'sense-data' or 'percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include "scepticism" and "idealism."
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
Further, perceptual knowledge is knowledge acquired by or through the senses and includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something (that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up) by some sensory means. Seeing that the light has turned green is learning something (that the light has turned green) by use of the eyes. Feeling that the melon is overripe is coming to know a fact (that the melon is overripe) by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.
Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspapers, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know, something about the gauge (that it reads a certain way), one cannot come to know, by the gauge, what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot, at least in this way, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, 'b's' being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.
Finally, the Representational Theory of Mind (RTM), which goes back at least to Aristotle, takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have "intentionality": they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an image of George W. Bush with dreadlocks is inaccurate.)
The Representational Theory of Mind defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. The Representational Theory of Mind also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition 'q' from the propositions 'p' and 'if p then q' is (among other things) to have a sequence of thoughts of the form 'p', 'if p then q', 'q'.
Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.
In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.
Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as "folk psychology") are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)
Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)
Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the "intentional stance" toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational-i.e. that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.
Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a "moderate" realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.
(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic "structural" or "syntactic" properties. The semantic properties of a mental state, however, are determined by its extrinsic properties, e.g., its history and its environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.
It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ("what-it's-like") features ("qualia"), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)
Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that non-conceptual representations, percepts ("impressions"), images ("ideas") and the like, are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only non-conceptual representations construed in this way.
Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.
There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994) claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content arises again for conceptual representation; and the eliminativist ambitions of Sellars, Brandom and Rey would meet a new obstacle. (It would also raise prima facie problems for reductionist representationalism.)
The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)
Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal-though not in the same way.)
The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to "see through it" to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If nonetheless they were properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.
In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of "symbol-filled arrays" (see the account of mental images in Tye 1991).
Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences, qualia themselves, that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual "scenario" (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is "correct" (a semantic property) if in the corresponding "scene" (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the "phenomenal concept"-a conceptual/phenomenal hybrid consisting of a phenomenological "sample" (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, "you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties." One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)
Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.
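The rotation findings summarized there admit a compact statement (an illustrative gloss on the results reported in Shepard and Cooper 1982, not the authors' own notation): mean response time increases roughly linearly with the angular disparity $\theta$ between the compared figures,

$$
\mathrm{RT}(\theta) \approx a + b\,\theta,
$$

where the intercept $a$ absorbs encoding and response time and the slope $b$ is naturally read as the time required per degree of 'mental rotation'.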
Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties-i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery, hence the designation 'pictorial'; though of course there may be imagery in other modalities (auditory, olfactory, etc.) as well.)
The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes (Block 1983).) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
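The contrast can be made concrete with a small sketch (illustrative only: the class names and the brightness/loudness example are assumptions of mine, not drawn from Dretske or Goodman). An analog representation carries its content in continuously variable magnitudes; a digital one carries it in properties the representation simply has or lacks:

```python
# A minimal sketch of the analog/digital contrast as glossed above.
# Class names and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AnalogImage:
    """Represents in virtue of continuously variable properties."""
    brightness: float  # any value in [0.0, 1.0]; may be more or less bright
    loudness: float    # likewise continuously variable

@dataclass
class DigitalThought:
    """Represents in virtue of properties it either has or doesn't have."""
    about_elvis: bool  # a thought is or is not about Elvis; no degrees

sunset = AnalogImage(brightness=0.82, loudness=0.0)  # could differ by any amount
belief = DigitalThought(about_elvis=True)            # all-or-nothing
```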
It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal and nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is "quasi-pictorial" when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially-for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are "(labelled) interpreted symbol-filled arrays." The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each "cell" in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.
Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.
The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990, 1994) and Teleological Theories (Fodor 1990, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.
According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.
Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).
(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.
Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone are internalists (or individualists; cf. Putnam 1975, Fodor 1981).
This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both "narrow" content (determined by intrinsic factors) and "wide" or "broad" content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)
Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role (or its phenomenology).
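On the shared functional construal, the picture is this (a schematic gloss, not any one author's notation): if $N_r$ is the narrow content of a representation $r$ and $c$ a context, then the wide content of a token of $r$ in $c$ is $W = N_r(c)$. Molecular twins share $N_r$, yet in distinct contexts $c_1 \neq c_2$ their tokens may differ in wide content, $N_r(c_1) \neq N_r(c_2)$, which is just the Twin-Earth situation.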
Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases narrow content was introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.
The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind thus develops the representational theory of mind by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be the mental relata of commonsense psychological states, some, so-called "subpersonal" or "sub-doxastic" representations, are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.
According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.
Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the "mental models" of Johnson-Laird 1983, the "retinal arrays," "primal sketches" and "2½-D sketches" of Marr 1982, the "frames" of Minsky 1974, the "sub-symbolic" structures of Smolensky 1989, the "quasi-pictures" of Kosslyn 1980, and the "interpreted symbol-filled arrays" of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).
A fundamental disagreement among proponents of the computational theory of mind concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.
The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors ("nodes") and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism - "localist" versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the Connectionist program (Smolensky 1988, 1991, Chalmers 1993).
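The difference between the two architectures can be made concrete with a deliberately tiny sketch, in which a classical process operates on constituent structure while a connectionist process merely spreads activation through weighted connections. The rule, the weights, and the threshold below are all invented for illustration.

```python
# Classical style: a structure-sensitive rule over symbolic constituents.
def conjunction_elimination(representation):
    """If the representation is (AND, P, Q), infer P. Sensitivity to
    constituent structure is what makes the process 'classical'."""
    operator, left, right = representation
    if operator == "AND":
        return left
    return representation

print(conjunction_elimination(("AND", "it-rains", "it-pours")))  # it-rains

# Connectionist style: activation spreading through weighted nodes.
# No node or connection is itself semantically evaluable;
# weights[i][j] is the strength of the connection from node i to node j.
weights = [[0.0, 0.8, -0.3],
           [0.5, 0.0, 0.9],
           [0.2, -0.6, 0.0]]
activation = [1.0, 0.0, 0.0]  # an input pattern of activation

def spread(activation, weights):
    """One step of spreading activation: each node sums its weighted
    inputs and passes them through a simple threshold."""
    nets = [sum(w * a for w, a in zip(col, activation)) for col in zip(*weights)]
    return [1.0 if n > 0.5 else 0.0 for n in nets]

print(spread(activation, weights))  # a new pattern of activation
```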
Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
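Compositionality, productivity, and systematicity can likewise be illustrated in miniature: a finite stock of primitives plus recursive formation rules generates unboundedly many complex representations, whose contents are fixed by the contents of their constituents and their structure. The mini-vocabulary and rules below are invented for the example and claim nothing about the actual language of thought.

```python
# A toy compositional semantics: content is assigned to complex
# representations recursively, from the contents of their parts.
primitives = {"ocelots": {"ocelot1", "ocelot2"},
              "snuff-takers": {"ocelot1", "human7"}}

def content(rep):
    """Compositionally assign a content (a set or a truth value) to a
    representation built by the formation rules."""
    if isinstance(rep, str):                      # primitive symbol
        return primitives[rep]
    op, *args = rep                               # complex representation
    if op == "ALL":                               # (ALL, A, B): every A is a B
        return content(args[0]) <= content(args[1])
    if op == "AND":
        return content(args[0]) and content(args[1])
    raise ValueError(f"unknown formation rule: {op}")

# "Ocelots take snuff" - false in this toy model, since ocelot2 abstains.
print(content(("ALL", "ocelots", "snuff-takers")))  # False
```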
Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in Connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
Moreover, connectionists argue that information processing as it occurs in Connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the Connectionist model it is a matter of evolving distributions of "weight" (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The Connectionist network is "trained up" by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
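A minimal sketch of such "training up", using a bare perceptron-style update rule on an invented discrimination task (none of this is a model any connectionist has actually proposed), shows how repeated exposure alone reshapes the weights, with no hypotheses formulated anywhere.

```python
# Toy learning task: respond 1 only when both input features are present.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def output(x):
    """The network's response: a thresholded weighted sum."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Many exposures to the same objects gradually reshape the weights.
for _ in range(50):
    for x, target in examples:
        error = target - output(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([output(x) for x, _ in examples])  # [0, 0, 0, 1] once trained
```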
Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that Connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively "brittle" or "fragile."
Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if Connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within Connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well.)
Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.
Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
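The flavor of the dynamical alternative can be conveyed by a toy pair of coupled state variables evolving in continuous time (crudely discretized below); the equations are invented and merely stand in for the mutually determining states of nervous system, body and environment.

```python
# A minimal dynamical sketch: cognitive state as a point in a continuous
# state space, evolving under coupled equations rather than as a sequence
# of discrete symbol manipulations.
dt, steps = 0.01, 500
x, y = 1.0, 0.0   # two mutually determining state variables

trajectory = []
for _ in range(steps):
    dx = -0.5 * x + 1.2 * y   # each variable's rate of change depends
    dy = -1.2 * x - 0.5 * y   # continuously on the other's current value
    x, y = x + dt * dx, y + dt * dy
    trajectory.append((x, y))

print(trajectory[-1])  # the total state spirals in toward an attractor
```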
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to representational theory of mind such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
It is also widely held that, in addition to having such properties as reference, truth-conditions and truth (so-called extensional properties), expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
Søren Aabye Kierkegaard (1813-1855) was a Danish religious philosopher whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.
Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this “leap of faith” as the essence of Christianity.
Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olsen, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.
Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846; trans. 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.
Kierkegaard maintained that systematic philosophy not only imposed a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.
In his first major work, Either/Or (2 volumes, 1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is refined hedonism, consisting of a search for pleasure and a cultivation of mood. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later works, such as Stages on Life's Way (1845; trans. 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843; trans. 1941) Kierkegaard focused on God's command that Abraham sacrifice his son Isaac (Genesis 22: 1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This “suspension of the ethical,” as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar “leap of faith” into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844; trans. 1944), which is ultimately a fear of nothingness.
Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly somber view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846; trans. 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and of the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.
Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War I, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.
Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and knew little or nothing about the physical substrates of human consciousness, the business of examining the dynamics and structure of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics’ that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.
More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each was obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.
The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche (1844-1900). After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The dilemma, as Nietzsche saw it, is that the ‘will to truth’, as validated in the doing of science, disguises the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will’.
In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no really necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.
Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, is concerned exclusively with natural phenomena and favors a reductionistic examination of those phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
Nietzsche’s emotionally charged defense of intellectual freedom and radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved enormously influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.
The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism: Jacques Lacan, Roland Barthes, Michel Foucault and the deconstructionist Jacques Derrida. This attribution of a direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of this cultural ambience and the ways in which the resulting conflict might be resolved.
The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, “relativistic” notions.
Jean-Paul Sartre (1905-1980) was a French philosopher, dramatist, novelist, and political journalist, who was a leading exponent of existentialism. Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.
Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States in the so-called cold war years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in literature, explaining that to accept such an award would compromise his integrity as a writer.
Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.
In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.
In his later philosophic work Critique of Dialectical Reason (1960; trans. 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.
In the theory of signs, or semiotics, pragmatics is the part that concerns the relationship between speakers and their signs; the study of the principles governing appropriate conversational moves is generally called pragmatics, and applied pragmatics treats special kinds of linguistic interaction such as interviews and speech-making. Pragmatism, by contrast, is the philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept “brittle,” for example, is given by the observed consequences or properties that objects called “brittle” exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many philosophers known as logical positivists, a group influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life-morality and religious belief, for example-are leaps of faith. As such, they depend upon what he called “the will to believe” and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist-someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has been renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.
In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.
Semantics is one of the branches into which semiotics is usually divided: the study of the meaning of words and their relation to the objects they designate; a semantics is provided for a formal language when an interpretation or model is specified. Semantics (from the Greek semantikos, “significant”) is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the processes of assigning those meanings.
Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behavior that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.
These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.
In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.
One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analyzing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).
An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen - plume - of the color red - rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.
An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition-a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it is extended to-the moon-is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
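An interpreted language of this kind is easy to miniaturize: rules of meaning link signs to designations in a model, and a sentence is true when its truth condition is met there. The sketch below is purely illustrative, and its tiny "model of the world" is invented for the example.

```python
# A toy interpreted language: rules of meaning link signs to
# designations, and truth is checked against a model.
model = {
    "moon": {"shape": "sphere"},   # the world, as this toy model has it
}

def designation(term):
    """Rule of meaning: link a sign to what it designates in the model."""
    return model[term]

def is_true(subject, predicate):
    """'The moon is a sphere' is true iff its truth condition is met:
    the designated object in fact has the designated property."""
    return designation(subject)["shape"] == predicate

print(is_true("moon", "sphere"))  # True, in this toy model
```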
The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs-by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favor of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.
From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs-that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.
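Searle's three-way distinction can be fixed terminologically with a small data-structure sketch; the example utterance and the field names below are invented for illustration.

```python
# A toy rendering of the locutionary/illocutionary/perlocutionary
# distinction as a data structure.
from dataclasses import dataclass

@dataclass
class SpeechAct:
    locution: str        # what is said, with its sense and reference
    illocution: str      # the act performed in saying it
    perlocution: str     # the effect produced on the hearer by saying it

act = SpeechAct(
    locution="The moon is a sphere",
    illocution="stating",
    perlocution="informing the hearer",
)
print(f"{act.illocution}: {act.locution!r} -> {act.perlocution}")
```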
What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic-that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone-independent of a speaker and hearer.
Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analyzing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”
Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role-to identify a person-in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.
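The operator-argument analysis just sketched lends itself to a simple structural rendering; the role labels and the membership of the expression class below are invented for illustration.

```python
# "Bill gives Mary the book" as an operator combining with arguments.
proposition = {
    "operator": "gives",                       # relates the arguments
    "arguments": {"actor": "Bill",
                  "receiver": "Mary",
                  "theme": "the book"},
}

def expression_class(operator):
    """Items substitutable for one another within a sign form a class;
    'gives' patterns with other three-argument operators."""
    three_place = {"gives", "hands", "sends"}  # invented class members
    return "three-place operator" if operator in three_place else "other"

print(expression_class(proposition["operator"]))  # three-place operator
```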
Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
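The "seat" domain just described amounts to a small feature matrix, and componential analysis to reading off shared and distinctive components. The feature values below are the obvious ones, included purely for illustration.

```python
# Componential analysis of the "seat" domain as a feature matrix:
# lexemes share the component "for-sitting" and are distinguished
# by the components "seats" and "back".
seat_domain = {
    "chair":    {"for-sitting": True, "seats": 1, "back": True},
    "sofa":     {"for-sitting": True, "seats": 3, "back": True},
    "loveseat": {"for-sitting": True, "seats": 2, "back": True},
    "bench":    {"for-sitting": True, "seats": 3, "back": False},
}

def distinguishing_features(a, b):
    """Return the components on which two lexemes in the domain differ."""
    return {f for f in seat_domain[a] if seat_domain[a][f] != seat_domain[b][f]}

print(distinguishing_features("sofa", "bench"))     # {'back'}
print(distinguishing_features("chair", "loveseat")) # {'seats'}
```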
Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.
Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colorless green ideas sleep furiously”), although grammatical expressions, are meaningless-semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.
Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colorless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
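The idea that a sentence can pass the syntax while failing the semantics can be sketched as a check on selectional restrictions; the feature inventory below is invented for the example and stands in for the proposed universal set of semantic features.

```python
# Toy semantic blocking: a sentence is grammatical yet meaningless if a
# selectional restriction on its words' semantic features is violated.
noun_features = {"ideas": {"abstract"},
                 "people": {"animate", "concrete"}}
restrictions = {"sleep": "animate",    # the verb's subject must be animate
                "green": "concrete"}   # the adjective's noun must be concrete

def semantically_blocked(adjective, noun, verb):
    """True if any selectional restriction fails for the noun."""
    nf = noun_features[noun]
    return restrictions[adjective] not in nf or restrictions[verb] not in nf

print(semantically_blocked("green", "ideas", "sleep"))   # True: blocked
print(semantically_blocked("green", "people", "sleep"))  # False: fine
```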
In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech-that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.
Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.
Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible-in a syntactically based theory-for surface structure and deep structure jointly to determine the semantic interpretation of an expression.
The focus of general semantics is how people evaluate words and how that evaluation influences their behavior. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigor, and the approach has declined in popularity.
Positivism, system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge. The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte (1798-1857), but some of the positivist concepts may be traced to the British philosopher David Hume, the French philosopher Henri de Saint-Simon, and the German philosopher Immanuel Kant.
Comte chose the word positivism on the ground that it indicated the “reality” and “constructive tendency” that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus control of natural forces. The two primary components of positivism, the philosophy and the polity (or program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. A number of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.
During the early 20th century a group of philosophers concerned with developments in modern science rejected the traditional positivist idea that personal experience is the basis of true knowledge, emphasizing instead the importance of scientific verification. This group came to be known as the logical positivists, and it included the Austrian philosopher Ludwig Wittgenstein and the British philosophers Bertrand Russell and G. E. Moore. It was Wittgenstein's Tractatus Logico-philosophicus (1921; German-English parallel text, 1922) that proved to be of decisive influence in the rejection of metaphysical doctrines for their meaninglessness and the acceptance of empiricism as a matter of logical necessity.
The positivists of today, who have rejected this so-called Vienna school of philosophy, prefer to call themselves logical empiricists in order to dissociate themselves from the emphasis of the earlier thinkers on scientific verification. They maintain that the verification principle itself is philosophically unverifiable.
Bertrand Arthur William Russell (1872-1970) was a British philosopher, mathematician, and Nobel laureate whose emphasis on logical analysis influenced the course of 20th-century philosophy. In the early 20th century Russell, along with the British mathematician and philosopher Alfred North Whitehead, attempted to demonstrate that mathematics and numbers can be understood as groups of concepts, or classes. Russell and Whitehead tried to show that mathematics is closely related to logic and, in turn, that ordinary sentences can be logically analyzed using mathematical symbols for words and phrases. This idea resulted in a new symbolic language, used by Russell in a field he termed philosophical logic, in which philosophical propositions were reformulated and examined according to his symbolic logic.
Born in Trelleck, Wales, on May 18, 1872, Russell was educated at Trinity College, University of Cambridge. After graduation in 1894, he traveled in France, Germany, and the United States and was then made a fellow of Trinity College. From an early age he developed a strong sense of social consciousness; at the same time, he involved himself in the study of logical and mathematical questions, which he had made his special fields and on which he was called to lecture at many institutions throughout the world. He achieved prominence with his first major work, The Principles of Mathematics (1902), in which he attempted to remove mathematics from the realm of abstract philosophical notions and to give it a precise scientific framework.
Russell then collaborated for eight years with the British philosopher and mathematician Alfred North Whitehead to produce the monumental work Principia Mathematica (3 volumes, 1910-1913). This work showed that mathematics can be stated in terms of the concepts of general logic, such as class and membership in a class. It became a masterpiece of rational thought. Russell and Whitehead proved that numbers can be defined as classes of a certain type, and in the process they developed logic concepts and a logic notation that established symbolic logic as an important specialization within the field of philosophy. In his next major work, The Problems of Philosophy (1912), Russell borrowed from the fields of sociology, psychology, physics, and mathematics to refute the tenets of idealism, the dominant philosophical school of the period, which held that all objects and experiences are the product of the intellect. Russell, a realist, believed that objects perceived by the senses have an inherent reality independent of the mind.
Russell condemned both sides in World War I (1914-1918), and for his uncompromising stand he was fined, imprisoned, and deprived of his teaching post at Cambridge. In prison he wrote Introduction to Mathematical Philosophy (1919), combining the two areas of knowledge he regarded as inseparable. After the war he visited the Russian Soviet Federated Socialist Republic, and in his book Practice and Theory of Bolshevism (1920) he expressed his disappointment with the form of socialism practiced there. He felt that the methods used to achieve a Communist system were intolerable and that the results obtained were not worth the price paid.
Russell taught at Beijing University in China during 1921 and 1922. From 1928 to 1932, after he returned to England, he conducted the private, highly progressive Beacon Hill School for young children. From 1938 to 1944 he taught at various educational institutions in the United States. He was barred, however, from teaching at the College of the City of New York (now City College of the City University of New York) by the state supreme court because of his attacks on religion in such works as What I Believe (1925) and his advocacy of sexual freedom, expressed in Marriage and Morals (1929).
Russell returned to England in 1944 and was reinstated as a fellow of Trinity College. Although he abandoned pacifism to support the Allied cause in World War II (1939-1945), he became an ardent and active opponent of nuclear weapons. In 1949 he was awarded the Order of Merit by King George VI. Russell received the 1950 Nobel Prize for Literature and was cited as “the champion of humanity and freedom of thought.” He led a movement in the late 1950s advocating unilateral nuclear disarmament by Britain, and at the age of 89 he was imprisoned after an antinuclear demonstration. He died on February 2, 1970.
In addition to his earlier work, Russell also made a major contribution to the development of logical positivism, a strong philosophical movement of the 1930s and 1940s. The major Austrian philosopher Ludwig Wittgenstein, at one time Russell's student at Cambridge, was strongly influenced by his original concept of logical atomism. In his search for the nature and limits of knowledge, Russell was a leader in the revival of the philosophy of empiricism in the larger field of epistemology. In Our Knowledge of the External World (1926) and Inquiry into Meaning and Truth (1962), he attempted to explain all factual knowledge as constructed out of immediate experiences. Among his other books are The ABC of Relativity (1925), Education and the Social Order (1932), A History of Western Philosophy (1945), The Impact of Science upon Society (1952), My Philosophical Development (1959), War Crimes in Vietnam (1967), and The Autobiography of Bertrand Russell (3 volumes, 1967-1969).
Analytic and linguistic philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focused on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues (the dialectical method, used most famously by his teacher Socrates) has led to difficulties in interpreting some of the finer points of his thought.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science-are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Finally, Wittgenstein deserves particular note for his contribution to the movement known as analytic and linguistic philosophy. Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-Philosophicus (1921; trans. 1922), a work he then believed provided the “final solution” to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (pub. posthumously 1953; trans. 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.
Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”
Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analyzed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analyzed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or “states of affairs.” He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts-the propositions of science-are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.
Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one actually looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.
Evolutionary psychology attempts to ground psychology in evolutionary principles, on which a variety of higher mental functions may be adaptations formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who “free-ride” on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand in hand with neurophysiological evidence about the underlying circuitry in the brain that subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself and by William James, as well as by the sociobiology of E. O. Wilson. The term is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
Another assumption that is frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin’s view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, cooperation appears to exist in complementary relation to competition. Such complementary relationships give rise to emergent self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
According to E. O. Wilson, the “human mind evolved to believe in the gods” and people “need a sacred narrative” to have a sense of higher purpose. Yet it is also clear that the “gods” in his view are merely human constructs and that, therefore, there is no basis for dialogue between the world-view of science and that of religion. “Science for its part,” said Wilson, “will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments. The eventual result of the competition between the two world-views, I believe, will be the secularization of the human epic and of religion itself.”
Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect “reality.” By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing “reality” as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide “comprehensible” guides to living, and in this way Man’s imagination and intellect play vital roles in his survival and evolution.
Since so much of life, both inside and outside the study, is concerned with finding explanations of things, it would be desirable to have a criterion for distinguishing good explanations from bad. Under the influence of “logical positivist” approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the “explanans” (that which does the explaining) and the explanandum (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler’s (1571-1630) laws of planetary motion were explained by their deducibility from Newton’s laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we make of explanations. These may include, for instance, that we have a “feel” for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
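Schematically, the deductive-nomological form of the covering law model presents explanation as an argument in which the explanandum is deduced from laws together with initial conditions. The rendering below is a standard textbook schema, supplied here only as an illustration:

\[
\begin{array}{ll}
L_1, L_2, \ldots, L_m & \text{(laws of nature)} \\
C_1, C_2, \ldots, C_n & \text{(statements of initial conditions)} \\
\hline
\therefore\; E & \text{(explanandum: the event to be explained)}
\end{array}
\]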
Inference to the best explanation is the view that once we can select the best of the available explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since it is sometimes unwise to ignore the antecedent improbability of a hypothesis that would explain the data better than others; e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
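The coin example admits a minimal numerical reading. The sketch below assumes only the binomial model implicit in the example; the prior probability assigned to fairness is a purely hypothetical figure chosen for illustration. The 0.53-bias hypothesis fits the data somewhat better, yet a modest prior in favour of ordinary fair coins can still leave fairness the more probable hypothesis overall:

```python
from math import comb

def binomial_likelihood(p, heads=530, tosses=1000):
    """Probability of exactly `heads` heads in `tosses` tosses, given P(heads) = p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

lik_fair = binomial_likelihood(0.5)    # likelihood under the fair-coin hypothesis
lik_bias = binomial_likelihood(0.53)   # likelihood under the 0.53-bias hypothesis

# The bias hypothesis explains the data better (ratio is about 6)...
print(f"likelihood ratio (bias : fair) = {lik_bias / lik_fair:.2f}")

# ...but with a hypothetical prior of 0.99 that a coin is fair,
# the posterior odds still favour fairness (about 16 : 1).
prior_fair, prior_bias = 0.99, 0.01
posterior_odds = (prior_fair * lik_fair) / (prior_bias * lik_bias)
print(f"posterior odds (fair : bias) = {posterior_odds:.2f}")
```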
The philosophy of language may be considered as the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It likewise mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the basis of the division between syntax and semantics, and problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
On this conception, to understand a sentence is to know its truth-conditions, and the conception has remained central in a distinctive way: those who offer opposing theories characteristically define their positions by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
The meaning of a complex expression is a function of the meanings of its constituents; this is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
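A toy model may make this compositional picture concrete. In the sketch below, everything is an invented stand-in (the miniature vocabulary, the reference and satisfaction tables, and the operator definitions are hypothetical); the point is only that the truth value of a complex sentence is computed from the semantic values of its parts, exactly as the truth-conditional conception describes:

```python
# A toy truth-conditional semantics. The "meaning" assigned to each kind of
# expression is its contribution to the truth-conditions of sentences
# containing it.

# Singular terms: meaning given by stating the reference of the term.
refers_to = {"London": "london", "Paris": "paris"}

# Predicates: meaning given by the condition under which the predicate is
# true of an arbitrary object (modelled here as a set of satisfying objects).
true_of = {"is beautiful": {"paris"}, "is large": {"london", "paris"}}

def atomic(term, predicate):
    """An atomic sentence is true iff the predicate is true of the term's referent."""
    return refers_to[term] in true_of[predicate]

# Sentence-forming operators: meaning given as a function from the truth
# values of the operand sentences to the truth value of the complex sentence.
def not_(p):
    return not p

def and_(p, q):
    return p and q

# Truth values of complex sentences are computed compositionally:
print(atomic("Paris", "is beautiful"))                      # True
print(and_(atomic("London", "is large"),
           not_(atomic("London", "is beautiful"))))         # True
```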
The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom “‘London’ refers to the city in which there was a huge fire in 1666” is a true statement about the reference of “London,” and it is a consequence of a theory which substitutes this axiom for the corresponding axiom of our simple truth theory that “London is beautiful” is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name “London” without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraint on axioms in a way which does not presuppose any previous, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.
Since the content of a claim that the sentence “Paris is beautiful” is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its central claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition “p,” it is true that “p” if and only if p. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning. If the claim that the sentence “Paris is beautiful” is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A. J. Ayer, the later Wittgenstein, Quine, Strawson, and Horwich and, confusingly and inconsistently if this article is correct, Frege himself. But is the minimal theory correct?
The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as “‘London is beautiful’ is true if and only if London is beautiful” can be explained are the facts that “London” refers to London and that “is beautiful” is true of beautiful things. This would be a pseudo-explanation if the fact that “London” refers to London consisted in part in the fact that “London is beautiful” has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name “London” without understanding the predicate “is beautiful.”
The counterfactual conditional, sometimes known as the subjunctive conditional, is a conditional of the form “if p were to happen, q would,” or “if p were to have happened, q would have happened,” where the supposition of p is contrary to the known fact that not-p. Such assertions are nevertheless useful: “if you had broken the bone, the X-ray would have looked different,” or “if the reactor were to fail, this mechanism would click in” are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals (“if the metal were to be heated, it would expand”), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever p is false, so there would be no division between true and false counterfactuals.
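The trouble with material implication can be read directly off its truth table (a standard piece of propositional logic, reproduced here for reference): the conditional is automatically true whenever its antecedent is false, and the antecedent of a counterfactual is false by definition.

\[
\begin{array}{cc|c}
p & q & p \rightarrow q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]

Hence every counterfactual would come out true on this reading, and the distinction insisted on above would collapse.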
Although the subjunctive form often indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: “If you run out of water, you will be in trouble” seems equivalent to “if you were to run out of water, you would be in trouble.” In other contexts there is a big difference: “If Oswald did not kill Kennedy, someone else did” is clearly true, whereas “if Oswald had not killed Kennedy, someone else would have” is most probably false.
The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether q is true in the “most similar” possible worlds to ours in which p is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of the sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing recognition that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not may be of limited use.
A conditional is any proposition of the form “if p then q.” The condition hypothesized, p, is called the antecedent of the conditional, and q the consequent. Various kinds of conditional have been distinguished. The weakest is material implication, which merely tells us that either not-p or q. Stronger conditionals include elements of modality, corresponding to the thought that if p is true then q must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility is semantic, yielding different kinds of conditionals with different meanings, or pragmatic, in which case there should be one basic meaning, with surface differences arising from other implicatures.
We now turn to pragmatism, a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that a belief, including, for example, belief in God, is true provided it works satisfactorily in the widest sense of the word. On James’s view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparent subjectivist consequences of this were wildly assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F. C. S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an “automatic sweetheart” or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.
Modern pragmatists such as the American philosopher and critic Richard Rorty (1931- ) and, in some of his writings, the philosopher Hilary Putnam (1926- ) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant’s doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.
Functionalism, in the philosophy of mind, is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by the causal relations they bear to environmental stimuli, to other mental states, and to behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It could be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or “realization” of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both of ourselves and of others, namely via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited; according to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too chauvinistic, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal structure differs from our own. It may then seem as though beliefs and desires can be “variably realized” in different causal architectures, just as much as they can be in different neurophysiological states.
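The software/hardware comparison can be made concrete with a toy sketch. In the hypothetical example below (the classes and their names are invented purely for illustration), two devices with entirely different internal “hardware” realize the same pattern of stimulus and response, which is the functionalist’s criterion for sharing a state type and the sense in which states are “variably realized”:

```python
# Two "realizations" of the same functional profile. Each device maps the
# input "ring" to the output "answer" via an internal state, but the
# underlying representation differs entirely. On a functionalist view, what
# matters for having the state is the pattern of causes and effects, not
# the medium that realizes it.

class RelayPhone:
    """Realizes the profile with a simple boolean flag."""
    def __init__(self):
        self.ringing = False
    def stimulate(self, signal):
        if signal == "ring":
            self.ringing = True                        # cause: environmental input
    def behave(self):
        return "answer" if self.ringing else "wait"    # effect: behaviour

class CounterPhone:
    """Realizes the same profile with a numeric counter instead of a flag."""
    def __init__(self):
        self.count = 0
    def stimulate(self, signal):
        if signal == "ring":
            self.count += 1
    def behave(self):
        return "answer" if self.count > 0 else "wait"

# Identical functional (input/output) profile, different realizations:
for device in (RelayPhone(), CounterPhone()):
    device.stimulate("ring")
    print(device.behave())    # both print "answer"
```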
The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
The Association for International Conciliation first published William James’s pacifist statement, “The Moral Equivalent of War,” in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism-a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning-in particular, the meaning of concepts used in science. The meaning of the concept “brittle,” for example, is given by the observed consequences or properties that objects called “brittle” exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life-morality and religious belief, for example-are leaps of faith. As such, they depend upon what he called “the will to believe” and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist-someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists-Peirce, James, and Dewey-has renewed as an alternative to Rorty’s interpretation of the tradition.
The philosophy of mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.
The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.
In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of an infinite number of distinct substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.
Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysic is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism in respect to the soul and the reality of God.
The 18th-century German philosopher Immanuel Kant published his influential work The Critique of Pure Reason in 1781. Three years later, he expanded on his study of the modes of thinking with an essay entitled “What is Enlightenment?” In this 1784 essay, Kant challenged readers to “dare to know,” arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.
Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy called transcendentalism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.
These principles are held to be necessary and universal in their application to experience, for in Kant’s view the mind furnishes the archetypal forms and categories (space, time, causality, substance, and relation) to its sensations, and these categories are logically anterior to experience, although manifested only in experience. Their logical anteriority to experience makes these categories or structural principles transcendental; they transcend all experience, both actual and possible. Although these principles determine all experience, they do not in any way affect the nature of things in themselves. The knowledge of which these principles are the necessary conditions must not be considered, therefore, as constituting a revelation of things as they are in themselves. This knowledge concerns things only insofar as they appear to human perception or as they can be apprehended by the senses. The argument by which Kant sought to fix the limits of human knowledge within the framework of experience and to demonstrate the inability of the human mind to penetrate beyond experience strictly by knowledge to the realm of ultimate reality constitutes the critical feature of his philosophy, giving the key word to the titles of his three leading treatises, Critique of Pure Reason, Critique of Practical Reason, and Critique of Judgment. In the system propounded in these works, Kant sought also to reconcile science and religion in a world of two levels, comprising noumena, objects conceived by reason although not perceived by the senses, and phenomena, things as they appear to the senses and are accessible to material study. He maintained that, because God, freedom, and human immortality are noumenal realities, these concepts are understood through moral faith rather than through scientific knowledge. With the continuous development of science, the expansion of metaphysics to include scientific knowledge and methods became one of the major objectives of metaphysicians.
Some of Kant’s most distinguished followers, notably Johann Gottlieb Fichte, Friedrich Schelling, Georg Wilhelm Friedrich Hegel, and Friedrich Schleiermacher, negated Kant’s criticism in their elaborations of his transcendental metaphysics by denying the Kantian conception of the thing-in-itself. They thus developed an absolute idealism in opposition to Kant’s critical transcendentalism.
Since the formation of the hypothesis of absolute idealism, the development of metaphysics has resulted in as many types of metaphysical theory as existed in pre-Kantian philosophy, despite Kant’s contention that he had fixed definitely the limits of philosophical speculation. Notable among these later metaphysical theories is radical empiricism, or pragmatism, a native American form of metaphysics expounded by Charles Sanders Peirce, developed by William James, and adapted as instrumentalism by John Dewey; voluntarism, the foremost exponents of which are the German philosopher Arthur Schopenhauer and the American philosopher Josiah Royce; phenomenalism, as it is exemplified in the writings of the French philosopher Auguste Comte and the British philosopher Herbert Spencer; emergent evolution, or creative evolution, originated by the French philosopher Henri Bergson; and the philosophy of the organism, elaborated by the British mathematician and philosopher Alfred North Whitehead. The salient doctrines of pragmatism are that the chief function of thought is to guide action, that the meaning of concepts is to be sought in their practical applications, and that truth should be tested by the practical effects of belief; according to instrumentalism, ideas are instruments of action, and their truth is determined by their role in human experience. In the theory of voluntarism the ‘will’ is postulated as the supreme manifestation of reality. The exponents of phenomenalism, who are sometimes called positivists, contend that everything can be analyzed in terms of actual or possible occurrences, or phenomena, and that anything that cannot be analyzed in this manner cannot be understood. In emergent or creative evolution, the evolutionary process is characterized as spontaneous and unpredictable rather than mechanistically determined. The philosophy of the organism combines an evolutionary stress on constant process with a metaphysical theory of God, the eternal objects, and creativity.
In the 20th century the validity of metaphysical thinking has been disputed by the logical positivists and by the so-called dialectical materialism of the Marxists. The basic principle maintained by the logical positivists is the verifiability theory of meaning. According to this theory a sentence has factual meaning only if it meets the test of observation. Logical positivists argue that metaphysical expressions such as “Nothing exists except material particles” and “Everything is part of one all-encompassing spirit” cannot be tested empirically. Therefore, according to the verifiability theory of meaning, these expressions have no factual cognitive meaning, although they can have an emotive meaning relevant to human hopes and feelings.
The dialectical materialists assert that the mind is conditioned by and reflects material reality. Therefore, speculations that conceive of constructs of the mind as having any other than material reality are themselves unreal and can result only in delusion. To these assertions metaphysicians reply by denying the adequacy of the verifiability theory of meaning and of material perception as the standard of reality. Both logical positivism and dialectical materialism, they argue, conceal metaphysical assumptions, for example, that everything is observable or at least connected with something observable and that the mind has no distinctive life of its own. In the philosophical movement known as existentialism, thinkers have contended that the questions of the nature of being and of the individual’s relationships to it are extremely important and meaningful in terms of human life. The investigation of these questions is therefore considered valid whether or not its results can be verified objectively.
Since the 1950s the problems of systematic analytical metaphysics have been studied in Britain by Stuart Newton Hampshire and Peter Frederick Strawson, the former concerned, in the manner of Spinoza, with the relationship between thought and action, and the latter, in the manner of Kant, with describing the major categories of experience as they are embedded in language. In the U.S. metaphysics has been pursued much in the spirit of positivism by Wilfrid Stalker Sellars and Willard Van Orman Quine. Sellars has sought to express metaphysical questions in linguistic terms, and Quine has attempted to determine whether the structure of language commits the philosopher to asserting the existence of any entities whatever and, if so, what kind. In these new formulations the issues of metaphysics and ontology remain vital.
In the 17th century, French philosopher René Descartes proposed that only two substances ultimately exist: mind and body. Yet, if the two are entirely distinct, as Descartes believed, how can one substance interact with the other? How, for example, is the intention of a human mind able to cause movement in the person’s limbs? The issue of the interaction between mind and body is known in philosophy as the mind-body problem.
Many fields other than philosophy share an interest in the nature of mind. In religion, the nature of mind is connected with various conceptions of the soul and the possibility of life after death. In many abstract theories of mind there is considerable overlap between philosophy and the science of psychology. Once part of philosophy, psychology split off and formed a separate branch of knowledge in the 19th century. While psychology uses scientific experiments to study mental states and events, philosophy uses reasoned arguments and thought experiments in seeking to understand the concepts that underlie mental phenomena. Also influenced by the philosophy of mind is the field of artificial intelligence (AI), which endeavors to develop computers that can mimic what the human mind can do. Cognitive science attempts to integrate the understanding of mind provided by philosophy, psychology, AI, and other disciplines. Finally, all of these fields benefit from the detailed understanding of the brain that has emerged through neuroscience in the late 20th century.
Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.
Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds-our sensations, thoughts, memories, desires, and fantasies-in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.
Certain mental phenomena, those we generally call experiences, have a subjective nature-that is, they have certain characteristics we become aware of when we reflect. For instance, there is “something it is like” to feel pain, or have an itch, or see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.
Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other thoughts or objects, which are represented as having certain properties or as being related to one another in a certain way. The belief that California is west of Nevada, for example, is about California and Nevada and represents the former as being west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.
A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.
Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.
Mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Minds are sometimes referred to as consciousness, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.
Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of French philosopher René Descartes in the 17th century. Descartes’s work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things-bodies and minds-are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person's limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by external sources such as light, pressure, or sound, which in turn affect the brain, thereby affecting mental states. Thus, the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, and is known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed together.
In response to the mind-body problem arising from Descartes's theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.
Most philosophers of mind today are substance monists of a third type: They are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.
Materialists who are property monists believe that there is ultimately only one type of property, although they disagree on whether or not mental properties exist in material form. Some property monists, known as reductive materialists, hold that mental properties exist simply as a subset of relatively complex and non-basic physical properties of the brain. Reductive materialists have the problem of explaining how the physical states of the brain can be inwardly accessible and have a subjective character, as mental states do. Other property monists, known as eliminative materialists, consider the whole category of mental properties to be a mistake. According to them, mental properties should be treated as discredited postulates of an outmoded theory. Eliminative materialism is difficult for most people to accept, since we seem to have direct knowledge of our own mental phenomena by introspection and because we use the general principles we understand about mental phenomena to predict and explain the behavior of others.
Philosophy of mind concerns itself with a number of specialized problems. In addition to the mind-body problem, important issues include those of personal identity, immortality, and artificial intelligence.
During much of Western history, the mind has been identified with the soul as presented in Christian theology. According to Christianity, the soul is the source of a person's identity and is usually regarded as immaterial; thus, it is capable of enduring after the death of the body. Descartes's conception of the mind as a separate, nonmaterial substance fits well with this understanding of the soul. In Descartes's view, we are aware of our bodies only as the cause of sensations and other mental phenomena. Consequently, our personal essence is composed more fundamentally of mind, and the preservation of the mind after death would constitute our continued existence.
The mind conceived by materialist forms of substance monism does not fit as neatly with this traditional concept of the soul. With materialism, once a physical body is destroyed, nothing enduring remains. Some philosophers think that a concept of personal identity can be constructed that permits the possibility of life after death without appealing to separate immaterial substances. Following in the tradition of 17th-century British philosopher John Locke, these philosophers propose that a person consists of a stream of mental events linked by memory. It is these links of memory, rather than a single underlying substance, that provide the unity of a single consciousness through time. Immortality is conceivable if we think of these memory links as connecting a later consciousness in heaven with an earlier one on earth.
The field of artificial intelligence also raises interesting questions for the philosophy of mind. People have designed machines that mimic or model many aspects of human intelligence, and there are robots currently in use whose behavior is described in terms of goals, beliefs, and perceptions. Such machines are capable of behavior that, were it exhibited by a human being, would surely be taken to be free and creative. As an example, in 1996 an IBM computer named Deep Blue won a chess game against Russian world champion Garry Kasparov under international match regulations. Moreover, it is possible to design robots that have some sort of privileged access to their internal states. Philosophers disagree over whether such robots truly think or simply appear to think, and whether such robots should be considered to be conscious.
Dualism, in philosophy, the theory that the universe is explicable only as a whole composed of two distinct and mutually irreducible elements. In Platonic philosophy the ultimate dualism is between “being” and “nonbeing”-that is, between ideas and matter. In the 17th century, dualism took the form of belief in two fundamental substances: mind and matter. French philosopher René Descartes, whose interpretation of the universe exemplifies this belief, was the first to emphasize the irreconcilable difference between thinking substance (mind) and extended substance (matter). The difficulty created by this view was to explain how mind and matter interact, as they apparently do in human experience. This perplexity caused some Cartesians to deny entirely any interaction between the two. They asserted that mind and matter are inherently incapable of affecting each other, and that any reciprocal action between the two is caused by God, who, on the occasion of a change in one, produces a corresponding change in the other. Other followers of Descartes abandoned dualism in favor of monism.
In the 20th century, reaction against the monistic aspects of the philosophy of idealism has to some degree revived dualism. One of the most interesting defenses of dualism is that of Anglo-American psychologist William McDougall, who divided the universe into spirit and matter and maintained that good evidence, both psychological and biological, indicates the spiritual basis of physiological processes. French philosopher Henri Bergson in his great philosophic work Matter and Memory likewise took a dualistic position, defining matter as what we perceive with our senses and possessing in itself the qualities that we perceive in it, such as color and resistance. Mind, on the other hand, reveals itself as memory, the faculty of storing up the past and utilizing it for modifying our present actions, which otherwise would be merely mechanical. In his later writings, however, Bergson abandoned dualism and came to regard matter as an arrested manifestation of the same vital impulse that composes life and mind.
For many people, understanding the place of mind in nature is the greatest philosophical problem. Mind is often thought to be the last domain that stubbornly resists scientific understanding, and philosophers differ over whether they find that a cause for celebration or scandal. The mind-body problem in the modern era was given its definitive shape by Descartes, although the dualism that he espoused is found in some form wherever there is a religious or philosophical tradition whereby the soul may have an existence apart from the body. While most modern philosophers of mind would reject the imaginings that lead us to think that this makes sense, there is no consensus over the best way to integrate our understanding of people as bearers of physical properties on the one hand and as subjects of mental lives on the other.
According to the occasionalists, the action of the mind is not, and cannot be, the cause of the corresponding action of the body. Whenever any action of the mind takes place, God directly produces in connection with that action, and by reason of it, a corresponding action of the body; the converse process is likewise true. This theory did not solve the problem, for if the mind cannot act on the body (matter), then God, conceived as mind, cannot act on matter. Conversely, if God is conceived as other than mind, then he cannot act on mind. A proposed solution to this problem was furnished by exponents of radical empiricism such as the American philosopher and psychologist William James. This theory disposed of the dualism of the occasionalists by denying the fundamental difference between mind and matter.
Visual illusions occur when perceptual experience of a stimulus differs substantially from the actual stimulus being viewed. Consider the pair of illusions in the accompanying illustration, “Illusions of Length.” These are called geometrical illusions because they use simple geometrical relationships to produce their illusory effects. The first, the Müller-Lyer illusion, is one of the most famous illusions in psychology. Which of the two horizontal lines is longer? Although your visual system tells you that the lines are not equal, a ruler would tell you that they are. The second illusion is the Ponzo illusion. Once again, the two lines do not appear equal in length, but they are.
No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.
Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.
The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of a tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.
The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focused on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.
By the 1920s, however, a remarkable revolution had occurred in psychology that was to essentially remove considerations of consciousness from psychological research for some 50 years: Behaviorism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behavior, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.
Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Interest in sleep and dream research was directly fueled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness.
During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focused attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.
Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state in relation to individual suggestibility and personality traits; the subject has now largely been demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focused attention.
Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide (LSD), mescaline, and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.
As the concept of a direct, simple linkage between environment and behavior became unsatisfactory in recent decades, the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behavior has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored. An entirely new area called cognitive psychology has emerged that centers on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behavior, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasized in favor of unconscious needs and motivations. Trends can be seen, however, toward a new emphasis on the nature of states of consciousness.
Perception (psychology), the process by which organisms interpret and organize sensation to produce a meaningful experience of the world. Sensation usually refers to the immediate, relatively unprocessed result of stimulation of sensory receptors in the eyes, ears, nose, tongue, or skin. Perception, on the other hand, better describes one's ultimate experience of the world and typically involves further processing of sensory input. In practice, sensation and perception are virtually impossible to separate, because they are part of one continuous process.
Our sense organs translate physical energy from the environment into electrical impulses processed by the brain. For example, light, in the form of electromagnetic radiation, causes receptor cells in our eyes to activate and send signals to the brain. But we do not understand these signals as pure energy. The process of perception allows us to interpret them as objects, events, people, and situations.
Without the ability to organize and interpret sensations, life would seem like a meaningless jumble of colors, shapes, and sounds. A person without any perceptual ability would not be able to recognize faces, understand language, or avoid threats. Such a person would not survive for long. In fact, many species of animals have evolved exquisite sensory and perceptual systems that aid their survival.
Organizing raw sensory stimuli into meaningful experiences involves cognition, a set of mental activities that includes thinking, knowing, and remembering. Knowledge and experience are extremely important for perception, because they help us make sense of the input to our sensory systems. To understand these ideas, try to read the following passage:
You could probably read the text, but not as easily as when you read letters in their usual orientation. Knowledge and experience allowed you to understand the text. You could read the words because of your knowledge of letter shapes, and maybe you even have some prior experience in reading text upside down. Without knowledge of letter shapes, you would perceive the text as meaningless shapes, just as people who do not know Chinese or Japanese see the characters of those languages as meaningless shapes. Reading, then, is a form of visual perception.
Note that as you read the passage above, you did not stop to read every single letter carefully. Instead, you probably perceived whole words and phrases. You may have also used context to help you figure out what some of the words must be. For example, recognizing the word “upside” may have helped you predict the word “down,” because the two words often occur together. For these reasons, you probably overlooked problems with the individual letters: some of them, such as the “n” in “down,” are mirror images of normal letters. You would have noticed these errors immediately if the letters were right side up, because you have much more experience seeing letters in that orientation.
How people perceive a well-organized pattern or whole, instead of many separate parts, is a topic of interest in Gestalt psychology. According to Gestalt psychologists, the whole is different from the sum of its parts. Gestalt is a German word meaning configuration or pattern.
The three founders of Gestalt psychology were German researchers Max Wertheimer, Kurt Koffka, and Wolfgang Köhler. These men identified a number of principles by which people organize isolated parts of a visual stimulus into groups or whole objects. There are five main laws of grouping: proximity, similarity, continuity, closure, and common fate. A sixth law, that of simplicity, encompasses all of these laws.
Although most often applied to visual perception, the Gestalt laws also apply to perception in other senses. When we listen to music, for example, we do not hear a series of disconnected or random tones. We interpret the music as a whole, relating the sounds to each other based on how similar they are in pitch, how close together they are in time, and other factors. We can perceive melodies, patterns, and form in music. When a song is transposed to another key, we still recognize it, even though all of the notes have changed.
The law of proximity states that the closer objects are to one another, the more likely we are to mentally group them together. In the illustration below, we perceive as groups the boxes that are closest to one another. Note that we do not see the second and third boxes from the left as a pair, because they are spaced farther apart.
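To make the grouping rule concrete, here is a minimal sketch in Python (our illustration, not anything drawn from Gestalt theory itself): it treats the law of proximity as one-dimensional clustering, collecting positions into the same group whenever the gap to the previous position falls under a threshold. The function name, the positions, and the threshold are invented for the example.

```python
def group_by_proximity(xs, gap):
    """Group sorted 1-D positions into clusters, splitting wherever the
    spacing between neighbors exceeds a threshold: a toy law of proximity."""
    groups, current = [], [xs[0]]
    for prev, x in zip(xs, xs[1:]):
        if x - prev <= gap:
            current.append(x)
        else:
            groups.append(current)
            current = [x]
    groups.append(current)
    return groups

# Six boxes read as three pairs: within-pair spacing (1) is much
# smaller than between-pair spacing (4).
print(group_by_proximity([0, 1, 5, 6, 10, 11], gap=2))
# [[0, 1], [5, 6], [10, 11]]
```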
The law of similarity leads us to link together parts of the visual field that are similar in color, lightness, texture, shape, or any other quality. That is why, in the following illustration, we perceive rows of objects instead of columns or other arrangements.
The law of continuity leads us to see a line as continuing in a particular direction, rather than making an abrupt turn. In the drawing on the left below, we see a straight line with a curved line running through it. Notice that we do not see the drawing as consisting of the two pieces in the drawing on the right.
According to the law of closure, we prefer complete forms to incomplete forms. Thus, in the drawing below, we mentally close the gaps and perceive a picture of a duck. This tendency allows us to perceive whole objects from incomplete and imperfect forms.
The law of common fate leads us to group together objects that move in the same direction. In the following illustration, imagine that three of the balls are moving in one direction, and two of the balls are moving in the opposite direction. If you saw these in actual motion, you would mentally group the balls that moved in the same direction. Because of this principle, we often see flocks of birds or schools of fish as one unit.
Central to the approach of Gestalt psychologists is the law of prägnanz, or simplicity. This general notion, which encompasses all other Gestalt laws, states that people intuitively prefer the simplest, most stable of possible organizations. For example, look at the illustration below. You could perceive this in a variety of ways: as three overlapping disks; as one whole disk and two partial disks with slices cut out of their right sides; or even as a top view of three-dimensional, cylindrical objects. The law of simplicity states that you will see the illustration as three overlapping disks, because that is the simplest interpretation.
Not only does perception involve organization and grouping, it also involves distinguishing an object from its surroundings. Notice that once you perceive an object, the area around that object becomes the background. For example, when you look at your computer monitor, the wall behind it becomes the background. The object, or figure, is closer to you, and the background, or ground, is farther away.
Gestalt psychologists have devised ambiguous figure-ground relationships-that is, drawings in which the figure and ground can be reversed-to illustrate their point that the whole is different from the sum of its parts. Consider the accompanying illustration entitled “Figure and Ground.” You may see a white vase as the figure, in which case you will see it displayed on a dark ground. However, you may also see two dark faces that point toward one another. Notice that when you do so, the white area of the figure becomes the ground. Even though your perception may alternate between these two possible interpretations, the parts of the illustration are constant. Thus, the illustration supports the Gestalt position that the whole is not determined solely by its parts. The Dutch artist M. C. Escher was intrigued by ambiguous figure-ground relationships.
Although such illustrations may fool our visual systems, people are rarely confused about what they see. In the real world, vases do not change into faces as we look at them. Instead, our perceptions are remarkably stable. Considering that we all experience rapidly changing visual input, the stability of our perceptions is more amazing than the occasional tricks that fool our perceptual systems. How we perceive a stable world is due, in part, to a number of factors that maintain perceptual constancy.
As we view an object, the image it projects on the retinas of our eyes changes with our viewing distance and angle, the level of ambient light, the orientation of the object, and other factors. Perceptual constancy allows us to perceive an object as roughly the same in spite of changes in the retinal image. Psychologists have identified a number of perceptual constancies, including lightness constancy, color constancy, shape constancy, and size constancy.
Lightness constancy means that our perception of an object's lightness or darkness remains constant despite changes in illumination. To understand lightness constancy, try the following demonstration. First, take a plain white sheet of paper into a brightly lit room and note that the paper appears to be white. Then, turn out a few of the lights in the room. Note that the paper continues to appear white. Next, if it will not make the room pitch black, turn out some more lights. Note that the paper appears to be white regardless of the actual amount of light energy that enters the eye.
Lightness constancy illustrates an important perceptual principle: Perception is relative. Lightness constancy may occur because the white piece of paper reflects more light than any of the other objects in the room-regardless of the different lighting conditions. That is, you may have determined the lightness or darkness of the paper relative to the other objects in the room. Another explanation, proposed by 19th-century German physiologist Hermann von Helmholtz, is that we unconsciously take the lighting of the room into consideration when judging the lightness of objects.
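The relative-judgment account lends itself to simple arithmetic. The following Python sketch assumes a toy model in which the light reaching the eye is illumination multiplied by surface reflectance; the reflectance numbers are invented for illustration. Dimming the room cuts the absolute light from the paper tenfold, but the paper-to-surround ratio is unchanged, which is all a relative judgment needs.

```python
# Toy model: luminance reaching the eye = illumination * surface reflectance.
# Reflectance values are invented for illustration.
paper_reflectance = 0.90
desk_reflectance = 0.30

for illumination in (1000.0, 100.0):  # brightly lit room, then dimmed (arbitrary units)
    paper_luminance = illumination * paper_reflectance
    desk_luminance = illumination * desk_reflectance
    # Absolute luminances fall together, but the ratio stays 3.0,
    # so the paper still looks "white" relative to its surround.
    print(paper_luminance, desk_luminance, paper_luminance / desk_luminance)
```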
Color constancy is closely related to lightness constancy. Color constancy means that we perceive the color of an object as the same despite changes in lighting conditions. You have experienced color constancy if you have ever worn a pair of sunglasses with colored lenses. In spite of the fact that the colored lenses change the color of light reaching your retina, you still perceive white objects as white and red objects as red. The explanations for color constancy parallel those for lightness constancy. One proposed explanation is that because the lenses tint everything with the same color, we unconsciously “subtract” that color from the scene, leaving the original colors.
Another perceptual constancy is shape constancy, which means that you perceive objects as retaining the same shape despite changes in their orientation. To understand shape constancy, hold a book in front of your face so that you are looking directly at the cover. The rectangular nature of the book should be very clear. Now, rotate the book away from you so that the bottom edge of the cover is much closer to you than the top edge. The image of the book on your retina will now be quite different. In fact, the image will now be trapezoidal, with the bottom edge of the book larger on your retina than the top edge. (Try to see the trapezoid by closing one eye and imagining the cover as a two-dimensional shape.) In spite of this trapezoidal retinal image, you will continue to see the book as rectangular. In large measure, shape constancy occurs because your visual system takes depth into consideration.
Depth perception also plays a major role in size constancy, the tendency to perceive objects as staying the same size despite changes in our distance from them. When an object is near to us, its image on the retina is large. When that same object is far away, its image on the retina is small. In spite of the changes in the size of the retinal image, we perceive the object as the same size. For example, when you see a person at a great distance from you, you do not perceive that person as very small. Instead, you think that the person is of normal size and far away. Similarly, when we view a skyscraper from far away, its image on our retina is very small, yet we perceive the building as very large.
Psychologists have proposed several explanations for the phenomenon of size constancy. First, people learn the general size of objects through experience and use this knowledge to help judge size. For example, we know that insects are smaller than people and that people are smaller than elephants. In addition, people take distance into consideration when judging the size of an object. Thus, if two objects have the same retinal image size, the object that seems farther away will be judged as larger. Even infants seem to possess size constancy.
Another explanation for size constancy involves the relative sizes of objects. According to this explanation, we see objects as the same size at different distances because they stay the same size relative to surrounding objects. For example, as we drive toward a stop sign, the retinal image sizes of the stop sign relative to a nearby tree remain constant-both images grow larger at the same rate.
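The size-distance reasoning can be made concrete with the standard pinhole approximation of visual angle. The Python sketch below is illustrative only; the function and the numbers are our assumptions, not results from the perception literature. Scaling the angular size of the retinal image back up by distance recovers the same physical size whether the object is near or far.

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object, under a pinhole approximation."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 1.8 m person subtends ~48.5 degrees at 2 m but only ~5.2 degrees at 20 m.
near_angle = visual_angle_deg(1.8, 2.0)
far_angle = visual_angle_deg(1.8, 20.0)

# Size-distance invariance: scale the angular size back up by perceived
# distance and the same 1.8 m physical size is recovered in both cases.
recovered_near = 2 * 2.0 * math.tan(math.radians(near_angle) / 2)
recovered_far = 2 * 20.0 * math.tan(math.radians(far_angle) / 2)
print(round(recovered_near, 2), round(recovered_far, 2))  # 1.8 1.8
```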
Depth perception is the ability to see the world in three dimensions and to perceive distance. Although this ability may seem simple, depth perception is remarkable when you consider that the images projected on each retina are two-dimensional. From these flat images, we construct a vivid three-dimensional world. To perceive depth, we depend on two main sources of information: binocular disparity, a depth cue that requires both eyes; and monocular cues, which allow us to perceive depth with just one eye.
An autostereogram is a remarkable kind of two-dimensional image that appears three-dimensional (3-D) when viewed in the right way. To see the 3-D image, try to focus your eyes on a point in space behind the picture, keeping your gaze steady. An image of a person playing a piano will appear.
Because our eyes are spaced about 7 cm (about 3 in) apart, the left and right retinas receive slightly different images. This difference in the left and right images is called binocular disparity. The brain integrates these two images into a single three-dimensional image, allowing us to perceive depth and distance.
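The geometry behind this can be sketched with the standard stereo formula from computer vision, depth = baseline × focal length / disparity, under a pinhole-camera assumption. The focal length and disparity values in this Python sketch are invented for the example; the point is simply that larger disparities correspond to nearer objects.

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Pinhole stereo geometry: depth = baseline * focal length / disparity."""
    return baseline_m * focal_px / disparity_px

# Using the ~0.07 m separation of human eyes and an invented 1000 px focal length:
for disparity in (40.0, 10.0, 2.0):
    print(disparity, "px ->", depth_from_disparity(0.07, 1000.0, disparity), "m")
# 40 px -> 1.75 m, 10 px -> 7.0 m, 2 px -> 35.0 m: bigger disparity, nearer object.
```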
When we look out over vast distances, faraway points look hazy or blurry. This effect is known as atmospheric perspective, and it helps us to judge distances. In this picture, the ridges that are farther away appear hazier and less detailed than the closer ridges.
The air contains microscopic particles of dust and moisture that make distant objects look hazy or blurry. This effect is called atmospheric perspective or aerial perspective, and we use it to judge distance. Song lyrics that describe distant mountains as purple allude to this effect, which makes faraway peaks appear bluish or purple. When you are standing on a mountain, you see brown earth, gray rocks, and green trees and grass, but little that is purple. When you are looking at a mountain from a distance, however, atmospheric particles scatter the light so that the rays that reach your eyes lie in the blue or purple part of the color spectrum. This same effect makes the sky appear blue.
An influential American psychologist, James J. Gibson, was among the first people to recognize the importance of a texture gradient in perceiving depth. A texture gradient arises whenever we view a surface from a slant, rather than directly from above. Most surfaces-such as the ground, a road, or a field of flowers-have a texture. The texture becomes denser and less detailed as the surface recedes into the background, and this information helps us to judge depth. For example, look at the floor or ground around you. Notice that the apparent texture of the floor changes over distance. The texture of the floor near you appears more detailed than the texture of the floor farther away. When objects are placed at different locations along a texture gradient, judging their distance from you becomes fairly easy.
Linear perspective means that parallel lines, such as the white lines of this road, appear to converge with greater distance and reach a vanishing point at the horizon. We use our knowledge of linear perspective to help us judge distances.
Artists have learned to make great use of linear perspective in representing a three-dimensional world on a two-dimensional canvas. Linear perspective refers to the fact that parallel lines, such as railroad tracks, appear to converge with distance, eventually reaching a vanishing point at the horizon. The more the lines converge, the farther away they appear.
When estimating an object's distance from us, we take into account the size of its image relative to other objects. This depth cue is known as relative size. In this photograph, because we assume that the airplanes are the same size, we judge the airplanes that take up less of the image as being farther away from the camera.
Another visual cue to apparent depth is closely related to size constancy. According to size constancy, even though the size of the retinal image may change as an object moves closer to us or farther from us, we perceive that object as staying about the same size. We are able to do so because we take distance into consideration. Thus, if we assume that two objects are the same size, we perceive the object that casts a smaller retinal image as farther away than the object that casts a larger retinal image. This depth cue is known as relative size, because we consider the size of an object's retinal image relative to other objects when estimating its distance.
Another depth cue involves the familiar size of objects. Through experience, we become familiar with the standard size of certain objects, such as houses, cars, airplanes, people, animals, books, and chairs. Knowing the size of these objects helps us judge our distance from them and from objects around them.
When judging an object's distance, we consider its height in our visual field relative to other objects. The closer an object is to the horizon in our visual field, the farther away we perceive it to be. For example, the wildebeest that are higher in this photograph appear farther away than those that are lower.
We perceive points nearer to the horizon as more distant than points that are farther away from the horizon. This means that below the horizon, objects higher in the visual field appear farther away than those that are lower. Above the horizon, objects lower in the visual field appear farther away than those that are higher. For example, in the accompanying picture entitled “Relative Height,” the animals higher in the photo appear farther away than the animals lower in the photo. But above the horizon, the clouds lower in the photo appear farther away than the clouds higher in the photo. This depth cue is called relative elevation or relative height, because when judging an object's distance, we consider its height in our visual field relative to other objects.
The monocular cues discussed so far-interposition, atmospheric perspective, texture gradient, linear perspective, size cues, and height cues-are sometimes called pictorial cues, because artists can use them to convey three-dimensional information. Another monocular cue cannot be represented on a canvas. Motion parallax occurs when objects at different distances from you appear to move at different rates when you are in motion. The next time you are driving along in a car, pay attention to the rate of movement of nearby and distant objects. The fence near the road appears to whiz past you, while the more distant hills or mountains appear to stay in virtually the same position as you move. The rate of an object”s movement provides a cue to its distance.
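The rate-of-movement cue follows from simple geometry: for an observer moving at speed v, an object at distance d directly abeam sweeps past at an angular rate of roughly v/d radians per second. A brief Python illustration, with invented speeds and distances:

```python
import math

def angular_speed_deg_per_s(observer_speed_mps: float, distance_m: float) -> float:
    """Angular rate of an object directly abeam of a moving observer: v / d rad/s."""
    return math.degrees(observer_speed_mps / distance_m)

# At ~30 m/s (highway speed), a fence 5 m from the road streaks by,
# while hills 3 km away barely move in the visual field.
print(round(angular_speed_deg_per_s(30.0, 5.0), 1))     # ~343.8 deg/s
print(round(angular_speed_deg_per_s(30.0, 3000.0), 3))  # ~0.573 deg/s
```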
Although motion plays an important role in depth perception, the perception of motion is an important phenomenon in its own right. It allows a baseball outfielder to calculate the speed and trajectory of a ball with extraordinary accuracy. Automobile drivers rely on motion perception to judge the speeds of other cars and avoid collisions. A cheetah must be able to detect and respond to the motion of antelopes, its chief prey, in order to survive.
Initially, you might think that you perceive motion when an object's image moves from one part of your retina to another part of your retina. In fact, that is what occurs if you are staring straight ahead and a person walks in front of you. Motion perception, however, is not that simple; if it were, the world would appear to move every time we moved our eyes. Keep in mind that you are almost always in motion. As you walk along a path, or simply move your head or your eyes, images from many stationary objects move around on your retina. How does your brain know which movement on the retina is due to your own motion and which is due to motion in the world? Understanding that distinction is the problem that faces psychologists who want to explain motion perception.
One explanation of motion perception involves a form of unconscious inference. That is, when we walk around or move our head in a particular way, we unconsciously expect that images of stationary objects will move on our retina. We discount such movement on the retina as due to our own bodily motion and perceive the objects as stationary.
In contrast, when we are moving and the image of an object does not move on our retina, we perceive that object as moving. Consider what happens as a person moves in front of you and you track that person's motion with your eyes. You move your head and your eyes to follow the person's movement, with the result that the image of the person does not move on your retina. The fact that the person's image stays in roughly the same part of the retina leads you to perceive the person as moving.
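A toy version of this unconscious-inference account can be written out explicitly. The Python sketch below is our illustrative formalization, not a model taken from the sources above: subtract the image motion expected from one's own eye movement from the measured retinal motion, and attribute whatever remains to motion in the world.

```python
def perceived_object_motion(retinal_shift_deg: float, eye_movement_deg: float) -> float:
    """Subtract the image motion expected from one's own eye movement from the
    measured retinal motion; the remainder is attributed to the world."""
    expected_shift = -eye_movement_deg  # moving the eyes right shifts images left
    return retinal_shift_deg - expected_shift

# Eyes sweep 5 degrees right; a stationary object's image shifts 5 degrees left:
print(perceived_object_motion(-5.0, 5.0))  # 0.0 -> seen as stationary

# Eyes track a walker, so the image barely moves on the retina:
print(perceived_object_motion(0.0, 5.0))   # 5.0 -> seen as moving
```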
Psychologist James J. Gibson thought that this explanation of motion perception was too complicated. He reasoned that perception does not depend on internal thought processes. He thought, instead, that the objects in our environment contain all the information necessary for perception. Think of the aerial acrobatics of a fly. Clearly, the fly is a master of motion and depth perception, yet few people would say the fly makes unconscious inferences. Gibson identified a number of cues for motion detection, including the covering and uncovering of background. Research has shown that motion detection is, in fact, much easier against a background. Thus, as a person moves in front of you, that person first covers and then uncovers portions of the background.
People may perceive motion when none actually exists. For example, motion pictures are really a series of slightly different still pictures flashed on a screen at a rate of 24 pictures, or frames, per second. From this rapid succession of still images, our brain perceives fluid motion, a phenomenon known as stroboscopic movement.
Experience in interacting with the world is vital to perception. For instance, kittens raised without visual experience or deprived of normal visual experience do not perceive the world accurately. In one experiment, researchers reared kittens in total darkness, except that for five hours a day the kittens were placed in an environment with only vertical lines. When the animals were later exposed to horizontal lines and forms, they had trouble perceiving these forms.
Philosophers have long debated the role of experience in human perception. In the late 17th century, Irish philosopher William Molyneux wrote to his friend, English philosopher John Locke, and asked him to consider the following scenario: Suppose that you could restore sight to a person who was blind. Using only vision, would that person be able to tell the difference between a cube and a sphere, which she or he had previously experienced only through touch? Locke, who emphasized the role of experience in perception, thought the answer was no. Modern science actually allows us to address this philosophical question, because a very small number of people who were blind have had their vision restored with the aid of medical technology.
Two researchers, British psychologist Richard Gregory and British-born neurologist Oliver Sacks, have written about their experiences with men who were blind for a long time due to cataracts and then had their vision restored late in life. When their vision was restored, they were often confused by visual input and were unable to see the world accurately. For instance, they could detect motion and perceive colors, but they had great difficulty with complex stimuli, such as faces. Much of their poor perceptual ability was probably due to the fact that the synapses in the visual areas of their brains had received little or no stimulation throughout their lives. Thus, without visual experience, the visual system does not develop properly.
Visual experience is useful because it creates memories of past stimuli that can later serve as a context for perceiving new stimuli. Thus, you can think of experience as a form of context that you carry around with you.
Ordinarily, when you read, you use the context of your prior experience with words to process the words you are reading. Context may also occur outside of you, as in the surrounding elements in a visual scene. When you are reading and you encounter an unusual word, you may be able to determine the meaning of the word by its context. Your perception depends on the context.
Although context is useful most of the time, on some rare occasions context can lead you to misperceive a stimulus. Look at Example B in the “Context Effects” illustration. Which of the green circles is larger? You may have guessed that the green circle on the right is larger. In fact, the two circles are the same size. Your perceptual system was fooled by the context of the surrounding red circles.
Against a background of slanted lines, a perfect square appears trapezoidal-that is, wider at the top than at the bottom. This illusion may occur because the lines create a sense of depth, making the top of the square seem farther away and larger.
A visual illusion occurs when your perceptual experience of a stimulus is substantially different from the actual stimulus you are viewing. In the previous example, you saw the green circles as different sizes, even though they were actually the same size. To experience another illusion, look at the illustration entitled “Zöllner Illusion.” What shape do you see? You may see a trapezoid that is wider at the top, but the actual shape is a square. Such illusions are natural artifacts of the way our visual systems work. As a result, illusions provide important insights into the functioning of the visual system. In addition, visual illusions are fun to experience.
Psychology is the scientific study of behavior and the mind. This definition contains three elements. The first is that psychology is a scientific enterprise that obtains knowledge through systematic and objective methods of observation and experimentation. Second is that psychologists study behavior, which refers to any action or reaction that can be measured or observed, such as the blink of an eye, an increase in heart rate, or the unruly violence that often erupts in a mob. Third is that psychologists study the mind, which refers to both conscious and unconscious mental states. These states cannot actually be seen, only inferred from observable behavior.
Many people think of psychologists as individuals who dispense advice, analyze personality, and help those who are troubled or mentally ill. But psychology is far more than the treatment of personal problems. Psychologists strive to understand the mysteries of human nature-why people think, feel, and act as they do. Some psychologists also study animal behavior, using their findings to determine laws of behavior that apply to all organisms and to formulate theories about how humans behave and think.
With its broad scope, psychology investigates an enormous range of phenomena: learning and memory, sensation and perception, motivation and emotion, thinking and language, personality and social behavior, intelligence, infancy and child development, mental illness, and much more. Furthermore, psychologists examine these topics from a variety of complementary perspectives. Some conduct detailed biological studies of the brain; others explore how we process information; others analyze the role of evolution; and still others study the influence of culture and society.
Psychologists seek to answer a wide range of important questions about human nature: Are individuals genetically predisposed at birth to develop certain traits or abilities? How accurate are people at remembering faces, places, or conversations from the past? What motivates us to seek out friends and sexual partners? Why do so many people become depressed and behave in ways that seem self-destructive? Do intelligence test scores predict success in school, or later in a career? What causes prejudice, and why is it so widespread? Can the mind be used to heal the body? Discoveries from psychology can help people understand themselves, relate better to others, and solve the problems that confront them.
The term psychology comes from two Greek words: psyche, which means “soul,” and logos, which means “the study of.” These root words were first combined in the 16th century, at a time when the human soul, spirit, or mind was seen as distinct from the body.
Psychology overlaps with other sciences that investigate behavior and mental processes. Certain parts of the field share much with the biological sciences, especially physiology, the biological study of the functions of living organisms and their parts. Like physiologists, many psychologists study the inner workings of the body from a biological perspective. However, psychologists usually focus on the activity of the brain and nervous system.
The social sciences of sociology and anthropology, which study human societies and cultures, also intersect with psychology. For example, both psychology and sociology explore how people behave when they are in groups. However, psychologists try to understand behavior from the vantage point of the individual, whereas sociologists focus on how behavior is shaped by social forces and social institutions. Anthropologists investigate behavior as well, paying particular attention to the similarities and differences between human cultures around the world.
Psychology is closely connected with psychiatry, which is the branch of medicine specializing in mental illnesses. The study of mental illness is one of the largest areas of research in psychology. Psychiatrists and psychologists differ in their training. A person seeking to become a psychiatrist first obtains a medical degree and then engages in further formal medical education in psychiatry. Most psychologists have a doctoral graduate degree in psychology.
The study of psychology draws on two kinds of research: basic and applied. Basic researchers seek to test general theories and build a foundation of knowledge, while applied psychologists study people in real-world settings and use the results to solve practical human problems. There are five major areas of research: biopsychology, clinical psychology, cognitive psychology, developmental psychology, and social psychology. Both basic and applied research are conducted in each of these fields of psychology.
This section describes basic research and other activities of psychologists in the five major fields of psychology. Applied research is discussed in the Practical Applications of Psychology section of this article.
Magnetic resonance imaging (MRI) reveals structural differences between a normal adult brain, left, and the brain of a person with schizophrenia, right. The schizophrenic brain has enlarged ventricles (fluid-filled cavities), shown in light gray. However, not all people with schizophrenia show this abnormality.
How do body and mind interact? Are body and mind fundamentally different parts of a human being, or are they one and the same, interconnected in important ways? Inspired by this classic philosophical debate, many psychologists specialize in biopsychology, the scientific study of the biological underpinnings of behavior and mental processes.
At the heart of this perspective is the notion that human beings, like other animals, have an evolutionary history that predisposes them to behave in ways that are uniquely adaptive for survival and reproduction. Biopsychologists work in a variety of subfields. Researchers in the field of ethology observe fish, reptiles, birds, insects, primates, and other animal species in their natural habitats. Comparative psychologists study animal behavior and make comparisons among different species, including humans. Researchers in evolutionary psychology theorize about the origins of human aggression, altruism, mate selection, and other behaviors. Those in behavioral genetics seek to estimate the extent to which human characteristics such as personality, intelligence, and mental illness are inherited.
Particularly important to biopsychology is a growing body of research in behavioral neuroscience, the study of the links between behavior and the brain and nervous system. Facilitated by computer-assisted imaging techniques that enable researchers to observe the living human brain in action, this area is generating great excitement. In the related area of cognitive neuroscience, researchers record physical activity in different regions of the brain as the subject reads, speaks, solves math problems, or engages in other mental tasks. Their goal is to pinpoint activities in the brain that correspond to different operations of the mind. In addition, many biopsychologists are involved in psychopharmacology, the study of how drugs affect mental and behavioral functions.
This chart illustrates the percentage of people in the United States who experience a particular mental illness at some point during their lives. The figures are derived from the National Comorbidity Survey, in which researchers interviewed more than 8000 people aged 15 to 54 years. Homeless people and those living in prisons, nursing homes, or other institutions were not included in the survey.
Clinical psychology is dedicated to the study, diagnosis, and treatment of mental illnesses and other emotional or behavioral disorders. More psychologists work in this field than in any other branch of psychology. In hospitals, community clinics, schools, and in private practice, they use interviews and tests to diagnose depression, anxiety disorders, schizophrenia, and other mental illnesses. People with these psychological disorders often suffer terribly. They experience disturbing symptoms that make it difficult for them to work, relate to others, and cope with the demands of everyday life.
Over the years, scientists and mental health professionals have made great strides in the treatment of psychological disorders. For example, advances in psychopharmacology have led to the development of drugs that relieve severe symptoms of mental illness. Clinical psychologists usually cannot prescribe drugs, but they often work in collaboration with a patient's physician. Drug treatment is often combined with psychotherapy, a form of intervention that relies primarily on verbal communication to treat emotional or behavioral problems. Over the years, psychologists have developed many different forms of psychotherapy. Some forms, such as psychoanalysis, focus on resolving internal, unconscious conflicts stemming from childhood and past experiences. Other forms, such as cognitive and behavioral therapies, focus more on the person's current level of functioning and try to help the individual change distressing thoughts, feelings, or behaviors.
In addition to studying and treating mental disorders, many clinical psychologists study the normal human personality and the ways in which individuals differ from one another. Still others administer a variety of psychological tests, including intelligence tests and personality tests. These tests are commonly given to individuals in the workplace or in school to assess their interests, skills, and level of functioning. Clinical psychologists also use tests to help them diagnose people with different types of psychological disorders.
An incredibly complex array of influences, including families, acquaintances, mass media, and society as a whole, helps determine the moral development of children. Although a rash of violent incidents in American schools in the late 1990s focused attention on deviant youth behavior, the vast majority of children seem to function harmoniously with others. In this August 1999 article from Scientific American, William Damon, director of the Center on Adolescence at Stanford University in California, explores recent findings on how young people develop morality.
Developmental psychology focuses on the changes that come with age. By comparing people of different ages, and by tracking individuals over time, researchers in this area study the ways in which people mature and change over the life span. Within this area, those who specialize in child development or child psychology study physical, intellectual, and social development in fetuses, infants, children, and adolescents. Recognizing that human development is a lifelong process, other developmental psychologists study the changes that occur throughout adulthood. Still others specialize in the study of old age, even the process of dying.
A “shock generator,” top, was used by American psychologist Stanley Milgram in experiments designed to test the obedience of people to authority. An experimenter instructed subjects to administer what they believed were painful electric shocks to Mr. Wallace, bottom, an accomplice of the experimenter who was strapped into a chair and connected to the generator by electrodes on his skin. No actual shocks occurred. The experimenter ordered the subjects to continue as the shocks increased to a level the subjects believed was dangerous or even lethal. In Milgram's initial study, 65 percent of people obeyed the experimenter and delivered the maximum shock of 450 volts. Milgram discusses his conclusions in this sound clip.
Social psychology is the scientific study of how people think, feel, and behave in social situations. Researchers in this field ask questions such as, How do we form impressions of others? How are people persuaded to change their attitudes or beliefs? What causes people to conform in group situations? What leads someone to help or ignore a person in need? Under what circumstances do people obey or resist orders?
By observing people in real-world social settings, and by carefully devising experiments to test people's social behavior, social psychologists learn about the ways people influence, perceive, and interact with one another. The study of social influence includes topics such as conformity, obedience to authority, the formation of attitudes, and the principles of persuasion. Researchers interested in social perception study how people come to know and evaluate one another, how people form group stereotypes, and the origins of prejudice. Other topics of particular interest to social psychologists include physical attraction, love and intimacy, aggression, altruism, and group processes. Many social psychologists are also interested in cultural influences on interpersonal behavior.
Whereas basic researchers test theories about mind and behavior, applied psychologists are motivated by a desire to solve practical human problems. Four particularly active areas of application are health, education, business, and law.
Today, many psychologists work in the emerging area of health psychology, the application of psychology to the promotion of physical health and the prevention and treatment of illness. Researchers in this area have shown that human health and well-being depend on both biological and psychological factors.
Many psychologists in this area study psychophysiological disorders (also called psychosomatic disorders), conditions that are brought on or influenced by psychological states, most often stress. These disorders include high blood pressure, headaches, asthma, and ulcers. Researchers have discovered that chronic stress is associated with an increased risk of coronary heart disease. In addition, stress can compromise the body's immune system and increase susceptibility to illness.
Health psychologists also study how people cope with stress. They have found that people who have family, friends, and other forms of social support are healthier and live longer than those who are more isolated. Other researchers in this field examine the psychological factors that underlie smoking, drinking, drug abuse, risky sexual practices, and other behaviors harmful to health.
Psychologists in all branches of the discipline contribute to our understanding of teaching, learning, and education. Some help develop standardized tests used to measure academic aptitude and achievement. Others study the ages at which children become capable of attaining various cognitive skills, the effects of rewards on their motivation to learn, computerized instruction, bilingual education, learning disabilities, and other relevant topics. Perhaps the best-known application of psychology to the field of education occurred in 1954 when, in the case of Brown v. Board of Education, the Supreme Court of the United States outlawed the segregation of public schools by race. In its ruling, the Court cited psychological studies suggesting that segregation had a damaging effect on black students and tended to encourage prejudice.
In addition to the contributions of psychology as a whole, two fields within psychology focus exclusively on education: educational psychology and school psychology. Educational psychologists seek to understand and improve the teaching and learning process within the classroom and other educational settings. Educational psychologists study topics such as intelligence and ability testing, student motivation, discipline and classroom management, curriculum plans, and grading. They also test general theories about how students learn most effectively. School psychologists work in elementary and secondary school systems administering tests, making placement recommendations, and counseling children with academic or emotional problems.
In the business world, psychology is applied in the workplace and in the marketplace. Industrial-organizational (I-O) psychology focuses on human behavior in the workplace and other organizations. I-O psychologists conduct research, teach in business schools or universities, and work in private industry. Many I-O psychologists study the factors that influence worker motivation, satisfaction, and productivity. Others study the personal traits and situations that foster great leadership. Still others focus on the processes of personnel selection, training, and evaluation. Studies have shown, for example, that face-to-face interviews sometimes result in poor hiring decisions and may be biased by the applicant's gender, race, and physical attractiveness. Studies have also shown that certain standardized tests can help to predict on-the-job performance.
Consumer psychology is the study of human decision making and behavior in the marketplace. In this area, researchers analyze the effects of advertising on consumers’ attitudes and buying habits. Consumer psychologists also study various aspects of marketing, such as the effects of packaging, price, and other factors that lead people to purchase one product rather than another.
Many psychologists today work in the legal system. They consult with attorneys, testify in court as expert witnesses, counsel prisoners, teach in law schools, and research various justice-related issues. Sometimes referred to as forensic psychologists, those who apply psychology to the law study a range of issues, including jury selection, eyewitness testimony, confessions to police, lie-detector tests, the death penalty, criminal profiling, and the insanity defense.
Studies in forensic psychology have helped to illuminate weaknesses in the legal system. For example, based on trial-simulation experiments, researchers have found that jurors are often biased by various facts not in evidence, that is, facts the judge tells them to disregard. In studying eyewitness testimony, researchers have staged mock crimes and asked witnesses to identify the assailant or recall other details. These studies have revealed that under certain conditions eyewitnesses are highly prone to error.
Psychologists in this area often testify in court as expert witnesses. In cases involving the insanity defense, forensic clinical psychologists are often called to court to give their opinion about whether individual defendants are sane or insane. Used as a legal defense, insanity means that defendants, because of a mental disorder, cannot appreciate the wrongfulness of their conduct or control it. Defendants who are legally insane at the time of the offense may be absolved of criminal responsibility for their conduct and judged not guilty. Psychologists are often called to testify in court on other controversial matters as well, including the accuracy of eyewitness testimony, the mental competence (fitness) of defendants to stand trial, and the reliability of early childhood memories.
Psychology has applications in many other domains of human life. Environmental psychologists focus on the relationship between people and their physical surroundings. They study how street noise, heat, architectural design, population density, and crowding affect people's behavior and mental health. In a related field, human factors psychologists work on the design of appliances, furniture, tools, and other manufactured items in order to maximize their comfort, safety, and convenience. Sports psychologists advise athletes and study the physiological, perceptual-motor, motivational, developmental, and social aspects of athletic performance. Other psychologists specialize in the study of political behavior, religion, sexuality, or behavior in the military.
Psychologists from all areas of specialization use the scientific method to test their theories about behavior and mental processes. A theory is an organized set of principles that is designed to explain and predict some phenomenon. Good theories also provide specific testable predictions, or hypotheses, about the relation between two or more variables. Formulating a hypothesis to be tested is the first important step in conducting research.
Over the years, psychologists have devised numerous ways to test their hypotheses and theories. Many studies are conducted in a laboratory, usually located at a university. The laboratory setting allows researchers to control what happens to their subjects and make careful and precise observations of behavior. For example, a psychologist who studies memory can bring volunteers into the lab, ask them to memorize a list of words or pictures, and then test their recall of that material seconds, minutes, or days later.
As indicated by the term field research, studies may also be conducted in real-world locations. For example, a psychologist investigating the reliability of eyewitness testimony might stage phony crimes in the street and then ask unsuspecting bystanders to identify the culprit from a set of photographs. Psychologists observe people in a wide variety of other locations outside the laboratory, including classrooms, offices, hospitals, college dormitories, bars, restaurants, and prisons.
In both laboratory and field settings, psychologists conduct their research using a variety of methods. Among the most common methods are archival studies, case studies, surveys, naturalistic observations, correlational studies, experiments, literature reviews, and measures of brain activity.
One way to learn about people is through archival studies, an examination of existing records of human activities. Psychological researchers often examine old newspaper stories, medical records, birth certificates, crime reports, popular books, and artwork. They may also examine statistical trends of the past, such as crime rates, birth rates, marriage and divorce rates, and employment rates. The strength of such measures is that, because researchers observe people only secondhand, they cannot unwittingly influence the subjects by their presence. However, available records of human activity are not always complete or detailed enough to be useful.
Archival studies are particularly valuable for examining cultural or historical trends. For example, in one study of physical attractiveness, researchers wanted to know if American standards of female beauty have changed over several generations. These researchers looked through two popular women's magazines between 1901 and 1981 and examined the measurements of the female models. They found that “curvaceousness” (as measured by the bust-to-waist ratio) varied over time, with a boyish, slender look considered desirable in some time periods but not in others.
Sometimes psychologists interview, test, observe, and investigate the backgrounds of specific individuals in detail. Such case studies are conducted when researchers believe that an in-depth look at one individual will reveal something important about people in general.
Case studies often take a great deal of time to complete, and the results may be limited by the fact that the subject is atypical. Yet case studies have played a prominent role in the development of psychology. Austrian physician Sigmund Freud based his theory of psychoanalysis on his experiences with troubled patients. Swiss psychologist Jean Piaget first began to formulate a theory of intellectual development by questioning his own children. Neuroscientists learn about how the human brain works by testing patients who have suffered brain damage. Cognitive psychologists learn about human intelligence by studying child prodigies and other gifted individuals. Social psychologists learn about group decision making by analyzing the policy decisions of government and business groups. When an individual is exceptional in some way, or when a hypothesis can be tested only through intensive, long-term observation, the case study is a valuable method.
An electroencephalogram, or EEG, is a recording of the electrical activity, in the form of action potentials, of the cerebral cortex of the brain. An EEG is made by attaching electrodes to the scalp, then collecting, amplifying, and recording the electrical impulses of the brain.
Biopsychologists interested in the links between brain and behavior use a variety of specialized techniques in their research. One approach is to observe and test patients who have suffered damage to a specific region of the brain to determine what mental functions and behaviors were affected by that damage. British-born neurologist Oliver Sacks has written several books in which he describes case studies of brain-damaged patients who exhibited specific deficits in their speech, memory, sleep, and even in their personalities.
These positron emission tomography (PET) scans of the brain show the activity of brain cells in the resting state and during three types of auditory stimulation. PET uses radioactive substances introduced into the brain to measure such brain functions as cerebral metabolism, blood flow and volume, oxygen use, and the formation of neurotransmitters. This imaging method collects data from many different angles, feeding the information into a computer that produces a series of cross-sectional images.
A second approach is to physically alter the brain and measure the effects of that change on behavior. The alteration can be achieved in different ways. For example, animal researchers often damage or destroy a specific region of a laboratory animal's brain through surgery. Other researchers might spark or inhibit activity in the brain through the use of drugs or electrical stimulation.
This magnetic resonance imaging (MRI) scan of a normal adult head shows the brain, airways, and soft tissues of the face. The large cerebral cortex, appearing in yellow and green, forms the bulk of the brain tissue; the circular cerebellum, center left, in red, and the elongated brainstem, center, in red, are also prominently shown.
Another way to study the relationship between the brain and behavior is to record the activity of the brain with machines while a subject engages in certain behaviors or activities. One such instrument is the electroencephalograph, a device that can detect, amplify, and record the level of electrical activity in the brain by means of metal electrodes taped to the scalp.
Advances in technology in the early 1970s allowed psychologists to see inside the living human brain for the first time without physically cutting into it. Today, psychologists use a variety of sophisticated brain-imaging techniques. The computerized axial tomography (CT or CAT) scan provides a computer-enhanced X-ray image of the brain. The more advanced positron emission tomography (PET) scan tracks the level of activity in specific parts of the brain by measuring the amount of glucose being used there. These measurements are then fed to a computer, which produces a color-coded image of brain activity. Another technique is magnetic resonance imaging (MRI), which produces high-resolution cross-sectional images of the brain. A high-speed version of MRI known as functional MRI produces moving images of the brain as its activity changes in real time. These relatively new brain imaging techniques have generated great excitement, because they allow researchers to identify parts of the brain that are active while people read, speak, listen to music, solve math problems, and engage in other mental activities.
In contrast with the in-depth study of one person, surveys describe a specific population or group of people. Surveys involve asking people a series of questions about their behaviors, thoughts, or opinions. Surveys can be conducted in person, over the phone, or through the mail. Most surveys study a specific group: for example, college students, working mothers, men, or homeowners. Rather than questioning every person in the group, survey researchers choose a representative sample of people and generalize the findings to the larger population.
Surveys may pertain to almost any topic. Often surveys ask people to report their feelings about various social and political issues, the TV shows they watch, or the consumer products they purchase. Surveys are also used to learn about people's sexual practices; to estimate the use of cigarettes, alcohol, and other drugs; and to approximate the proportion of people who experience feelings of life satisfaction, loneliness, and other psychological states that cannot be directly observed.
Surveys must be carefully designed and conducted to ensure their accuracy. The results can be influenced, and biased, by two factors: who the respondents are and how the questions are asked. For a survey to be accurate, the sample being questioned must be representative of the population on key characteristics such as sex, race, age, region, and cultural background. To ensure similarity to the larger population, survey researchers usually try to make sure that they have a random sample, a method of selection in which everyone in the population has an equal chance of being chosen.
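To make the idea of a random sample concrete, here is a minimal Python sketch; the population, its size, and the sample size are invented for illustration:

```python
import random

# Hypothetical population of 10,000 numbered respondents.
population = [f"person_{i}" for i in range(10_000)]

# random.sample draws without replacement, so every member of the
# population has the same chance of being chosen, which is the
# defining property of a simple random sample.
sample = random.sample(population, k=500)

print(len(sample))   # 500 respondents
print(sample[:3])    # three arbitrary sampled respondents
```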
When the sample is not random, the results can be misleading. For example, prior to the 1936 United States presidential election, pollsters for the magazine Literary Digest mailed postcards to more than 10 million people who were listed in telephone directories or as registered owners of automobiles. The cards asked for whom they intended to vote. Based on the more than 2 million ballots that were returned, the Literary Digest predicted that Republican candidate Alfred M. Landon would win in a landslide over Democrat Franklin D. Roosevelt. At the time, however, more Republicans than Democrats owned telephones and automobiles, skewing the poll results. In the election, Landon won only two states.
The results of survey research can also be influenced by the way that questions are asked. For example, when asked about “welfare,” a majority of Americans in one survey said that the government spends too much money. But when asked about “assistance to the poor,” significantly fewer people gave this response.
In naturalistic observation, the researcher observes people as they behave in the real world. The researcher simply records what occurs and does not intervene in the situation. Psychologists use naturalistic observation to study the interactions between parents and children, doctors and patients, police and citizens, and managers and workers.
Naturalistic observation is common in anthropology, in which field workers seek to understand the everyday life of a culture. Ethologists, who study the behavior of animals in their natural habitat, also use this method. For example, British ethologist Jane Goodall spent many years in African jungles observing chimpanzees: their social structure, courting rituals, struggles for dominance, eating habits, and other behaviors. Naturalistic observation is also common among developmental psychologists who study social play, parent-child attachments, and other aspects of child development. These researchers observe children at home, in school, on the playground, and in other settings.
Case studies, surveys, and naturalistic observations are used to describe behavior. Correlational studies go a step further: they are designed to find statistical connections, or correlations, between variables so that some factors can be used to predict others.
A correlation is a statistical measure of the extent to which two variables are associated. A positive correlation exists when two variables increase or decrease together. For example, frustration and aggression are positively correlated, meaning that as frustration rises, so do acts of aggression. More of one means more of the other. A negative correlation exists when increases in one variable are accompanied by decreases in the other, and vice versa. For example, friendships and stress-induced illness are negatively correlated, meaning that the more close friends a person has, the fewer stress-related illnesses the person suffers. More of one means less of the other.
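For illustration, a short Python sketch computes the Pearson correlation coefficient, the most common such measure; the scores below are invented to mirror the two examples just described:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: the covariance of x and y
    divided by the product of their standard deviations. It ranges
    from -1 (perfect negative) to +1 (perfect positive)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented scores: aggression rises with frustration (positive r)...
frustration = [1, 2, 3, 4, 5, 6]
aggression = [2, 2, 3, 5, 5, 7]
print(pearson_r(frustration, aggression))   # close to +1

# ...while stress-related illness falls as close friendships rise
# (negative r).
friends = [8, 6, 5, 4, 2, 1]
illnesses = [0, 1, 1, 2, 3, 4]
print(pearson_r(friends, illnesses))        # close to -1
```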
Based on correlational evidence, researchers can use one variable to make predictions about another variable. But researchers must use caution when drawing conclusions from correlations. It is natural, but incorrect, to assume that because one variable predicts another, the first must have caused the second. For example, one might assume that frustration triggers aggression, or that friendships foster health. Regardless of how intuitive or accurate these conclusions may be, correlation does not prove causation. Thus, although it is possible that frustration causes aggression, there are other ways to interpret the correlation. For example, it is possible that aggressive people are more likely to suffer social rejection and become frustrated as a result.
Correlations enable researchers to predict one variable from another. But to determine if one variable actually causes another, psychologists must conduct experiments. In an experiment, the psychologist manipulates one factor in a situation, keeping other aspects of the situation constant, and then observes the effect of the manipulation on behavior. The people whose behavior is being observed are the subjects of the experiment. The factor that an experimenter varies (the proposed cause) is known as the independent variable, and the behavior being measured (the proposed effect) is called the dependent variable. In a test of the hypothesis that frustration triggers aggression, frustration would be the independent variable, and aggression the dependent variable.
There are three requirements for conducting a valid scientific experiment: (1) control over the independent variable, (2) the use of a comparison group, and (3) the random assignment of subjects to conditions. In its most basic form, then, a typical experiment compares a large number of subjects who are randomly assigned to experience one condition with a group of similar subjects who are not. Those who experience the condition compose the experimental group, and those who do not make up the control group. If the two groups differ significantly in their behavior during the experiment, that difference can be attributed to the presence of the condition, or independent variable. For example, to test the hypothesis that frustration triggers aggression, one group of researchers brought subjects into a laboratory, impeded their efforts to complete an important task (other subjects in the experiment were not impeded), and measured their aggressiveness toward another person. These researchers found that subjects who had been frustrated were more aggressive than those who had not been frustrated.
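The logic of random assignment and group comparison can be sketched in a toy Python simulation; the subjects, the aggression scores, and the size of the frustration effect are all invented for the example, not data from the study described:

```python
import random
import statistics

random.seed(1)  # make the toy example reproducible

# 40 hypothetical subjects, randomly assigned to two conditions.
subjects = list(range(40))
random.shuffle(subjects)
experimental = subjects[:20]   # will be frustrated during the task
control = subjects[20:]        # will not be frustrated

def aggression_score(frustrated):
    # Invented outcome measure: frustrated subjects score somewhat
    # higher on average, plus random individual variation.
    return random.gauss(5.0, 1.0) + (2.0 if frustrated else 0.0)

exp_scores = [aggression_score(True) for _ in experimental]
ctrl_scores = [aggression_score(False) for _ in control]

# Because only the independent variable (frustration) differs
# systematically between the groups, a reliable difference in mean
# aggression can be attributed to it.
print(statistics.mean(exp_scores), statistics.mean(ctrl_scores))
```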
Psychologists use many different methods in their research. Yet no single experiment can fully prove a hypothesis, so the science of psychology builds slowly over time. First, a new discovery must be replicated. Replication refers to the process of conducting a second, nearly identical study to see if the initial findings can be repeated. If so, then researchers try to determine if these findings can be applied, transferred, or generalized to other settings. Generalizability refers to the extent to which a finding obtained under one set of conditions can also be obtained at another time, in another place, and in other populations.
Because the science of psychology proceeds in small increments, many studies must be conducted before clear patterns emerge. To summarize and interpret an entire body of research, psychologists rely on two methods. One method is a narrative review of the literature, in which a reviewer subjectively evaluates the strengths and weaknesses of the various studies on a topic and argues for certain conclusions. Another method is meta-analysis, a statistical procedure used to combine the results from many different studies. By meta-analyzing a body of research, psychologists can often draw precise conclusions concerning the strength and breadth of support for a hypothesis.
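As a sketch of the arithmetic behind one common (fixed-effect) form of meta-analysis, the following Python snippet pools effect sizes by inverse-variance weighting; the five studies and their numbers are hypothetical:

```python
# Invented (effect_size, variance) pairs from five hypothetical studies.
studies = [(0.40, 0.04), (0.25, 0.09), (0.55, 0.02),
           (0.30, 0.05), (0.45, 0.03)]

# Fixed-effect pooling: weight each study's effect by 1/variance,
# so more precise studies count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

# The variance of the pooled estimate is 1 / (sum of the weights).
pooled_var = 1.0 / sum(weights)

print(round(pooled, 3), round(pooled_var, 4))
```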
Psychological research involving human subjects raises ethical concerns about the subjects' right to privacy, the possible harm or discomfort caused by experimental procedures, and the use of deception. Over the years, psychologists have established various ethical guidelines. The American Psychological Association recommends that researchers (1) tell prospective subjects what they will experience so they can give informed consent to participate; (2) instruct subjects that they may withdraw from the study at any time; (3) minimize all harm and discomfort; (4) keep subjects' responses and behaviors confidential; and (5) debrief subjects who were deceived in some way by fully explaining the research after they have participated. Some psychologists argue that such rules should never be broken. Others say that some degree of flexibility is needed in order to study certain important issues, such as the effects of stress on test performance.
Laboratory experiments that use rats, mice, rabbits, pigeons, monkeys, and other animals are an important part of psychology, just as in medicine. Animal research serves three purposes in psychology: to learn more about certain types of animals, to discover general principles of behavior that pertain to all species, and to study variables that cannot ethically be tested with human beings. But is it ethical to experiment on animals?
Some animal rights activists believe that it is wrong to use animals in experiments, particularly in those that involve surgery, drugs, social isolation, food deprivation, electric shock, and other potentially harmful procedures. These activists see animal experimentation as unnecessary and question whether results from such research can be applied to humans. Many activists also argue that like humans, animals have the capacity to suffer and feel pain. In response to these criticisms, many researchers point out that animal experimentation has helped to improve the quality of human life. They note that animal studies have contributed to the treatment of anxiety, depression, and other mental disorders. Animal studies have also contributed to our understanding of conditions such as Alzheimer's disease, obesity, alcoholism, and the effects of stress on the immune system. Most researchers follow strict ethical guidelines that require them to minimize pain and discomfort to animals and to use the least invasive procedures possible. In addition, federal animal-protection laws in the United States require researchers to provide humane care and housing of animals and to tend to the psychological well-being of primates used in research.
One of the youngest sciences, psychology did not emerge as a formal discipline until the late 19th century. But its roots extend to the ancient past. For centuries, philosophers and religious scholars have wondered about the nature of the mind and the soul. Thus, the history of psychological thought begins in philosophy.
From about 600 to 300 BC, Greek philosophers inquired about a wide range of psychological topics. They were especially interested in the nature of knowledge and how human beings come to know the world, a field of philosophy known as epistemology. The Greek philosopher Socrates and his followers, Plato and Aristotle, wrote about pleasure and pain, knowledge, beauty, desire, free will, motivation, common sense, rationality, memory, and the subjective nature of perception. They also theorized about whether human traits are innate or the product of experience. In the field of ethics, philosophers of the ancient world probed a variety of psychological questions: Are people inherently good? How can people attain happiness? What motives or drives do people have? Are human beings naturally social?
Second-century physician Galen was one of the most influential figures in ancient medicine, second in importance only to Hippocrates. Using animal dissection and other means, Galen proposed numerous theories about the function of different parts of the human body, most notably the brain, heart, and liver. He also derived an impressive understanding of the differences between veins and arteries. In the selection below, Galen discusses his idea that the optimal state, or “constitution,” of the body should be a perfect balance of various internal and external components.
Early thinkers also considered the causes of mental illness. Many ancient societies thought that mental illness resulted from supernatural causes, such as the anger of gods or possession by evil spirits. Both Socrates and Plato focused on psychological forces as the cause of mental disturbance. For example, Plato thought madness results when a person's irrational, animal-like psyche (mind or soul) overwhelms the intellectual, rational psyche. The Greek physician Hippocrates viewed mental disorders as stemming from natural causes, and he developed the first classification system for mental disorders. Galen, a Greek physician who lived in the 2nd century AD, echoed this belief in a physiological basis for mental disorders. He thought they resulted from an imbalance of the four bodily humors: black bile, yellow bile, blood, and phlegm. For example, Galen thought that melancholia (depression) resulted from a person having too much black bile.
More recently, many other men and women contributed to the birth of modern psychology. In the 1600s French mathematician and philosopher René Descartes theorized that the body and mind are separate entities. He regarded the body as a physical entity and the mind as a spiritual entity, and believed the two interacted only through the pineal gland, a tiny structure at the base of the brain. This position became known as dualism. According to dualism, the behavior of the body is determined by mechanistic laws and can be measured in a scientific manner. But the mind, which transcends the material world, cannot be similarly studied.
English philosophers Thomas Hobbes and John Locke disagreed. They argued that all human experiences, including sensations, images, thoughts, and feelings, are physical processes occurring within the brain and nervous system. Therefore, these experiences are valid subjects of study. In this view, which later became known as monism, the mind and body are one and the same. Today, in light of years of research indicating that the physical and mental aspects of the human experience are intertwined, most psychologists reject a rigid dualist position.
Many philosophers of the past also debated the question of whether human knowledge is inborn or the product of experience. Nativists believed that certain elementary truths are innate to the human mind and need not be gained through experience. In contrast, empiricists believed that at birth, a person's mind is like a tabula rasa, or blank slate, and that all human knowledge ultimately comes from sensory experience. Today, psychologists generally agree that both types of factors are important in the acquisition of knowledge.
Modern psychology can also be traced to the study of physiology (a branch of biology that studies living organisms and their parts) and medicine. In the 19th century, physiologists began studying the human brain and nervous system, paying particular attention to the topic of sensation. For example, in the 1850s and 1860s German scientist Hermann von Helmholtz studied sensory receptors in the eye and ear, investigating topics such as the speed of neural impulses, color vision, hearing, and space perception. Another important German scientist, Gustav Fechner, founded psychophysics, the study of the relationship between physical stimuli and our subjective sensations of those stimuli. Building on the work of his compatriot Ernst Weber, Fechner developed a technique for measuring people's subjective sensations of various physical stimuli. He sought to determine the minimum intensity level of a stimulus that is needed to produce a sensation.
English naturalist Charles Darwin was particularly influential in the development of psychology. In 1859 Darwin published On the Origin of Species, in which he proposed that all living forms were a product of the evolutionary process of natural selection. Darwin had based his theory on plants and nonhuman animals, but he later asserted that people had evolved through similar processes, and that human anatomy and behavior could be analyzed in the same way. Darwin's theory of evolution invited comparisons between humans and other animals, and scientists soon began using animals in psychological research.
French neurologist Jean Martin Charcot shows colleagues a female patient with hysteria at La Salpêtrière, a Paris hospital. Charcot gained renown throughout Europe for his method of treating hysteria and other “nervous disorders” through hypnosis. Charcot's belief that hysteria had psychological rather than physical origins influenced Austrian neurologist Sigmund Freud, who studied under Charcot.
In medicine, physicians were discovering new links between the brain and language. For example, French surgeon Paul Broca discovered that people who suffer damage to a specific part of the brain’s left hemisphere lose the ability to produce fluent speech. This area of the brain became known as Broca's area. A German neurologist, Carl Wernicke, reported in 1874 that people with damage to a different area of the left hemisphere lose their ability to comprehend speech. This region became known as Wernicke's area.
Other physicians focused on the study of mental disorders. In the late 19th century, French neurologist Jean Charcot discovered that some of the patients he was treating for so-called nervous disorders could be cured through hypnosis, a psychological, not medical, form of intervention. Charcot's work had a profound impact on Sigmund Freud, an Austrian neurologist whose theories would later revolutionize psychology.
Austrian physician Franz Friedrich Anton Mesmer pioneered the induction of trance-like states to cure medical ailments. Mesmer's work sparked interest among some of his scientific colleagues but was later dismissed as charlatanism. Today, however, Mesmer is considered a pioneer in hypnosis, which is widely believed to be helpful in managing certain medical conditions.
Psychology was predated and somewhat influenced by various pseudoscientific schools of thought, that is, theories that had no scientific foundation. In the late 18th and early 19th centuries, Viennese physician Franz Joseph Gall developed phrenology, the theory that psychological traits and abilities reside in certain parts of the brain and can be measured by the bumps and indentations in the skull. Although phrenology found popular acceptance among the lay public in western Europe and the United States, most scientists ridiculed Gall's ideas. However, research later confirmed the more general point that certain mental activities can be traced to specific parts of the brain.
Physicians in the 18th and 19th centuries used crude devices to treat mental illness, none of which offered any real relief. The circulating swing, top left, was used to spin depressed patients at high speed. American physician Benjamin Rush devised the tranquilizing chair, top right, to calm people with mania. The crib, bottom, was widely used to restrain violent patients.
Another Viennese physician of the 18th century, Franz Anton Mesmer, believed that illness was caused by an imbalance of magnetic fluids in the body. He believed he could restore the balance by passing his hands across the patient's body and waving a magnetic wand over the afflicted area. Mesmer claimed that his patients would fall into a trance and awaken from it feeling better. The medical community, however, soundly rejected the claim. Today, Mesmer's technique, known as mesmerism, is regarded as an early forerunner of modern hypnosis.
Modern psychology is deeply rooted in the older disciplines of philosophy and physiology. But the official birth of psychology is often traced to 1879, at the University of Leipzig in Germany. There, physiologist Wilhelm Wundt established the first laboratory dedicated to the scientific study of the mind. Wundt's laboratory soon attracted leading scientists and students from Europe and the United States. Among these were James McKeen Cattell, one of the first psychologists to study individual differences through the administration of “mental tests”; Emil Kraepelin, a German psychiatrist who postulated a physical cause for mental illnesses and in 1883 published the first classification system for mental disorders; and Hugo Münsterberg, the first to apply psychology to industry and the law. Wundt was extraordinarily productive over the course of his career. He supervised a total of 186 doctoral dissertations, taught thousands of students, founded the first scholarly psychological journal, and published innumerable scientific studies. His goal, which he stated in the preface of a book he wrote, was “to mark out a new domain of science.”
Unlike the philosophers who preceded him, Wundt approached the study of the mind through systematic and rigorous observation. His primary method of research was introspection. This technique involved training people to concentrate and report on their conscious experiences as they reacted to visual displays and other stimuli. In his laboratory, Wundt systematically studied topics such as attention span, reaction time, vision, emotion, and time perception. By recruiting people to serve as subjects, varying the conditions of their experience, and then rigorously repeating all observations, Wundt laid the foundation for the modern psychology experiment.
In the United States, Harvard University professor William James observed the emergence of psychology with great interest. Although trained in physiology and medicine, James was fascinated by psychology and philosophy. In 1875 he offered his first course in psychology. In 1890 James published a two-volume book entitled Principles of Psychology. It immediately became the leading psychology text in the United States, and it brought James a worldwide reputation as a man of great ideas and inspiration. In 28 chapters, James wrote about the stream of consciousness, the formation of habits, individuality, the link between mind and body, emotions, the self, and other topics that inspired generations of psychologists. Today, historians consider James the founder of American psychology.
James's students also made lasting contributions to the field. In 1883 G. Stanley Hall (who also studied with Wundt) established the first true psychology laboratory in the United States, at Johns Hopkins University, and in 1892 he founded and became the first president of the American Psychological Association. Mary Whiton Calkins created an important technique for studying memory and conducted one of the first studies of dreams. In 1905 she was elected the first female president of the American Psychological Association. Edward Lee Thorndike conducted some of the first experiments on animal learning and wrote a pioneering textbook on educational psychology.
During the first decades of psychology, two main schools of thought dominated the field: structuralism and functionalism. Structuralism was a system of psychology developed by Edward Bradford Titchener, an American psychologist who studied under Wilhelm Wundt. Structuralists believed that the task of psychology is to identify the basic elements of consciousness in much the same way that physicists break down the basic particles of matter. For example, Titchener identified four elements in the sensation of taste: sweet, sour, salty, and bitter. The main method of investigation in structuralism was introspection. The influence of structuralism in psychology faded after Titchener's death in 1927.
In opposition to the structuralist movement, William James promoted a school of thought known as functionalism, the belief that the real task of psychology is to investigate the function, or purpose, of consciousness rather than its structure. James was highly influenced by Darwin's evolutionary theory that all characteristics of a species must serve some adaptive purpose. Functionalism enjoyed widespread appeal in the United States. Its three main leaders were James Rowland Angell, a student of James; John Dewey, who was also one of the foremost American philosophers and educators; and Harvey A. Carr, a psychologist at the University of Chicago.
In their efforts to understand human behavioral processes, the functional psychologists developed the technique of longitudinal research, which consists of interviewing, testing, and observing one person over a long period of time. Such a system permits the psychologist to observe and record the person's development and how he or she reacts to different circumstances.
In the late 19th century Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives. In this 1938 British Broadcasting Corporation interview, Freud recounts the early resistance to his ideas and later acceptance of his work. Freud's speech is slurred because he was suffering from cancer of the jaw. He died the following year.
Alongside Wundt and James, a third prominent leader of the new psychology was Sigmund Freud, a Viennese neurologist of the late 19th and early 20th century. Through his clinical practice, Freud developed a very different approach to psychology. After graduating from medical school, Freud treated patients who appeared to suffer from certain ailments but had nothing physically wrong with them. These patients were not consciously faking their symptoms, and often the symptoms would disappear through hypnosis, or even just by talking. On the basis of these observations, Freud formulated a theory of personality and a form of psychotherapy known as psychoanalysis. It became one of the most influential schools of Western thought of the 20th century.
Freud introduced his new theory in The Interpretation of Dreams (1899), the first of 24 books he would write. The theory is summarized in Freud's last book, An Outline of Psychoanalysis, published in 1940, after his death. In contrast to Wundt and James, for whom psychology was the study of conscious experience, Freud believed that people are motivated largely by unconscious forces, including strong sexual and aggressive drives. He likened the human mind to an iceberg: The small tip that floats on the water is the conscious part, and the vast region beneath the surface comprises the unconscious. Freud believed that although unconscious motives can be temporarily suppressed, they must find a suitable outlet in order for a person to maintain a healthy personality.
To probe the unconscious mind, Freud developed the psychotherapy technique of free association. In free association, the patient reclines and talks about thoughts, wishes, memories, and whatever else comes to mind. The analyst tries to interpret these verbalizations to determine their psychological significance. In particular, Freud encouraged patients to free associate about their dreams, which he believed were the “royal road to the unconscious.” According to Freud, dreams are disguised expressions of deep, hidden impulses. Thus, as patients recount the conscious manifest content of dreams, the psychoanalyst tries to unmask the underlying latent content, what the dreams really mean.
From the start of psychoanalysis, Freud attracted followers, many of whom later proposed competing theories. As a group, these neo-Freudians shared the assumption that the unconscious plays an important role in a person's thoughts and behaviors. Most parted company with Freud, however, over his emphasis on sex as a driving force. For example, Swiss psychiatrist Carl Jung theorized that all humans inherit a collective unconscious that contains universal symbols and memories from their ancestral past. Austrian physician Alfred Adler theorized that people are primarily motivated to overcome inherent feelings of inferiority. He wrote about the effects of birth order in the family and coined the term sibling rivalry. Karen Horney, a German-born American psychiatrist, argued that humans have a basic need for love and security, and become anxious when they feel isolated and alone.
Motivated by a desire to uncover unconscious aspects of the psyche, psychoanalytic researchers devised what are known as projective tests. A projective test asks people to respond to an ambiguous stimulus such as a word, an incomplete sentence, an inkblot, or an ambiguous picture. These tests are based on the assumption that if a stimulus is vague enough to accommodate different interpretations, then people will use it to project their unconscious needs, wishes, fears, and conflicts. The most popular of these tests are the Rorschach Inkblot Test, which consists of ten inkblots, and the Thematic Apperception Test, which consists of drawings of people in ambiguous situations.
Psychoanalysis has been criticized on various grounds and is not as popular as in the past. However, Freud's overall influence on the field has been deep and lasting, particularly his ideas about the unconscious. Today, most psychologists agree that people can be profoundly influenced by unconscious forces, and that people often have a limited awareness of why they think, feel, and behave as they do.
In 1885 German psychologist Hermann Ebbinghaus conducted one of the first studies on memory, using himself as a subject. He memorized lists of nonsense syllables and then tested his memory of the syllables at intervals ranging from 20 minutes to 31 days. As shown in this curve, he found that he remembered less than 40 percent of the items after nine hours, but that the rate of forgetting leveled off over time.
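For illustration, a brief Python sketch models such a forgetting curve with a power-law decay; the functional form and the exponent are assumptions chosen for the example, not Ebbinghaus's own equation, but they reproduce the fast initial drop and later leveling off:

```python
def retention(hours, decay=0.4):
    """Toy forgetting curve: fraction of material retained after
    `hours`. A power law falls quickly at first and then levels
    off; the exponent is an arbitrary choice for illustration."""
    return (1.0 + hours) ** -decay

# Intervals echoing the range Ebbinghaus tested (20 minutes to 31 days).
for t in [0.33, 1.0, 9.0, 24.0, 31 * 24.0]:
    print(f"{t:8.2f} h: {retention(t):.0%} retained")
```

At nine hours this toy curve gives roughly 40 percent retention, close to the figure described above, though it is not fitted to the real data at other intervals.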
In addition to Wundt, James, and Freud, many other scholars helped to define the science of psychology. In 1885 German psychologist Hermann Ebbinghaus conducted a series of classic experiments on memory, using nonsense syllables to establish principles of retention and forgetting. In 1896 American psychologist Lightner Witmer opened the first psychological clinic, which initially treated children with learning disorders. He later founded the first journal and training program in a new helping profession that he named clinical psychology. In 1905 French psychologist Alfred Binet devised the first major intelligence test in order to assess the academic potential of schoolchildren in Paris. The test was later translated and revised by Stanford University psychologist Lewis Terman and is now known as the Stanford-Binet intelligence test. In 1908 American psychologist Margaret Floy Washburn (who later became the second female president of the American Psychological Association) wrote an influential book called The Animal Mind, in which she synthesized animal research to that time.
In 1912 German psychologist Max Wertheimer discovered that when two stationary lights flash in succession, people see the display as a single light moving back and forth. This illusion inspired the Gestalt psychology movement, which was based on the notion that people tend to perceive a well-organized whole or pattern that is different from the sum of isolated sensations. Other leaders of Gestalt psychology included Wertheimer's close associates Wolfgang Köhler and Kurt Koffka. Later, German American psychologist Kurt Lewin extended Gestalt psychology to studies of motivation, personality, social psychology, and conflict resolution. German American psychologist Fritz Heider then extended this approach to the study of how people perceive themselves and others.
In the late 19th century, American psychologist Edward L. Thorndike conducted some of the first experiments on animal learning. Thorndike formulated the law of effect, which states that behaviors that are followed by pleasant consequences will be more likely to be repeated in the future.
William James had defined psychology as “the science of mental life.” But in the early 1900s, growing numbers of psychologists voiced criticism of the approach used by scholars to explore conscious and unconscious mental processes. These critics doubted the reliability and usefulness of the method of introspection, in which subjects are asked to describe their own mental processes during various tasks. They were also critical of Freud's emphasis on unconscious motives. In search of more scientific methods, psychologists gradually turned away from research on invisible mental processes and began to study only behavior that could be observed directly. This approach, known as behaviorism, ultimately revolutionized psychology and remained the dominant school of thought for nearly 50 years.
Russian physiologist Ivan Pavlov discovered a major type of learning, classical conditioning, by accident while conducting experiments on digestion in the early 1900s. He devoted the rest of his life to discovering the underlying principles of classical conditioning.
Among the first to lay the foundation for the new behaviorism was American psychologist Edward Lee Thorndike. In 1898 Thorndike conducted a series of experiments on animal learning. In one study, he put cats into a cage, put food just outside the cage, and timed how long it took the cats to learn how to open an escape door that led to the food. Placing the animals in the same cage again and again, Thorndike found that the cats would repeat behaviors that worked and would escape more and more quickly with successive trials. Thorndike thereafter proposed the law of effect, which states that behaviors that are followed by a positive outcome are repeated, while those followed by a negative outcome or none at all are extinguished.
In 1906 Russian physiologist Ivan Pavlov, who had won a Nobel Prize two years earlier for his studies of digestion, stumbled onto one of the most important principles of learning and behavior. Pavlov was investigating the digestive process in dogs by putting food in their mouths and measuring the flow of saliva. He found that after repeated testing, the dogs would salivate in anticipation of the food, even before he put it in their mouths. He soon discovered that if he rang a bell just before the food was presented each time, the dogs would eventually salivate at the mere sound of the bell. Pavlov had discovered a basic form of learning called classical conditioning (also referred to as Pavlovian conditioning) in which an organism comes to associate one stimulus with another. Later research showed that this basic process can account for how people form certain preferences and fears.
American psychologist John B. Watson believed psychologists should study observable behavior instead of speculating about a person's inner thoughts and feelings. Watson's approach, which he termed behaviorism, dominated psychology for the first half of the 20th century.
Although Thorndike and Pavlov set the stage for behaviorism, it was not until 1913 that a psychologist set forward a clear vision for behaviorist psychology. In that year John Watson, a well-known animal psychologist at Johns Hopkins University, published a landmark paper entitled “Psychology as the Behaviorist Views It.” Watson's goal was nothing less than a complete redefinition of psychology. “Psychology as the behaviorist views it,” Watson wrote, “is a purely objective experimental branch of natural science. Its theoretical goal is the prediction and control of behavior.” Watson narrowly defined psychology as the scientific study of behavior. He urged his colleagues to abandon both introspection and speculative theories about the unconscious. Instead he stressed the importance of observing and quantifying behavior. In light of Darwin's theory of evolution, he also advocated the use of animals in psychological research, convinced that the principles of behavior would generalize across all species.
American psychologist B. F. Skinner became famous for his pioneering research on learning and behavior. During his 60-year career, Skinner discovered important principles of operant conditioning, a type of learning that involves reinforcement and punishment. A strict behaviorist, Skinner believed that operant conditioning could explain even the most complex of human behaviors.
Many American psychologists were quick to adopt behaviorism, and animal laboratories were set up all over the country. Aiming to predict and control behavior, the behaviorists' strategy was to vary a stimulus in the environment and observe an organism's response. They saw no need to speculate about mental processes inside the head. For example, Watson argued that thinking was simply talking to oneself silently. He believed that thinking could be studied by recording the movement of certain muscles in the throat.
American psychologist B. F. Skinner designed an apparatus, now called a Skinner box, that allowed him to formulate important principles of animal learning. An animal placed inside the box is rewarded with a small bit of food each time it makes the desired response, such as pressing a lever or pecking a key. A device outside the box records the animal's responses.
The most forceful leader of behaviorism was B. F. Skinner, an American psychologist who began studying animal learning in the 1930s. Skinner coined the term reinforcement and invented a new research apparatus called the Skinner box for use in testing animals. Based on his experiments with rats and pigeons, Skinner identified a number of basic principles of learning. He claimed that these principles explained not only the behavior of laboratory animals, but also accounted for how human beings learn new behaviors or change existing behaviors. He concluded that nearly all behavior is shaped by complex patterns of reinforcement in a person's environment, a process that he called operant conditioning (also referred to as instrumental conditioning). Skinner's views on the causes of human behavior made him one of the most famous and controversial psychologists of the 20th century.
Operant conditioning, pioneered by American psychologist B. F. Skinner, is the process of shaping behavior by means of reinforcement and punishment. This illustration shows how a mouse can learn to maneuver through a maze. The mouse is rewarded with food when it reaches the first turn in the maze (A). Once the first behavior becomes ingrained, the mouse is not rewarded until it makes the second turn (B). After many times through the maze, the mouse must reach the end of the maze to receive its reward (C).
Skinner and others applied his findings to modify behavior in the workplace, the classroom, the clinic, and other settings. In World War II (1939-1945), for example, he worked for the U.S. government on a top-secret project in which he trained pigeons to guide an armed glider plane toward enemy ships. He also invented the first teaching machine, which allowed students to learn at their own pace by solving a series of problems and receiving immediate feedback. In his popular book Walden Two (1948), Skinner presented his vision of a behaviorist utopia, in which socially adaptive behaviors are maintained by rewards, or positive reinforcements. Throughout his career, Skinner held firm to his belief that psychologists should focus on the prediction and control of behavior.
Faced with a choice between psychoanalysis and behaviorism, many psychologists in the 1950s and 1960s sensed a void in psychology's conception of human nature. Freud had drawn attention to the darker forces of the unconscious, and Skinner was interested only in the effects of reinforcement on observable behavior. Humanistic psychology was born out of a desire to understand the conscious mind, free will, human dignity, and the capacity for self-reflection and growth. An alternative to psychoanalysis and behaviorism, humanistic psychology became known as “the third force.”
The humanistic movement was led by American psychologists Carl Rogers and Abraham Maslow. According to Rogers, all humans are born with a drive to achieve their full capacity and to behave in ways that are consistent with their true selves. Rogers, a psychotherapist, developed person-centered therapy, a nonjudgmental, nondirective approach that helped clients clarify their sense of who they are in an effort to facilitate their own healing process. At about the same time, Maslow theorized that all people are motivated to fulfill a hierarchy of needs. At the bottom of the hierarchy are basic physiological needs, such as hunger, thirst, and sleep. Further up the hierarchy are needs for safety and security, needs for belonging and love, and esteem-related needs for status and achievement. Once these needs are met, Maslow believed, people strive for self-actualization, the ultimate state of personal fulfillment. As Maslow put it: “A musician must make music, an artist must paint, a poet must write, if he is ultimately to be at peace with himself. What a man can be, he must be.”
Swiss psychologist Jean Piaget based his early theories of intellectual development on his questioning and observation of his own children. From these and later studies, Piaget concluded that all children pass through a predictable series of cognitive stages.
From the 1920s through the 1960s, behaviorism dominated psychology in the United States. Eventually, however, psychologists began to move away from strict behaviorism. Many became increasingly interested in cognition, a term used to describe all the mental processes involved in acquiring, storing, and using knowledge. Such processes include perception, memory, thinking, problem solving, imagining, and language. This shift in emphasis toward cognition had such a profound influence on psychology that it has often been called the cognitive revolution. The psychological study of cognition became known as cognitive psychology.
One reason for psychologists' renewed interest in mental processes was the invention of the computer, which provided an intriguing metaphor for the human mind. The hardware of the computer was likened to the brain, and computer programs provided a step-by-step model of how information from the environment is input, stored, and retrieved to produce a response. Based on the computer metaphor, psychologists began to formulate information-processing models of human thought and behavior.
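The information-processing metaphor can be made concrete with a toy sketch (the class and method names here are invented for illustration): input is encoded, held in storage, and later retrieved to produce a response.

```python
# Toy information-processing model: encode -> store -> retrieve.

class InformationProcessor:
    def __init__(self):
        self.store = {}                      # storage stage ("memory")

    def encode(self, stimulus, meaning):
        self.store[stimulus] = meaning       # input/encoding stage

    def respond(self, stimulus):
        # Retrieval stage: the output depends on what was stored.
        return self.store.get(stimulus, "no stored response")

mind = InformationProcessor()
mind.encode("red light", "stop")
print(mind.respond("red light"))    # -> stop
print(mind.respond("green light"))  # -> no stored response
```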
In the 1950s American linguist Noam Chomsky proposed that the human brain is especially constructed to detect and reproduce language and that the ability to form and understand language is innate to all human beings. According to Chomsky, young children learn and apply grammatical rules and vocabulary as they are exposed to them and do not require initial formal teaching.
The pioneering work of Swiss psychologist Jean Piaget also inspired psychologists to study cognition. During the 1920s, while administering intelligence tests in schools, Piaget became interested in how children think. He designed various tasks and interview questions to reveal how children of different ages reason about time, nature, numbers, causality, morality, and other concepts. Based on his many studies, Piaget theorized that from infancy to adolescence, children advance through a predictable series of cognitive stages.
The cognitive revolution also gained momentum from developments in the study of language. Behaviorist B. F. Skinner had claimed that language is acquired according to the laws of operant conditioning, in much the same way that rats learn to press a bar for food pellets. In 1959, however, American linguist Noam Chomsky charged that Skinner's account of language development was wrong. Chomsky noted that children all over the world start to speak at roughly the same age and proceed through roughly the same stages without being explicitly taught or rewarded for the effort. According to Chomsky, the human capacity for learning language is innate. He theorized that the human brain is “hardwired” for language as a product of evolution. By pointing to the primary importance of biological dispositions in the development of language, Chomsky's theory dealt a serious blow to the behaviorist assumption that all human behaviors are formed and maintained by reinforcement.
Before psychology became established in science, it was popularly associated with extrasensory perception (ESP) and other paranormal phenomena (phenomena beyond the laws of science). Today, these topics lie outside the traditional scope of scientific psychology and fall within the domain of parapsychology. Psychologists note that thousands of studies have failed to demonstrate the existence of paranormal phenomena.
Grounded in the conviction that mind and behavior must be studied using statistical and scientific methods, psychology has become a highly respected and socially useful discipline. Psychologists now study important and sensitive topics such as the similarities and differences between men and women, racial and ethnic diversity, sexual orientation, marriage and divorce, abortion, adoption, intelligence testing, sleep and sleep disorders, obesity and dieting, and the effects of psychoactive drugs such as methylphenidate (Ritalin) and fluoxetine (Prozac).
In the last few decades, researchers have made significant breakthroughs in understanding the brain, mental processes, and behavior. This section of the article provides examples of contemporary research in psychology: the plasticity of the brain and nervous system, the nature of consciousness, memory distortions, competence and rationality, genetic influences on behavior, infancy, the nature of intelligence, human motivation, prejudice and discrimination, the benefits of psychotherapy, and the psychological influences on the immune system.
Psychologists once believed that the neural circuits of the adult brain and nervous system were fully developed and no longer subject to change. Then in the 1980s and 1990s a series of provocative experiments showed that the adult brain has flexibility, or plasticity: a capacity to change as a result of usage and experience.
These experiments showed that adult rats flooded with visual stimulation formed new neural connections in the brain's visual cortex, where visual signals are interpreted. Likewise, those trained to run an obstacle course formed new connections in the cerebellum, where balance and motor skills are coordinated. Similar results with birds, mice, and monkeys have confirmed the point: experience can stimulate the growth of new connections and mold the brain's neural architecture.
Once the brain reaches maturity, the number of neurons does not increase, and any neurons that are damaged are permanently disabled. But the plasticity of the brain can greatly benefit people with damage to the brain and nervous system. Organisms can compensate for loss by strengthening old neural connections and sprouting new ones. That is why people who suffer strokes are often able to recover their lost speech and motor abilities.
In 1860 German physicist Gustav Fechner theorized that if the human brain were divided into right and left halves, each side would have its own stream of consciousness. Modern medicine has actually allowed scientists to investigate this hypothesis. People who suffer from life-threatening epileptic seizures sometimes undergo a radical surgery that severs the corpus callosum, a bridge of nerve tissue that connects the right and left hemispheres of the brain. After the surgery, the two hemispheres can no longer communicate with each other.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James's work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.
Beginning in the 1960s American neurologist Roger Sperry and others tested such split-brain patients in carefully designed experiments. The researchers found that the hemispheres of these patients seemed to function independently, almost as if the subjects had two brains. In addition, they discovered that the left hemisphere was capable of speech and language, but not the right hemisphere. For example, when split-brain patients saw the image of an object flashed in their left visual field (thus sending the visual information to the right hemisphere), they were incapable of naming or describing the object. Yet they could easily point to the correct object with their left hand (which is controlled by the right hemisphere). As Sperry's colleague Michael Gazzaniga stated, “Each half brain seemed to work and function outside of the conscious realm of the other.”
Other psychologists interested in consciousness have examined how people are influenced without their awareness. For example, research has demonstrated that under certain conditions in the laboratory, people can be fleetingly affected by subliminal stimuli, sensory information presented so rapidly or faintly that it falls below the threshold of awareness. (Note, however, that scientists have discredited claims that people can be importantly influenced by subliminal messages in advertising, rock music, or other media.) Other evidence for influence without awareness comes from studies of people with a type of amnesia that prevents them from forming new memories. In experiments, these subjects are unable to recognize words they previously viewed in a list, but they are more likely to use those words later in an unrelated task. In fact, memory without awareness is normal, as when people come up with an idea they think is original, only later to realize that they had inadvertently borrowed it from another source.
Cognitive psychologists have often likened human memory to a computer that encodes, stores, and retrieves information. It is now clear, however, that remembering is an active process and that people construct and alter memories according to their beliefs, wishes, needs, and information received from outside sources.
Without realizing it, people sometimes create memories that are false. In one study, for example, subjects watched a slide show depicting a car accident. They saw either a “STOP” sign or a “YIELD” sign in the slides, but afterward they were asked a question about the accident that implied the presence of the other sign. Influenced by this suggestion, many subjects recalled the wrong traffic sign. In another study, people who heard a list of sleep-related words (bed, yawn) or music-related words (jazz, instrument) were often convinced moments later that they had also heard the words sleep or music, words that fit the category but were not on the list. In a third study, researchers asked college students to recall their high-school grades. Then the researchers checked those memories against the students' actual transcripts. The students recalled most grades correctly, but most of the errors inflated their grades, particularly when the actual grades were low.

When scientists distinguish between human beings and other animals, they point to our larger cerebral cortex (the outer part of the brain) and to our superior intellect, as seen in the abilities to acquire and store large amounts of information, solve problems, and communicate through the use of language.
In recent years, however, those studying human cognition have found that people are often less than rational and accurate in their performance. Some researchers have found that people are prone to forgetting, and worse, that memories of past events are often highly distorted. Others have observed that people often violate the rules of logic and probability when reasoning about real events, as when gamblers overestimate the odds of winning in games of chance. One reason for these mistakes is that we commonly rely on cognitive heuristics, mental shortcuts that allow us to make judgments that are quick but often in error. To understand how heuristics can lead to mistaken assumptions, imagine offering people a lottery ticket containing six numbers out of a pool of the numbers 1 through 40. If given a choice between the tickets 6-39-2-10-24-30 or 1-2-3-4-5-6, most people select the first ticket, because it has the appearance of randomness. Yet out of the 3,838,380 possible winning combinations, both sequences are equally likely.
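The arithmetic behind the lottery example can be checked directly (a quick verification of the figure quoted above, not part of the original studies):

```python
from math import comb

# Number of distinct 6-number tickets drawn from the pool 1..40.
print(comb(40, 6))  # 3838380, the figure cited in the text

# Each specific ticket is exactly one of those outcomes, so the
# "random-looking" ticket and 1-2-3-4-5-6 have the same chance.
print(1 / comb(40, 6))  # ~2.6e-07 for either ticket
```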
One of the oldest debates in psychology, and in philosophy, concerns whether individual human traits and abilities are predetermined from birth or due to one's upbringing and experiences. This debate is often termed the nature-nurture debate. A strict genetic (nature) position states that people are predisposed to become sociable, smart, cheerful, or depressed according to their genetic blueprint. In contrast, a strict environmental (nurture) position says that people are shaped by parents, peers, cultural institutions, and life experiences.
Research shows that the more genetically related a person is to someone with schizophrenia, the greater the risk that person has of developing the illness. For example, children of one parent with schizophrenia have a 13 percent chance of developing the illness, whereas children of two parents with schizophrenia have a 46 percent chance of developing the disorder.
Researchers can estimate the role of genetic factors in two ways: (1) twin studies and (2) adoption studies. Twin studies compare identical twins with fraternal twins of the same sex. If identical twins (who share all the same genes) are more similar to each other on a given trait than are same-sex fraternal twins (who share only about half of the same genes), then genetic factors are assumed to influence the trait. Other studies compare identical twins who are raised together with identical twins who are separated at birth and raised in different families. If the twins raised together are more similar to each other than the twins raised apart, childhood experiences are presumed to influence the trait. Sometimes researchers conduct adoption studies, in which they compare adopted children to their biological and adoptive parents. If these children display traits that resemble those of their biological relatives more than their adoptive relatives, genetic factors are assumed to play a role in the trait.
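One standard first-pass way of quantifying the twin-study logic is Falconer's formula, a textbook estimate not mentioned in the article itself: heritability is approximated as twice the gap between identical-twin and fraternal-twin correlations on the trait.

```latex
h^2 \approx 2\,(r_{MZ} - r_{DZ})
```

Here r_MZ and r_DZ are the trait correlations within identical (monozygotic) and same-sex fraternal (dizygotic) twin pairs. With illustrative values of 0.85 and 0.60, the estimate would be h² ≈ 2(0.85 − 0.60) = 0.50.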
In recent years, several twin and adoption studies have shown that genetic factors play a role in the development of intellectual abilities, temperament and personality, vocational interests, and various psychological disorders. Interestingly, however, this same research indicates that at least 50 percent of the variation in these characteristics within the population is attributable to factors in the environment. Today, most researchers agree that psychological characteristics spring from a combination of the forces of nature and nurture.
Helpless to survive on their own, newborn babies nevertheless possess a remarkable range of skills that aid in their survival. Newborns can see, hear, taste, smell, and feel pain; vision is the least developed sense at birth but improves rapidly in the first months. Crying communicates their need for food, comfort, or stimulation. Newborns also have reflexes for sucking, swallowing, grasping, and turning their head in search of their mother's nipple.
In 1890 William James described the newborn's experience as “one great blooming, buzzing confusion.” However, with the aid of sophisticated research methods, psychologists have discovered that infants are smarter than was previously known.
A period of dramatic growth, infancy lasts from birth to around 18 months of age. Researchers have found that infants are born with certain abilities designed to aid their survival. For example, newborns show a distinct preference for human faces over other visual stimuli.
To learn about the perceptual world of infants, researchers measure infants' head movements, eye movements, facial expressions, brain waves, heart rate, and respiration. Using these indicators, psychologists have found that shortly after birth, infants show a distinct preference for the human face over other visual stimuli. Also suggesting that newborns are tuned into the face as a social object is the fact that within 72 hours of birth, they can mimic adults who purse the lips or stick out the tongue, a rudimentary form of imitation. Newborns can distinguish between their mother's voice and that of another woman. And at two weeks old, nursing infants are more attracted to the body odor of their mother and other breast-feeding females than to that of other women. Taken together, these findings show that infants are equipped at birth with certain senses and reflexes designed to aid their survival.
In 1905 French psychologist Alfred Binet and colleague Théodore Simon devised one of the first tests of general intelligence. The test sought to identify French children likely to have difficulty in school so that they could receive special education. An American version of Binet's test, the Stanford-Binet Intelligence Scale, is still used today.
In 1905 French psychologist Alfred Binet devised the first major intelligence test for the purpose of identifying slow learners in school. In doing so, Binet assumed that intelligence could be measured as a general intellectual capacity and summarized in a numerical score, or intelligence quotient (IQ). Consistently, testing has revealed that although each of us is more skilled in some areas than in others, a general intelligence underlies our more specific abilities.
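The scoring idea that became attached to Binet's test, the classic ratio IQ, expresses the score as mental age relative to chronological age (a standard historical formula, given here as a gloss rather than Binet's own notation):

```latex
\mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100
```

So a ten-year-old who performs at the level of a typical twelve-year-old scores 120. Modern tests instead use deviation scores relative to age norms, but the label survives.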
Intelligence tests often play a decisive role in determining whether a person is admitted to college, graduate school, or professional school. Thousands of people take intelligence tests every year, but many psychologists and education experts question whether these tests are an accurate way of measuring who will succeed or fail in school and later in life. In this 1998 Scientific American article, psychology and education professor Robert J. Sternberg of Yale University in New Haven, Connecticut, presents evidence against conventional intelligence tests and proposes several ways to improve testing.
Today, many psychologists believe that there is more than one type of intelligence. American psychologist Howard Gardner proposed the existence of multiple intelligences, each linked to a separate system within the brain. He theorized that there are seven types of intelligence: linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, and intrapersonal. American psychologist Robert Sternberg suggested a different model of intelligence, consisting of three components: analytic (“school smarts,” as measured in academic tests), creative (a capacity for insight), and practical (“street smarts,” or the ability to size up and adapt to situations).

Psychologists from all branches of the discipline study the topic of motivation, an inner state that moves an organism toward the fulfillment of some goal. Over the years, different theories of motivation have been proposed. Some theories state that people are motivated by the need to satisfy physiological needs, whereas others state that people seek to maintain an optimum level of bodily arousal (not too little and not too much). Still other theories focus on the ways in which people respond to external incentives such as money, grades in school, and recognition. Motivation researchers study a wide range of topics, including hunger and obesity, sexual desire, the effects of reward and punishment, and the needs for power, achievement, social acceptance, love, and self-esteem.
In 1954 American psychologist Abraham Maslow proposed that all people are motivated to fulfill a hierarchical pyramid of needs. At the bottom of Maslow's pyramid are needs essential to survival, such as the needs for food, water, and sleep. The need for safety follows these physiological needs. According to Maslow, higher-level needs become important to us only after our more basic needs are satisfied. These higher needs include the need for love and belongingness, the need for esteem, and the need for self-actualization (in Maslow's theory, a state in which people realize their greatest potential).
Inferential role semantics is the view that the role of a sentence in inference gives a more important key to its meaning than its “external” relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. The position is also known as functional role semantics, procedural semantics, or conceptual role semantics. Such views bear some relation to the coherence theory of truth, and suffer from the same suspicion that they divorce meaning from any clear association with things in the world.
The paradox of analysis rests upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are one and the same concept. These assumptions are implicit in the work of the British philosopher George Edward Moore, but some of Moore's remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expression used to express it. Moore suggests that he thinks a solution of this sort is bound to be right, but he fails to state one, because he cannot see any way in which the analysis can be partly about the expression.
Elsewhere, a paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there are hidden flaws in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Famous families of paradoxes include the semantic paradoxes and Zeno's paradoxes. At the beginning of the 20th century, Russell's paradox and the other paradoxes of set theory forced the investigation of the foundations of set theory, while the Sorites paradox has led to the investigation of the semantics of vagueness and fuzzy logic. Many paradoxes go under their own titles. One such puzzle arises when someone says “p, but I do not believe that p.” What is said is not contradictory, since (for many instances of p) both parts of it could be true. But the person nevertheless violates one presupposition of normal practice, namely that you assert something only if you believe it: by adding that you do not believe what you just said, you undo the natural significance of the original act of saying it.
Furthermore, the moral philosopher and epistemologist Bernard Bolzano (1781-1848) based his logical work on a strong sense of there being an ontological underpinning of science and epistemology, lying in a theory of the objective entailments making up the structure of scientific theories. His ability to challenge received wisdom and come up with startling new ideas came as a Christian philosopher rather than from any position of mathematical authority. On considerations of infinity, Bolzano's most significant work was Paradoxien des Unendlichen, written in retirement and translated into English as Paradoxes of the Infinite. Here Bolzano considered directly the points that had concerned Galileo: the conflicting results that seem to emerge when infinity is studied. “Certainly most of the paradoxical statements encountered in the mathematical domain . . . are propositions which either immediately contain the idea of the infinite, or at least in some way or other depend upon that idea for their attempted proof.”
Continuing, Bolzano looks at two possible approaches to infinity. One is simply the case of setting up a sequence of numbers, such as the whole numbers, and saying that since it cannot conceivably have a last term, it is inherently infinite, not finite. It is easy enough to show that the whole numbers do not have a point at which they stop: give a name to the last number, whatever it might be, and call it “ultimate.” Then what's wrong with ultimate + 1? Why is that not a whole number?
The second approach to infinity Bolzano ascribes in Paradoxes of the Infinite to “some philosophers.” The German philosopher Georg Wilhelm Friedrich Hegel (1770-1831) had called this first conception the “bad infinity”: a merely potential infinity that reaches toward the absolute but never attains it. In Paradoxes of the Infinite, Bolzano describes this form of potential infinity as “a variable quantity knowing no limit to its growth (a definition adopted, even by many mathematicians) . . . always growing into the infinite and never reaching it.” As far as Hegel and his colleagues were concerned, there was no need for a real infinity beyond this unreachable absolute. Instead we deal with a variable quantity that is as big as we need it to be, or, often in calculus, as small as we need it to be, without ever reaching the absolute, ultimate, truly infinite.
Bolzano argues, though, that there is something else, an infinity that does not have this “whatever you need it to be” elasticity. In fact a truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the adduced example it is in fact not variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any pre-assigned (finite) quantity, nevertheless to remain at all times merely finite, which holds in particular of every numerical quantity 1, 2, 3, 4, 5.
In other words, for Bolzano there could be a true infinity that was not merely a variable “something” that was only ever bigger than anything you might specify. Such a true infinity was the result of joining two points together and extending that line in both directions without stopping. And what is more, he could thereby separate off the demands of calculus, using a finite quantity without ever bothering with the slippery potential infinity. Here was both a deeper understanding of the nature of infinity and the basis on which his “safe,” infinity-free calculus could be built.
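Bolzano's contrast can be glossed in modern notation (the symbolism is ours, not his):

```latex
\underbrace{\forall n \;\exists m \;(m > n)}_{\text{potential infinity: any quantity can be exceeded}}
\qquad \text{vs.} \qquad
\underbrace{\mathbb{N} = \{0, 1, 2, 3, \dots\}}_{\text{actual infinity: a completed totality}}
```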
This use of the inexhaustible follows on directly from Bolzano's criticism of the way that ∞ was used: as a variable something that would be bigger than anything you could specify, but never quite reached the true, absolute infinity. In Paradoxes of the Infinite Bolzano points out that it is possible for a quantity merely capable of becoming larger than any pre-assigned (finite) quantity nevertheless to remain at all times merely finite.
Bolzano intended this as a criticism of the way infinity was treated, but Professor Jacquette sees it instead as a way of making use of practical applications like calculus without the need for weasel words about infinity.
By replacing ∞ with ¤ we do away with one of the most common requirements for infinity, but is there anything left that maps onto the real world? Can we confine infinity to that pure mathematical other world, where anything, however unreal, can be constructed, and forget about it elsewhere? Surprisingly, this seems to have been the view, at least at one point in time, even of the German mathematician and founder of set theory Georg Cantor (1845-1918) himself, who commented in 1883 that only the finite numbers are real.
Keeping within the lines of reason, both the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-1930) and the Italian mathematician Giuseppe Peano (1858-1932) distinguished the logical paradoxes from those that depend upon notions of reference or truth (semantic notions). On the logical side stand the postulates justifying mathematical induction, which ensure that a numerical series is closed, in the sense that nothing but zero and its successors can be numbers; any series satisfying such a set of axioms can be conceived as the sequence of natural numbers. Candidates from set theory include the Zermelo numbers, where the empty set is zero and the successor of each number is its unit set, and the von Neumann numbers, where each number is the set of all smaller numbers. A similar and equally fundamental complementarity exists in the relation between zero and infinity. Although the fullness of infinity is logically antithetical to the emptiness of zero, infinity can be approached from zero by a simple mathematical operation: dividing a nonzero number by quantities ever closer to zero yields results that grow without bound, while the multiplication of any number by zero is zero.
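The two set-theoretic encodings of number mentioned above can be built concretely. The sketch below is illustrative only, using Python frozensets as stand-ins for pure sets:

```python
def zermelo(n):
    """Zermelo numerals: 0 is the empty set; successor(x) = {x}."""
    num = frozenset()
    for _ in range(n):
        num = frozenset([num])        # wrap in a unit set
    return num

def von_neumann(n):
    """von Neumann numerals: each number is the set of all smaller numbers."""
    num = frozenset()
    for _ in range(n):
        num = num | frozenset([num])  # successor(x) = x ∪ {x}
    return num

# Zermelo 3 is a nested singleton {{{{}}}}; von Neumann 3 is {0, 1, 2},
# so a von Neumann numeral's size equals the number it encodes.
assert len(zermelo(3)) == 1
assert len(von_neumann(3)) == 3
```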
Set theory was developed by the German mathematician and logician Georg Cantor. From 1878 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline. A set, as he defined it, is a collection of definite and distinguishable objects of thought or perception conceived as a whole.
Cantor attempted to prove that the process of counting and the definition of integers could be placed on a solid mathematical foundation. His method was repeatedly to place the elements of one set into “one-to-one” correspondence with those of another. In the case of the integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and, therefore, that the set of all integers was equal in size to the set of all even numbers.
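In modern notation (a gloss on Cantor's pairing, not his own symbolism), the correspondence is the bijection

```latex
f : \mathbb{Z}^{+} \to \{2, 4, 6, \dots\}, \qquad f(n) = 2n
```

Every positive integer is paired with exactly one even number and vice versa, so the two sets have the same cardinality even though the even numbers form a proper subset of the integers.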
Amazingly, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. After failed attempts to save the classical view of the logical foundations and internal consistency of mathematical systems, it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.
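The engine behind this hierarchy is Cantor's theorem, stated here in its standard modern form (a gloss, not a quotation): no function maps a set onto its power set, so the power set is always strictly larger.

```latex
\text{Given } f : S \to \mathcal{P}(S), \text{ let } D = \{\, x \in S : x \notin f(x) \,\}.
```

If D = f(y) for some y in S, then y ∈ D if and only if y ∉ f(y) = D, a contradiction; hence f is not onto, |S| < |𝒫(S)|, and iterating the power-set operation yields ever greater infinities.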
In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a “redundancy theory of truth,” which he combined with radical views of the function of many kinds of propositions: neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy.
A Ramsey sentence is generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., “quark,” replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all theoretical terms, the sentence gives the “topic-neutral” structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then by the Löwenheim-Skolem theorem the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
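Schematically (an illustrative gloss): if T(quark) abbreviates the conjunction of everything the theory affirms using the term “quark,” the Ramsey sentence replaces the term with a bound variable:

```latex
T(\text{quark}) \;\longrightarrow\; \exists X\, T(X)
```

The result asserts only that something plays the quark role, leaving open what, if anything, that something is.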
The most celebrated of the paradoxes in the foundations of set theory was discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is itself an abstract object. Others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not, and if it is not, then it is.
The paradox is structurally similar to easier examples, such as the paradox of the barber: a village has a barber in it who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not, but if he does not shave himself, then he does. The paradox is actually just a proof that there is no such barber, or in other words, that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.
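In symbols (a standard rendering of the class Russell defines):

```latex
R = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R
```

A contradiction follows either way, so no such class exists; the barber is the same schema with “shaves” in place of set membership.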
The French mathematician and philosopher Jules Henri Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily observed. The principle, as put forward by Poincaré and Russell, was that in order to solve the logical and semantic paradoxes it would be necessary to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole. The thought is that such definitions involve a vicious circle, whereas a legitimate definition involves no such failure. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. The cosmological argument, for instance, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.
The investigation of questions that arise from reflection upon the sciences and scientific inquiry is called the philosophy of science. Such questions include: what is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that in a general logic of scientific discovery a justification of method might be found. However, many now take an interest in a more historical, contextual and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the then-available methods and paradigms, as well as the social context.
In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of biology, mathematics and physics.
Intuition is the immediate awareness either of a truth or of an object of apprehension, such as a concept. Awareness of this kind has an important place in philosophical accounts of the sources of our knowledge; in Kant, pure intuition covers the sensible apprehension of things, being that which structures sensation into the experience of things ordered in space and time.
Natural law is the view of the status of law and morality especially associated with St. Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen as true by “natural light” or reason, and (in religious versions of the theory) that express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will. The Dutch philosopher Hugo Grotius (1583-1645) takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-1694) takes the opposite view, thereby facing one horn of the Euthyphro dilemma: whatever the source of authority is supposed to be, do we care about the good because it is good, or do we just call good the things that we care about? The theory may take a strong form, in which it is claimed that various facts entail values, or a weaker form, in which it confines itself to holding that reason by itself is capable of discerning moral requirements that are binding on all human beings regardless of their desires.
Although the morality of a people and its ethics amount to the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of “moral” considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, frequently disguising the great complexity of practical reasoning. For Kant the moral law is a binding requirement of the categorical imperative, and the question remains whether these approaches are equivalent at some deep level. Kant's own applications of the notion are not always convincing; one cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something “unconditional” or “necessary,” such as the voice of reason.
A duty concerns the weighing of that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties arise in virtue of specific relations, such as being the child of someone, or having made someone a promise. Duty or obligation is the primary concept of “deontological” approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. On another way of drawing the distinction, perfect duties are those that are correlative with rights in others; imperfect duties are not. Problems with the concept include the way in which duties need to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life, in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.
On the most generally accepted account of the externalism/internalism distinction, a theory of justification is internalist if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective, and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.
The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.
The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase “cognitively accessible” suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would seem to require the strong interpretation.
Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justified belief is required to cohere and the coherence relations themselves are reflectively accessible.
It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).
The most prominent recent externalist views have been versions of reliabilism, whose requirement for justification is roughly that the belief be produced in a way or via a process that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.
Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, is the suggestion that the reliability of a cognitive process is to be assessed in “normal” possible worlds, i.e., in possible worlds that are the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The issue that remains is whether there is an adequate rationale for this construal of reliabilism, or whether the reply is merely an ad hoc presupposition in disguise.
The correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Considering the point in application, once again, to reliabilism: the claim is that a person who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to “bite the bullet” and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the question remains whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that the externalist is committed to rejecting.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction does exist) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?
A rather different use of the terms “internalism” and “externalism” has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. Views that combine internal and external elements are standardly classified as externalist.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as “direct reference” theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc., not just on what is going on internally in his mind or brain.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts “from the inside,” simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors, which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification, for if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else. But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Inference is the process of moving from acceptance of some propositions to acceptance of others. The goal of logic and classical epistemology is to codify kinds of inference and to provide principles for separating good inferences from bad ones.
The regress of inference begins with the question posed in Lewis Carroll's “What the Tortoise Said to Achilles” (Mind, 1895), a Zeno-like problem of how a proof ever gets started. Suppose I have as premises (1) p and (2) p ➞ q. Can I infer q? Only, it seems, if I am sure of (3) (p & (p ➞ q)) ➞ q. Can I then infer q? Only, it seems, if I am sure of (4) (p & (p ➞ q) & ((p & (p ➞ q)) ➞ q)) ➞ q. For each new premise (N) I need a further premise (N + 1) telling me that the set so far implies q, and the regress never stops. The usual solution is to treat a system as containing not only axioms but also rules of inference, allowing movement from the axioms. The rule of modus ponens allows us to pass from the first two premises to q. Carroll's puzzle shows that it is essential to distinguish these two theoretical categories, although there may be choice about which theses to put in which category.
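Carroll's moral can be made concrete in a few lines of code. In the sketch below (an illustration of the point, not anything from the text), modus ponens is implemented as a procedure that operates on the premises rather than as a further premise, which is exactly what stops the regress; the formula encoding and function names are invented for the example.

```python
# Modus ponens as a *rule of inference* (a procedure), not another premise.
# Atomic formulas are strings; a conditional p -> q is the tuple ("->", p, q).

def implies(p, q):
    return ("->", p, q)

def close_under_modus_ponens(premises):
    """From p and p -> q, derive q; repeat until nothing new is derivable."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            if isinstance(f, tuple) and f[0] == "->" and f[1] in derived and f[2] not in derived:
                derived.add(f[2])  # the rule licenses the step; no premise (3) is needed
                changed = True
    return derived

premises = {"p", implies("p", "q")}
print("q" in close_under_modus_ponens(premises))  # True
```

Because the rule lives at a different level from the premises, no formula like (3) or (4) ever has to be added to the premise set.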
Inference to the best explanation was first formulated under that name by the Princeton philosopher Gilbert Harman. The idea is that when we have a best explanation of some phenomenon, we are entitled to repose confidence in it simply on that account. Sometimes thought to be the linchpin of scientific method, the principle is not easy to formulate and has come under attack, notably on the ground that our best current explanation of something may only be the best of a bad lot. There are also cases in which the best explanation is still not all that convincing, so considerations other than pure explanatory success seem to play a role.
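As a loose caricature (mine, not Harman's), inference to the best explanation can be modeled as choosing the highest-scoring hypothesis; the “best of a bad lot” objection then amounts to noting that an argmax by itself says nothing about whether the winner is any good. The hypotheses, scores, and adequacy threshold below are all invented for illustration.

```python
# Toy model of inference to the best explanation (IBE).

def best_explanation(scores, adequacy=0.7):
    """Pick the best-scoring hypothesis, flagging the 'best of a bad lot' case."""
    best = max(scores, key=scores.get)
    verdict = "acceptably good" if scores[best] >= adequacy else "best of a bad lot"
    return best, verdict

candidates = {"h1: the butler did it": 0.35, "h2: a burglary gone wrong": 0.40}
print(best_explanation(candidates))  # h2 wins, but only as the best of a bad lot
```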
The philosopher Bas van Fraassen (The Scientific Image, 1980) developed this further in his constructive empiricism, which divides the statements of science into observation statements and theory statements. It holds that the latter are capable of strict truth and falsity, but maintains that the appropriate attitude is not to believe them, but only to accept them at best as empirically adequate. The position is often regarded as a variety of pragmatism or instrumentalism, although more orthodox varieties of those positions deny that theoretical statements have truth-values. A related position was held by the German philosopher Hans Vaihinger (1852-1933), who, however, thought that we can be sure that theoretical statements are actually false. In other words, theories are useful fictions: they enable us to cope with what would otherwise be the unmanageable complexity of things. The doctrine bears some affinity to pragmatism, but differs in that Vaihinger held that our useful theories are nevertheless really false.
Lectures published as Pragmatism: A New Name for Some Old Ways of Thinking (1907) summed up James's original contributions to the theory called pragmatism, a term first used by the American logician C. S. Peirce. James generalized the pragmatic method, developing it from a critique of the logical basis of the sciences into a basis for the evaluation of all experience. He maintained that the meaning of ideas is found only in terms of their possible consequences. If consequences are lacking, ideas are meaningless. James contended that this is the method used by scientists to define their terms and to test their hypotheses, which, if meaningful, entail predictions. The hypotheses can be considered true if the predicted events take place. On the other hand, most metaphysical theories are meaningless, because they entail no testable predictions. Meaningful theories, James argued, are instruments for dealing with problems that arise in experience.
According to James's pragmatism, then, truth is that which works. One determines what works by testing propositions in experience. In so doing, one finds that certain propositions become true. As James put it, “Truth is something that happens to an idea in the process of its verification; it is not a static property.” This does not mean, however, that anything can be true. “The true is only the expedient in the way of our thinking, just as ‘the right’ is only the expedient in the way of our behaving,” James maintained. One cannot believe whatever one wants to believe, because such self-centered beliefs would not work out.
James was opposed to absolute metaphysical systems and argued against doctrines that describe reality as a unified, monolithic whole. In Essays in Radical Empiricism (1912), he argued for a pluralistic universe, denying that the world can be explained in terms of an absolute force or scheme that determines the interrelations of things and events. He held that the interrelations, whether they serve to hold things together or apart, are just as real as the things themselves.
By the end of his life, James had become world-famous as a philosopher and psychologist. In both fields, he functioned more as an originator of new thought than as a founder of dogmatic schools. His pragmatic philosophy was further developed by American philosopher John Dewey and others; later studies in physics by Albert Einstein made the theories of interrelations advanced by James appear prophetic.
In the philosophy of science, the task of the philosopher has often been posed as that of demarcating good, scientific theories from bad, unscientific ones. The accredited criterion is falsifiability: the property of a statement or theory that it is capable of being refuted by experience, as opposed to unfalsifiable doctrines, notably psychoanalysis and historical materialism. For the philosopher of science Karl Raimund Popper (1902-1994), it can be a positive virtue in a scientific theory that it is bold, conjectural, and goes beyond the evidence, but it must remain capable of facing possible refutation. If every way that things might turn out is compatible with a theory, then it is not a scientific theory but, for instance, an ideology or article of faith. Popper argued that the central virtue of science, as opposed to pseudo-science, is not that it puts forward hypotheses that are confirmed by evidence, but that its hypotheses genuinely face the possibility of test and rejection by the evidence gathered. Critics object that falsificationism gives no account of the extent to which it is rational to rely upon scientific theories, and that the actual picture of acceptance and rejection of scientific hypotheses is more complex than Popper suggests.
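Popper's demarcation criterion lends itself to a small formal sketch (again mine, not Popper's): model a theory as a predicate over possible observations, and call it falsifiable just in case some possible observation would refute it. The observation space and example theories below are assumed for the illustration.

```python
# Toy model of falsifiability: theories as predicates over possible observations.

possible_observations = range(-10, 11)   # a stand-in space of possible outcomes

bold_theory = lambda n: n >= 0           # rules out some outcomes: refutable
anything_goes = lambda n: True           # compatible with every outcome

def falsifiable(theory, observations):
    """A theory is falsifiable iff some possible observation would refute it."""
    return any(not theory(o) for o in observations)

print(falsifiable(bold_theory, possible_observations))    # True: scientific in Popper's sense
print(falsifiable(anything_goes, possible_observations))  # False: ideology or article of faith
```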
The “theory-theory” of folk psychology is the view that everyday attributions of intentions, beliefs, and meanings to other persons proceed via tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with “functionalism,” according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories is being stressed. Theories may be thought of as capable of yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language.
On a rival view, our understanding of others is not gained by the tacit use of a “theory” enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation “in their moccasins,” or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the “Verstehen” tradition associated with the German philosopher, literary critic, and historian Wilhelm Dilthey (1833-1911).
The language of thought hypothesis, especially associated with the American philosopher of mind J. A. Fodor, holds that mental processing occurs in a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is one element in the Chomskyan notion of an innate universal grammar, and it is a way of drawing the analogy between the workings of the brain or mind and those of a standard computer, since computer programs are linguistically complex sets of instructions whose execution explains the surface behaviour of the computer. As an explanation of ordinary language learning and competence the hypothesis has not found universal favour. It apparently explains ordinary representational powers only by invoking innate representations of the same sort, and it invites the image of the learning infant translating the language around him back into an innate language whose own powers are a mysterious biological given.
Materialism, in philosophy, is the doctrine that all existence is resolvable into matter or into an attribute or effect of matter. According to this doctrine, matter is the ultimate reality, and the phenomenon of consciousness is explained by physicochemical changes in the nervous system. Materialism is thus the antithesis of idealism, in which the supremacy of mind is affirmed and matter is characterized as an aspect or objectification of mind. Extreme or absolute materialism is known as materialistic monism. According to the mind-stuff theory of monism, as expounded by the British metaphysician W. K. Clifford in his Elements of Dynamic (1879-87), matter and mind are consubstantial, each being merely an aspect of the other. Philosophical materialism is ancient and has had numerous formulations. The early Greek philosophers subscribed to a variant of materialism known as hylozoism, according to which matter and life are identical. Related to hylozoism is the doctrine of hylotheism, in which matter is held to be divine, or the existence of God is disavowed apart from matter. Cosmological materialism is a term used to characterize a materialistic interpretation of the universe.
Antireligious materialism is motivated by a spirit of hostility toward the theological dogmas of organized religion, particularly those of Christianity. Notable among the exponents of antireligious materialism were the 18th-century French philosophers Denis Diderot, Paul Henri d'Holbach, and Julien Offroy de La Mettrie. According to historical materialism, as set forth in the writings of Karl Marx, Friedrich Engels, and Vladimir Ilich Lenin, in every historical epoch the prevailing economic system by which the necessities of life are produced determines the form of societal organization and the political, religious, ethical, intellectual, and artistic history of the epoch.
In modern times philosophical materialism has been largely influenced by the doctrine of evolution and may indeed be said to have been assimilated in the wider theory of evolution. Supporters of the theory of evolution go beyond the mere antithesis or atheism of materialism and seek positively to show how the diversities and differences in creation are the result of natural as opposed to supernatural processes.
The Philosophy of Mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.
Many fields other than philosophy share an interest in the nature of mind. In religion, the nature of mind is connected with various conceptions of the soul and the possibility of life after death. In many abstract theories of mind there is considerable overlap between philosophy and the science of psychology. Once part of philosophy, psychology split off and formed a separate branch of knowledge in the 19th century. While psychology uses scientific experiments to study mental states and events, philosophy uses reasoned argumentation and thought experiments in seeking to understand the concepts that underlie mental phenomena. Also influenced by philosophy of mind is the field of artificial intelligence (AI), which endeavors to develop computers that can mimic what the human mind can do. Cognitive science attempts to integrate the understanding of mind provided by philosophy, psychology, AI, and other disciplines. Finally, all of these fields benefit from the detailed understanding of the brain that has emerged through neuroscience in the late 20th century.
Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.
Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds-our sensations, thoughts, memories, desires, and fantasies-in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.
Certain mental phenomena, those we generally call experiences, have a subjective nature-that is, they have certain characteristics we become aware of when we reflect. For instance, there is “something it is like” to feel pain, or have an itch, or see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.
Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other thoughts or objects, which are represented as having certain properties or as being related to one another in a certain way. The belief that California is west of Nevada, for example, is about California and Nevada and represents the former as being west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.
A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.
Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.
Mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Minds are sometimes referred to as consciousness, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.
Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of French philosopher René Descartes in the 17th century. Descartes's work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things-bodies and minds-are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person's limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by external sources such as light, pressure, or sound, which in turn affect the brain, affecting mental states. Thus, the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed together.
In response to the mind-body problem arising from Descartes's theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.
Most philosophers of mind today are substance monists of a third type: They are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.
Materialists who are property monists believe that there is ultimately only one type of property, although they disagree on whether or not mental properties exist in material form. Some property monists, known as reductive materialists, hold that mental properties exist simply as a subset of relatively complex and nonbasic physical properties of the brain. Reductive materialists have the problem of explaining how the physical states of the brain can be inwardly accessible and have a subjective character, as mental states do. Other property monists, known as eliminative materialists, consider the whole category of mental properties to be a mistake. According to them, mental properties should be treated as discredited postulates of an outmoded theory. Eliminative materialism is difficult for most people to accept, since we seem to have direct knowledge of our own mental phenomena by introspection and because we use the general principles we understand about mental phenomena to predict and explain the behavior of others.
Philosophy of mind concerns itself with a number of specialized problems. In addition to the mind-body problem, important issues include those of personal identity, immortality, and artificial intelligence.
During much of Western history, the mind has been identified with the soul as presented in Christian theology. According to Christianity, the soul is the source of a person's identity and is usually regarded as immaterial; thus, it is capable of enduring after the death of the body. Descartes's conception of the mind as a separate, nonmaterial substance fits well with this understanding of the soul. In Descartes's view, we are aware of our bodies only as the cause of sensations and other mental phenomena. Consequently, our personal essence is composed more fundamentally of mind, and the preservation of the mind after death would constitute our continued existence.
The mind conceived by materialist forms of substance monism does not fit as neatly with this traditional concept of the soul. With materialism, once a physical body is destroyed, nothing enduring remains. Some philosophers think that a concept of personal identity can be constructed that permits the possibility of life after death without appealing to separate immaterial substances. Following in the tradition of 17th-century British philosopher John Locke, these philosophers propose that a person consists of a stream of mental events linked by memory. These links of memory, rather than a single underlying substance, provide the unity of a single consciousness through time. Immortality is conceivable if we think of these memory links as connecting a later consciousness in heaven with an earlier one on earth.
The field of artificial intelligence also raises interesting questions for the philosophy of mind. People have designed machines that mimic or model many aspects of human intelligence, and there are robots currently in use whose behavior is described in terms of goals, beliefs, and perceptions. Such machines are capable of behavior that, were it exhibited by a human being, would surely be taken to be free and creative. As an example, in 1996 an IBM computer named Deep Blue won a chess game against Russian world champion Garry Kasparov under international match regulations. Moreover, it is possible to design robots that have some sort of privileged access to their internal states. Philosophers disagree over whether such robots truly think or simply appear to think and whether such robots should be considered to be conscious.
Process philosophy is a speculative world-view which asserts that basic reality is constantly in a process of flux and change. Indeed, reality is identified with pure process. Concepts such as creativity, freedom, novelty, emergence, and growth are fundamental explanatory categories for process philosophy. This metaphysical perspective is to be contrasted with a philosophy of substance, the view that a fixed and permanent reality underlies the changing or fluctuating world of ordinary experience. Whereas substance philosophy emphasizes static being, process philosophy emphasizes dynamic becoming.
Although process philosophy is as old as the 6th-century BC Greek philosopher Heraclitus, renewed interest in it was stimulated in the 19th century by the theory of evolution. Key figures in the development of modern process philosophy were the British philosophers Herbert Spencer, Samuel Alexander, and Alfred North Whitehead, the American philosophers Charles S. Peirce and William James, and the French philosophers Henri Bergson and Pierre Teilhard de Chardin. Whitehead's Process and Reality: An Essay in Cosmology (1929) is generally considered the most important systematic expression of process philosophy.
Contemporary theology has been strongly influenced by process philosophy. The American theologian Charles Hartshorne, for instance, rather than interpreting God as an unchanging absolute, emphasizes God's sensitive and caring relationship with the world. A personal God enters into relationships in such a way that he is affected by the relationships, and to be affected by relationships is to change. So God too is in the process of growth and development.
Neurophysiology is the study of how nerve cells, or neurons, receive and transmit information. Two types of phenomena are involved in processing nerve signals: electrical and chemical. Electrical events propagate a signal within a neuron, and chemical processes transmit the signal from one neuron to another neuron or to a muscle cell.
The signals conveying everything that human beings sense and think, and every motion they make, follow nerve pathways in the human body as waves of ions (atoms or groups of atoms that carry electric charges). Australian physiologist Sir John Eccles discovered many of the intricacies of this electrochemical signaling process, particularly the pivotal step in which a signal is conveyed from one nerve cell to another. He shared the 1963 Nobel Prize in physiology or medicine for this work, which he described in a 1965 Scientific American article.
A neuron is a long cell that has a thick central area containing the nucleus; it also has one long process called an axon and one or more short, bushy processes called dendrites. Dendrites receive impulses from other neurons. (The exceptions are sensory neurons, such as those that transmit information about temperature or touch, in which the signal is generated by specialized receptors in the skin.) These impulses are propagated electrically along the cell membrane to the end of the axon. At the tip of the axon the signal is chemically transmitted to an adjacent neuron or muscle cell.
Like all other cells, neurons contain charged ions: potassium and sodium (positively charged) and chloride (negatively charged). Neurons differ from other cells in that they are able to produce a nerve impulse. A neuron is polarized; that is, it has an overall negative charge inside the cell membrane, because the concentration of potassium ions inside the cell is high while the concentrations of sodium and chloride ions are low, and these concentrations are roughly reversed outside the cell. This charge differential represents stored electrical energy, sometimes referred to as the membrane potential or resting potential. The negative charge inside the cell is maintained by two features. The first is the selective permeability of the cell membrane, which is more permeable to potassium than to sodium. The second feature is sodium pumps within the cell membrane that actively pump sodium out of the cell. When depolarization occurs, this charge differential across the membrane is briefly reversed, and a nerve impulse is produced.
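The equilibrium potentials behind the resting state can be made quantitative with the Nernst equation, E = (RT/zF) ln([ion]out/[ion]in). The sketch below is a standard textbook calculation, not something from this article, and the concentration values are typical assumed figures for a mammalian neuron.

```python
import math

def nernst(z, conc_out, conc_in, temp_k=310.0):
    """Nernst equilibrium potential (volts) for an ion of valence z at body temperature."""
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    return (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian concentrations in mmol/L (assumed textbook values).
E_K  = nernst(+1, conc_out=5.0,   conc_in=140.0)  # ~ -89 mV: potassium leak keeps the inside negative
E_Na = nernst(+1, conc_out=145.0, conc_in=12.0)   # ~ +67 mV: sodium influx drives depolarization
print(f"E_K = {E_K * 1000:.0f} mV, E_Na = {E_Na * 1000:.0f} mV")
```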
Depolarization is a rapid change in the permeability of the cell membrane. When sensory input or any other kind of stimulating current is received by the neuron, the membrane permeability changes, allowing a sudden influx of sodium ions into the cell. The resulting rise in sodium concentration, the action potential, changes the overall charge within the cell from negative to positive. The local change in ion concentration triggers similar reactions along the membrane, propagating the nerve impulse. After a brief period called the refractory period, during which the ionic concentrations return to the resting state, the neuron can repeat this process.
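The depolarize-spike-recover cycle just described is often caricatured with the leaky integrate-and-fire model. The sketch below is that generic textbook model, not a model given in this article, and every parameter value is assumed for illustration.

```python
# Leaky integrate-and-fire caricature of the depolarization / refractory cycle.

v_rest, v_thresh = -70.0, -55.0   # resting and threshold potentials, mV (assumed)
tau, dt = 10.0, 1.0               # membrane time constant and time step, ms
refractory_steps = 3              # steps during which the neuron cannot fire again

v, refractory = v_rest, 0
for t in range(100):
    stimulus = 20.0 if 10 <= t < 60 else 0.0      # injected input, mV (assumed)
    if refractory > 0:
        refractory -= 1                           # ionic concentrations recovering
        v = v_rest
        continue
    v += (dt / tau) * (v_rest - v + stimulus)     # leak toward rest plus input
    if v >= v_thresh:                             # depolarization reaches threshold
        print(f"spike at t={t} ms")
        v, refractory = v_rest, refractory_steps  # reset and enter refractory period
```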
Nerve impulses travel at different speeds, depending on the cellular composition of a neuron. Where speed of impulse is important, as in the nervous system, axons are insulated with a membranous substance called myelin. The insulation provided by myelin maintains the ionic charge over long distances. Nerve impulses are propagated at specific points along the myelin sheath; these points are called the nodes of Ranvier. Examples of myelinated axons are those in sensory nerve fibers and nerves connected to skeletal muscles. In non-myelinated cells, the nerve impulse is propagated more diffusely.
When the electrical signal reaches the tip of an axon, it stimulates small presynaptic vesicles in the cell. These vesicles contain chemicals called neurotransmitters, which are released into the microscopic space between neurons (the synaptic cleft). The neurotransmitters attach to specialized receptors on the surface of the adjacent neuron. This stimulus causes the adjacent cell to depolarize and propagate an action potential of its own. The duration of a stimulus from a neurotransmitter is limited by the breakdown of the chemicals in the synaptic cleft and the reuptake by the neuron that produced them. Formerly, each neuron was thought to make only one transmitter, but recent studies have shown that some cells make two or more.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James's work. In a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.
No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.
Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.
The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendencies” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.
The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focused on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.
By the 1920s, however, a remarkable revolution had occurred in psychology that was to essentially remove considerations of consciousness from psychological research for some 50 years: Behaviorism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behavior, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.
Beginning in the late 1950s, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. The surge in sleep and dream research was directly fueled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness.
During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focused attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.
Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now largely been demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focused attention.
Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide, or LSD; mescaline; and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.
As the concept of a direct, simple linkage between environment and behavior has become unsatisfactory in recent decades, the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behavior has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored. An entirely new area called cognitive psychology has emerged that centers on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behavior, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasized in favor of unconscious needs and motivations. Trends can now be seen toward a new emphasis on the nature of states of consciousness.
All human emotions-including love, hate, fear, anger, elation, and sadness-are controlled by the brain. It also receives and interprets the countless signals that are sent to it from other parts of the body and from the external environment. The brain makes us conscious, emotional, and intelligent.
The adult human brain is a 1.3-kg (3-lb) mass of pinkish-gray, jellylike tissue made up of approximately 100 billion nerve cells, or neurons; neuroglia (supporting-tissue) cells; and vascular (blood-carrying) and other tissues.
Between the brain and the cranium-the part of the skull that directly covers the brain-are three protective membranes, or meninges. The outermost membrane, the dura mater, is the toughest and thickest. Below the dura mater is a middle membrane, called the arachnoid layer. The innermost membrane, the pia mater, consists mainly of small blood vessels and follows the contours of the surface of the brain.
A clear liquid, the cerebrospinal fluid, bathes the entire brain and fills a series of four cavities, called ventricles, near the center of the brain. The cerebrospinal fluid protects the internal portion of the brain from varying pressures and transports chemical substances within the nervous system.
From the outside, the brain appears as three distinct but connected parts: the cerebrum (the Latin word for brain), two large, almost symmetrical hemispheres; the cerebellum (“little brain”), two smaller hemispheres located at the back of the cerebrum; and the brain stem, a central core that gradually becomes the spinal cord, exiting the skull through an opening at its base called the foramen magnum. Two other major parts of the brain, the thalamus and the hypothalamus, lie in the midline above the brain stem underneath the cerebrum.
The brain and the spinal cord together make up the central nervous system, which communicates with the rest of the body through the peripheral nervous system. The peripheral nervous system consists of 12 pairs of cranial nerves extending from the cerebrum and brain stem; a system of other nerves branching throughout the body from the spinal cord; and the autonomic nervous system, which regulates vital functions not under conscious control, such as the activity of the heart muscle, smooth muscle (involuntary muscle found in the skin, blood vessels, and internal organs), and glands.
Most high-level brain functions take place in the cerebrum. Its two large hemispheres make up approximately 85 percent of the brain's weight. The exterior surface of the cerebrum, the cerebral cortex, is a convoluted, or folded, grayish layer of cell bodies known as the gray matter. The gray matter covers an underlying mass of fibers called the white matter. The convolutions are made up of ridgelike bulges, known as gyri, separated by small grooves called sulci and larger grooves called fissures. Approximately two-thirds of the cortical surface is hidden in the folds of the sulci. The extensive convolutions enable a very large surface area of brain cortex-about 1.5 m2 (16 ft2) in an adult-to fit within the cranium. The pattern of these convolutions is similar, although not identical, in all humans.
The two cerebral hemispheres are partially separated from each other by a deep fold known as the longitudinal fissure. Communication between the two hemispheres is through several concentrated bundles of axons, called commissures, the largest of which is the corpus callosum.
Several major sulci divide the cortex into distinguishable regions. The central sulcus, or Rolandic fissure, runs from the middle of the top of each hemisphere downward, forward, and toward another major sulcus, the lateral (“side”), or Sylvian, sulcus. These and other sulci and gyri divide the cerebrum into five lobes: the frontal, parietal, temporal, and occipital lobes and the insula.
The frontal lobe is the largest of the five and consists of all the cortex in front of the central sulcus. Broca's area, a part of the cortex related to speech, is located in the frontal lobe. The parietal lobe consists of the cortex behind the central sulcus, extending back to a sulcus near the rear of the cerebrum known as the parieto-occipital sulcus. The parieto-occipital sulcus, in turn, forms the front border of the occipital lobe, which is the rearmost part of the cerebrum. The temporal lobe is to the side of and below the lateral sulcus. Wernicke's area, a part of the cortex related to the understanding of language, is located in the temporal lobe. The insula lies deep within the folds of the lateral sulcus.
The cerebrum receives information from all the sense organs and sends motor commands (signals that result in activity in the muscles or glands) to other parts of the brain and the rest of the body. Motor commands are transmitted by the motor cortex, a strip of cerebral cortex extending from side to side across the top of the cerebrum just in front of the central sulcus. The sensory cortex, a parallel strip of cerebral cortex just behind the central sulcus, receives input from the sense organs.
Many other areas of the cerebral cortex have also been mapped according to their specific functions, such as vision, hearing, speech, emotions, language, and other aspects of perceiving, thinking, and remembering. Cortical regions known as associative cortices are responsible for integrating multiple inputs, processing the information, and carrying out complex responses.
The cerebellum coordinates body movements. Located at the lower back of the brain beneath the occipital lobes, the cerebellum is divided into two lateral (side-by-side) lobes connected by a fingerlike bundle of white fibers called the vermis. The outer layer, or cortex, of the cerebellum consists of fine folds called folia. As in the cerebrum, the outer layer of cortical gray matter surrounds a deeper layer of white matter and nuclei (groups of nerve cells). Three fiber bundles called cerebellar peduncles connect the cerebellum to the three parts of the brain stem-the midbrain, the pons, and the medulla oblongata.
The cerebellum coordinates voluntary movements by fine-tuning commands from the motor cortex in the cerebrum. The cerebellum also maintains posture and balance by controlling muscle tone and sensing the position of the limbs. All motor activity, from hitting a baseball to fingering a violin, depends on the cerebellum.
The thalamus and the hypothalamus lie underneath the cerebrum and connect it to the brain stem. The thalamus consists of two rounded masses of gray tissue lying within the middle of the brain, between the two cerebral hemispheres. The thalamus is the main relay station for incoming sensory signals to the cerebral cortex and for outgoing motor signals from it. All sensory input to the brain, except that of the sense of smell, connects to individual nuclei of the thalamus.
The hypothalamus lies beneath the thalamus on the midline at the base of the brain. It regulates or is involved directly in the control of many of the body's vital drives and activities, such as eating, drinking, temperature regulation, sleep, emotional behavior, and sexual activity. It also controls the function of internal body organs by means of the autonomic nervous system, interacts closely with the pituitary gland, and helps coordinate activities of the brain stem.
The brain stem is evolutionarily the most primitive part of the brain and is responsible for sustaining the basic functions of life, such as breathing and blood pressure. It includes three main structures lying between and below the two cerebral hemispheres: the midbrain, pons, and medulla oblongata.
The topmost structure of the brain stem is the midbrain. It contains major relay stations for neurons transmitting signals to the cerebral cortex, as well as many reflex centers, pathways carrying sensory (input) information and motor (output) commands. Relay and reflex centers for visual and auditory (hearing) functions are located in the top portion of the midbrain. A pair of nuclei called the superior colliculus controls reflex actions of the eye, such as blinking, opening and closing the pupil, and focusing the lens. A second pair of nuclei, called the inferior colliculus, controls auditory reflexes, such as adjusting the ear to the volume of sound. At the bottom of the midbrain are reflex and relay centers relating to pain, temperature, and touch, as well as several regions associated with the control of movement, such as the red nucleus and the substantia nigra. Directly in front of the cerebellum is a prominent bulge in the brain stem called the pons. The pons consists of large bundles of nerve fibers that connect the two halves of the cerebellum and also connect each side of the cerebellum with the opposite-side cerebral hemisphere. The pons serves mainly as a relay station linking the cerebral cortex and the medulla oblongata.
The long, stalk-like lowermost portion of the brain stem is called the medulla oblongata. At the top, it is continuous with the pons and the midbrain; at the bottom, it makes a gradual transition into the spinal cord at the foramen magnum. Sensory and motor nerve fibers connecting the brain and the rest of the body cross over to the opposite side as they pass through the medulla. Thus, the left half of the brain communicates with the right half of the body, and the right half of the brain with the left half of the body.
Running up the brain stem from the medulla oblongata through the pons and the midbrain is a netlike formation of nuclei known as the reticular formation. The reticular formation controls respiration, cardiovascular function, digestion, levels of alertness, and patterns of sleep. It also determines which parts of the constant flow of incoming sensory information reach the cerebrum.
There are two main types of brain cells: neurons and neuroglia. Neurons are responsible for the transmission and analysis of all electrochemical communication within the brain and other parts of the nervous system. Each neuron is composed of a cell body, called a soma; a major fiber, called an axon; and a system of branches, called dendrites. Axons, also called nerve fibers, convey electrical signals away from the soma and can be up to 1 m (3.3 ft) in length. Most axons are covered with a protective sheath of myelin, a substance made of fats and protein, which insulates the axon. Myelinated axons conduct neuronal signals faster than do unmyelinated axons. Dendrites convey electrical signals toward the soma, are shorter than axons, and are usually multiple and branching.
Neuroglial cells are twice as numerous as neurons and account for half of the brain's weight. Neuroglia (from glia, Greek for "glue") provides structural support to the neurons. Neuroglial cells also form myelin, guide developing neurons, take up chemicals involved in cell-to-cell communication, and contribute to the maintenance of the environment around neurons.
Twelve pairs of cranial nerves arise symmetrically from the base of the brain and are numbered, from front to back, in the order in which they arise. They connect mainly with structures of the head and neck, such as the eyes, ears, nose, mouth, tongue, and throat. Some are motor nerves, controlling muscle movement; some are sensory nerves, conveying information from the sense organs; and others contain fibers for both sensory and motor impulses. The first and second pairs of cranial nerves, the olfactory (smell) nerve and the optic (vision) nerve, carry sensory information from the nose and eyes, respectively, to the undersurface of the cerebral hemispheres. The other ten pairs of cranial nerves originate in or end in the brain stem.
The brain functions by means of complex neuronal, or nerve cell, circuits. Communication between neurons is both electrical and chemical and always travels from the dendrites of a neuron, through its soma, and out its axon to the dendrites of another neuron.
Dendrites of one neuron receive signals from the axons of other neurons through chemicals known as neurotransmitters. The neurotransmitters set off electrical charges in the dendrites, which then carry the signals electrochemically to the soma. The soma integrates the information, which is then transmitted electrochemically down the axon to its tip.
At the tip of the axon, small bubble-like structures called vesicles release the neurotransmitters that carry the signal across the synapse, or gap, between two neurons. There are many types of neurotransmitters, including norepinephrine, dopamine, and serotonin. Neurotransmitters can be excitatory (that is, they excite an electrochemical response in the dendrite receptors) or inhibitory (they block the response of the dendrite receptors).
One neuron may communicate with thousands of other neurons, and many thousands of neurons are involved with even the simplest behavior. It is believed that these connections and their efficiency can be modified, or altered, by experience.
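This summing of excitatory and inhibitory influences can be illustrated with a toy model. The sketch below is a simple threshold unit in Python, offered only as an illustration under strong simplifying assumptions: real neurons integrate graded electrochemical signals over time, and the function name, weights, and threshold here are invented for the example, not drawn from any neuroscience library.

```python
def neuron_output(inputs, weights, threshold=1.0):
    """Toy neuron: sum the weighted inputs at the 'soma' and fire if the
    total reaches the threshold. Positive weights play the role of
    excitatory synapses, negative weights the role of inhibitory ones."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two excitatory inputs and one inhibitory input active together:
print(neuron_output([1, 1, 1], [0.7, 0.6, -0.5]))  # 0.8 < 1.0 -> stays silent (0)
# The same cell with the inhibitory input quiet:
print(neuron_output([1, 1, 0], [0.7, 0.6, -0.5]))  # 1.3 >= 1.0 -> fires (1)
```

In such a model, the modification of connections by experience mentioned above would correspond to adjusting the weights, a crude stand-in for synaptic plasticity.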
Scientists have used two primary approaches to studying how the brain works. One approach is to study brain function after parts of the brain have been damaged. Functions that disappear or that are no longer normal after injury to specific regions of the brain can often be associated with the damaged areas. The second approach is to study the response of the brain to direct stimulation or to stimulation of various sense organs.
Neurons are grouped by function into collections of cells called nuclei. These nuclei are connected to form sensory, motor, and other systems. Scientists can study the function of somatosensory (pain and touch), motor, olfactory, visual, auditory, language, and other systems by measuring the physiological (physical and chemical) changes that occur in the brain when these senses are activated. For example, electroencephalography (EEG) measures the electrical activity of specific groups of neurons through electrodes attached to the surface of the skull. Electrodes inserted directly into the brain can give readings of individual neurons. Changes in blood flow, glucose (sugar), or oxygen consumption in groups of active cells can also be mapped.
Although the brain appears symmetrical, how it functions is not. Each hemisphere is specialized and dominates the other in certain functions. Research has shown that hemispheric dominance is related to whether a person is predominantly right-handed or left-handed. In most right-handed people, the left hemisphere processes arithmetic, language, and speech. The right hemisphere interprets music, complex imagery, and spatial relationships and recognizes and expresses emotion. In left-handed people, the pattern of brain organization is more variable.
Hemispheric specialization has traditionally been studied in people who have sustained damage to the connections between the two hemispheres, as may occur with a stroke, an interruption of blood flow to an area of the brain that causes the death of nerve cells in that area. The division of functions between the two hemispheres has also been studied in people who have had to have the connection between the two hemispheres surgically cut in order to control severe epilepsy, a neurological disease characterized by convulsions and loss of consciousness.
The visual system of humans is one of the most advanced sensory systems in the body. More information is conveyed visually than by any other means. In addition to the structures of the eye itself, several cortical regions (collectively called the primary visual and visual associative cortices), as well as the midbrain, are involved in the visual system. Conscious processing of visual input occurs in the primary visual cortex, but reflexive (that is, immediate and unconscious) responses occur at the superior colliculus in the midbrain. Associative cortical regions (specialized regions that can associate, or integrate, multiple inputs) in the parietal and frontal lobes, along with parts of the temporal lobe, are also involved in the processing of visual information and the establishment of visual memories.
Language involves specialized cortical regions in a complex interaction that allows the brain to comprehend and communicate abstract ideas. The motor cortex initiates impulses that travel through the brain stem to produce audible sounds. Neighboring regions of motor cortex, called the supplemental motor cortex, are involved in sequencing and coordinating sounds. Broca's area of the frontal lobe is responsible for the sequencing of language elements for output. The comprehension of language is dependent upon Wernicke's area of the temporal lobe. Other cortical circuits connect these areas.
Memory is usually considered a diffusely stored associative process; that is, it puts together information from many different sources. Although research has failed to identify specific sites in the brain as locations of individual memories, certain brain areas are critical for memory to function. Immediate recall, the ability to repeat short series of words or numbers immediately after hearing them, is thought to be located in the auditory associative cortex. Short-term memory, the ability to retain a limited amount of information for up to an hour, is located in the deep temporal lobe. Long-term memory probably involves exchanges between the medial temporal lobe, various cortical regions, and the midbrain.
The autonomic nervous system regulates the life support systems of the body reflexively-that is, without conscious direction. It automatically controls the muscles of the heart, digestive system, and lungs; certain glands; and homeostasis-that is, the equilibrium of the internal environment of the body. The autonomic nervous system itself is controlled by nerve centers in the spinal cord and brain stem and is fine-tuned by regions higher in the brain, such as the midbrain and cortex. Reactions such as blushing indicate that cognitive, or thinking, centers of the brain are also involved in autonomic responses.
The brain is guarded by several highly developed protective mechanisms. The bony cranium, the surrounding meninges, and the cerebrospinal fluid all contribute to the mechanical protection of the brain. In addition, a filtration system called the blood-brain barrier protects the brain from exposure to potentially harmful substances carried in the bloodstream. Brain disorders have a wide range of causes, including head injury, stroke, bacterial diseases, complex chemical imbalances, and changes associated with aging.
Head injury can initiate a cascade of damaging events. After a blow to the head, a person may be stunned or may become unconscious for a moment. This injury, called a concussion, usually leaves no permanent damage. If the blow is more severe and hemorrhage (excessive bleeding) and swelling occur, however, severe headache, dizziness, paralysis, a convulsion, or temporary blindness may result, depending on the area of the brain affected. Damage to the cerebrum can also result in profound personality changes.
Damage to Broca's area in the frontal lobe causes difficulty in speaking and writing, a problem known as Broca's aphasia. Injury to Wernicke's area in the left temporal lobe results in an inability to comprehend spoken language, called Wernicke's aphasia.
An injury or disturbance to a part of the hypothalamus may cause a variety of different symptoms, such as loss of appetite with an extreme drop in body weight; increase in appetite leading to obesity; extraordinary thirst with excessive urination (diabetes insipidus); failure in body-temperature control, resulting in either low temperature (hypothermia) or high temperature (fever); excessive emotionality; and uncontrolled anger or aggression. If the relationship between the hypothalamus and the pituitary gland is damaged, other vital bodily functions may be disturbed, such as sexual function, metabolism, and cardiovascular activity.
Injury to the brain stem is even more serious because it houses the nerve centers that control breathing and heart action. Damage to the medulla oblongata usually results in immediate death.
A stroke is damage to the brain due to an interruption in blood flow. The interruption may be caused by a blood clot, constriction of a blood vessel, or rupture of a vessel accompanied by bleeding. A pouchlike expansion of the wall of a blood vessel, called an aneurysm, may weaken and burst, for example, because of high blood pressure.
Sufficient quantities of glucose and oxygen, transported through the bloodstream, are needed to keep nerve cells alive. When the blood supply to a small part of the brain is interrupted, the cells in that area die and the function of the area is lost. A massive stroke can cause a one-sided paralysis (hemiplegia) and sensory loss on the side of the body opposite the hemisphere damaged by the stroke.
Epilepsy is a broad term for a variety of brain disorders characterized by seizures, or convulsions. Epilepsy can result from a direct injury to the brain at birth or from a metabolic disturbance in the brain at any time later in life.
Some brain diseases, such as multiple sclerosis and Parkinson disease, are progressive, becoming worse over time. Multiple sclerosis damages the myelin sheath around axons in the brain and spinal cord. As a result, the affected axons cannot transmit nerve impulses properly. Parkinson disease destroys the cells of the substantia nigra in the midbrain, resulting in a deficiency in the neurotransmitter dopamine that affects motor functions.
Cerebral palsy is a broad term for brain damage sustained close to birth that permanently affects motor function. The damage may take place in the developing fetus, during birth, or just after birth and is the result of the faulty development or breakdown of motor pathways. Cerebral palsy is nonprogressive; that is, it does not worsen with time.
A bacterial infection in the cerebrum or in the coverings of the brain, swelling of the brain, or an abnormal growth of healthy brain tissue can all cause an increase in intracranial pressure and result in serious damage to the brain.
Scientists are finding that certain brain chemical imbalances are associated with mental disorders such as schizophrenia and depression. Such findings have changed scientific understanding of mental health and have resulted in new treatments that chemically correct these imbalances.
During childhood development, the brain is particularly susceptible to damage because of the rapid growth and reorganization of nerve connections. Problems that originate in the immature brain can appear as epilepsy or other brain-function problems in adulthood.
Several neurological problems are common in aging. Alzheimer's disease damages many areas of the brain, including the frontal, temporal, and parietal lobes. The brain tissue of people with Alzheimer's disease shows characteristic patterns of damaged neurons, known as plaques and tangles. Alzheimer's disease produces a progressive dementia, characterized by symptoms such as failing attention and memory, loss of mathematical ability, irritability, and poor orientation in space and time.
Several commonly used diagnostic methods give images of the brain without invading the skull. Some portray anatomy, that is, the structure of the brain, whereas others measure brain function. Two or more methods may be used to complement each other, together providing a more complete picture than would be possible by one method alone.
Magnetic resonance imaging (MRI), introduced in the early 1980s, beams high-frequency radio waves into the brain in a highly magnetized field that causes the protons that form the nuclei of hydrogen atoms in the brain to re-emit the radio waves. The re-emitted radio waves are analyzed by computer to create thin cross-sectional images of the brain. MRI provides the most detailed images of the brain and is safer than imaging methods that use X rays. However, MRI is a lengthy process and also cannot be used with people who have pacemakers or metal implants, both of which are adversely affected by the magnetic field.
Computed tomography (CT), also known as CT scanning, was developed in the early 1970s. This imaging method X-rays the brain from many different angles, feeding the information into a computer that produces a series of cross-sectional images. CT is particularly useful for diagnosing blood clots and brain tumors. It is a much quicker process than magnetic resonance imaging and is therefore advantageous in certain situations, for example, with people who are extremely ill.
Changes in brain function due to brain disorders can be visualized in several ways. Magnetic resonance spectroscopy measures the concentration of specific chemical compounds in the brain that may change during specific behaviors. Functional magnetic resonance imaging (fMRI) maps changes in oxygen concentration that correspond to nerve cell activity.
Positron emission tomography (PET), developed in the mid-1970s, uses computed tomography to visualize radioactive tracers, radioactive substances introduced into the brain intravenously or by inhalation. PET can measure such brain functions as cerebral metabolism, blood flow and volume, oxygen use, and the formation of neurotransmitters. Single photon emission computed tomography (SPECT), developed in the 1950s and 1960s, uses radioactive tracers to visualize the circulation and volume of blood in the brain.
Brain-imaging studies have provided new insights into sensory, motor, language, and memory processes, as well as brain disorders such as epilepsy; cerebrovascular disease; Alzheimer's, Parkinson, and Huntington's diseases; and various mental disorders, such as schizophrenia.
In lower vertebrates, such as fish and reptiles, the brain is often tubular and bears a striking resemblance to the early embryonic stages of the brains of more highly evolved animals. In all vertebrates, the brain is divided into three regions: the forebrain (prosencephalon), the midbrain (mesencephalon), and the hindbrain (rhombencephalon). These three regions further subdivide into different structures, systems, nuclei, and layers.
The more highly evolved the animal, the more complex is the brain structure. Human beings have the most complex brains of all animals. Evolutionary forces have also resulted in a progressive increase in the size of the brain. In vertebrates lower than mammals, the brain is small. In meat-eating animals, particularly primates, the brain increases dramatically in size.
The cerebrum and cerebellum of higher mammals are highly convoluted in order to fit the most gray matter surface within the confines of the cranium. Such highly convoluted brains are called gyrencephalic. Many lower mammals have a smooth, or lissencephalic ("smooth brain"), cortical surface.
There is also evidence of evolutionary adaptation of the brain. For example, many birds depend on an advanced visual system to identify food at great distances while in flight. Consequently, their optic lobes and cerebellum are well developed, giving them keen sight and outstanding motor coordination in flight. Rodents, on the other hand, as nocturnal animals, do not have a well-developed visual system. Instead, they rely more heavily on other sensory systems, such as a highly developed sense of smell and facial whiskers.
Recent research in brain function suggests that there may be sexual differences in both brain anatomy and brain function. One study indicated that men and women may use their brains differently while thinking. Researchers used functional magnetic resonance imaging to observe which parts of the brain were activated as groups of men and women tried to determine whether sets of nonsense words rhymed. Men used only Broca's area in this task, whereas women used Broca's area plus an area on the right side of the brain.
Analytic and Linguistic philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and "Oxford philosophy." The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focused on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato's expression of ideas in the form of dialogues-the dialectical method, used most famously by his teacher Socrates-has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th century in the English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
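Russell's celebrated theory of definite descriptions, though not quoted in this passage, makes the gap between grammatical and logical form concrete. The sentence "The present King of France is bald" is grammatically a simple subject-predicate statement, but on Russell's analysis its logical form is an existential claim, rendered here in modern notation rather than Russell's own symbolism:

$$\exists x\,[K(x) \land \forall y\,(K(y) \rightarrow y = x) \land B(x)]$$

where $K(x)$ abbreviates "x is a present King of France" and $B(x)$ abbreviates "x is bald." Since no such x exists, the sentence comes out false rather than meaningless, and the apparent grammatical subject vanishes under analysis.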
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
Russell's work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that "all philosophy is a 'critique of language'" and that "philosophy aims at the logical clarification of thoughts." The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts (the propositions of science) are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition "two plus two equals four." The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's Language, Truth and Logic in 1936.
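The positivists' example can be made vivid with a modern proof assistant, though this is offered only as an illustration; proof assistants postdate, and are no part of, the positivist program. In the Lean proof below, "two plus two equals four" is certified by rfl (reflexivity): both sides reduce to the same value purely by the definitions of the numerals and of addition, with no appeal to sense experience, which is just what the positivists meant in calling the proposition analytic.

```lean
-- "Two plus two equals four" as an analytic proposition:
-- rfl succeeds because both sides compute to the same value
-- by definition alone; no empirical input is involved.
example : 2 + 2 = 4 := rfl
```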
The positivist verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; translated, 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate "systematically misleading expressions" in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analyzing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigor of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
Existentialism is a loose title for various philosophies that emphasize certain common themes: the individual, the experience of choice, and the absence of rational understanding of the universe, with a consequent dread or sense of the "absurdity" in human life. More broadly, it is a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries.
Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.
Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existentialist, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, "I must find a truth that is true for me . . . the idea for which I can live or die." Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, Existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.
All Existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made Existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most Existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.
Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most Existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, Existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.
Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.
Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.
The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.
Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a 'leap of faith' into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.
Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focused on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.
One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche's theory of the Übermensch, a term translated as "Superman" or "Overman." The Superman was an individual who overcame what Nietzsche termed the "slave morality" of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that "God is dead," or that traditional morality was no longer relevant in people's lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.
Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favor of a heroic pagan ideal.
The modern philosophical movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).
Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis, in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology (see Metaphysics) as well as on language.
Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre's work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that "man is condemned to be free," Sartre reminds us of the responsibility that accompanies human decisions.
Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a "futile passion." Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.
Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theologies. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially that a personal sense of authenticity and commitment is essential to religious faith.
Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels which probed the motivations and moral justifications for his characters' actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky's best work, interlaces religious exploration with the story of a family's violent quarrels over a woman and a disputed inheritance.
Maurice Merleau-Ponty (1908-1961) was a French existentialist philosopher whose phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behavior (1942; trans. 1963), a critique of behaviorism. His major work, Phenomenology of Perception (1945; trans. 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. This book can be viewed as a critique of cognitivism, the view that the workings of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary, Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodiment.
With Sartre and Simone de Beauvoir, Merleau-Ponty founded an influential postwar French journal, Les Temps Modernes. His brilliant and timely essays on art, film, politics, psychology, and religion, first published in this journal, were later collected in Sense and Nonsense (1948; trans. 1964). At the time of his death, he was working on a book, The Visible and the Invisible (1964; trans. 1968), arguing that the whole perceptual world has the sort of organic unity he had earlier described in his studies of perception.
A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”
The opening lines of Russian novelist Fyodor Dostoyevsky's Notes from Underground (1864), "I am a sick man . . . I am a spiteful man," are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground is a sign of Dostoyevsky's rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader's sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an "overly conscious" intellectual.
In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theater of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur
The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. This problem lies at the foundations of epistemology, the branch of philosophy that addresses the philosophical problems surrounding the theory of knowledge. Epistemology is concerned with the definition of knowledge and related concepts, the sources and criteria of knowledge, the kinds of knowledge possible and the degree to which each is certain, and the exact relation between the one who knows and the object known.
Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. Author Anthony Kenny examines the complexities of Aquinas's concepts of substance and accident.
In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which it is possible to have exact and certain knowledge. The things one sees and touches, they maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. They concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.
Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that almost all knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, in accordance with the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.
After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.
From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on self-evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.
Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and also by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively self-evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.
Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything that human beings conceive of exists as an idea in a mind, a philosophical position known as idealism. Berkeley reasoned that because one cannot control one's thoughts, they must come directly from a larger mind: that of God. In this excerpt from his Treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed that it is “impossible . . . that there should be any such thing as an outward object.”
The Irish philosopher George Berkeley agreed with Locke that knowledge comes through ideas, but he denied Locke's belief that a distinction can be made between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge was of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas, that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world; and knowledge of matters of fact, that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, even the most reliable laws of science might not remain true, a conclusion that had a revolutionary impact on philosophy.
The German philosopher Immanuel Kant tried to solve the crisis precipitated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside of thought. He distinguished three kinds of knowledge: analytic a priori, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last kind of knowledge. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not such a thing as synthetic a priori knowledge really exists.
During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was further emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.
The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.
In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known as a result of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The neorealists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colors and sounds, these stand for physical objects and provide knowledge thereof.
A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear to be from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.
The history of science reveals that scientific knowledge and method did not spring full-blown from the minds of the ancient Greeks any more than language and culture emerged fully formed in the minds of Homo sapiens sapiens. Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations, but it was only after this inheritance from Greek philosophy was wed to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge: scientific knowledge; that any valid knowledge claim must be verifiable in experience; and hence that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, they insisted that a clear distinction be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes as a result of discussions among the logical empiricists themselves, as well as their critics, but it has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly the American philosopher W. V. O. Quine, whose overall approach is in the pragmatic tradition.
The second of these schools, generally referred to as linguistic analysis, or ordinary language philosophy, seems to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used, terms such as knowledge, perception, and probability, and to formulate definitive rules for their use in order to avoid verbal confusion. The British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin did not consider truth a quality or property attaching to statements or utterances. The ruling thought, however, is that it is only through a correct appreciation of the role and point of this language that we can come to a better conception of what the language is about, and avoid the oversimplifications and distortions we are apt to bring to its subject matter.
Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner's first language and about the language being acquired.
Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.
Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyze Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyzes it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words, such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
The linguist's next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of each morpheme and the grammatical rules of the sentence. In the sentence “She pushed the bush,” the morpheme she, a pronoun, is the subject; push, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provided descriptive linguists with a way to write down grammars of languages never before written down or analyzed. In this way they can begin to study and understand these languages.
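The minimal-pair test described above lends itself to a simple mechanical illustration. The following Python sketch is a toy, treating ordinary spellings as stand-ins for phonetic transcriptions; it is not a tool descriptive linguists actually use:

```python
from itertools import combinations

def minimal_pairs(words):
    """Return pairs of equal-length words differing in exactly one segment.

    Such pairs (e.g. 'push'/'bush') are the classic evidence that two
    sounds are distinct phonemes. Words are treated here as plain letter
    strings; a real analysis would use phonetic transcriptions.
    """
    pairs = []
    for a, b in combinations(words, 2):
        if len(a) == len(b):
            differences = sum(1 for x, y in zip(a, b) if x != y)
            if differences == 1:
                pairs.append((a, b))
    return pairs

print(minimal_pairs(["push", "bush", "pin", "bin", "dog"]))
# [('push', 'bush'), ('pin', 'bin')]
```

On this short list the sketch recovers exactly the push/bush contrast cited above, suggesting that /p/ and /b/ are distinct phonemes.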
Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to one another and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for “brother” resembles the Latin word frater, the Greek word phrater, and the English word brother.
Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.
Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in “go store tomorrow”). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.
Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of those people.
Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture-free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula, sketched below, to figure out when the languages separated from one another.
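The formula alluded to here is usually that of glottochronology, which estimates the time since two languages separated from the proportion of shared culture-free vocabulary. Both the method and its constants are disputed, so the following Python sketch, using Swadesh's commonly cited retention rate of about 0.805 per millennium, is purely illustrative:

```python
import math

def separation_time(shared_fraction, retention_rate=0.805):
    """Estimate millennia since two related languages separated.

    Implements the standard glottochronology formula
        t = ln(c) / (2 * ln(r))
    where c is the fraction of culture-free (core) vocabulary the two
    languages still share as cognates, and r is the assumed retention
    rate per millennium.
    """
    return math.log(shared_fraction) / (2 * math.log(retention_rate))

# Two languages sharing 70 percent of their core vocabulary:
print(round(separation_time(0.70), 2), "millennia")  # about 0.82
```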
By the 1960s comparativists were no longer satisfied with focusing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.
The field of linguistics both borrows from and lends its own theories and methods to other disciplines. The many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.
Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as “fourth floor” can indicate the person's social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing the /r/. Sometimes they even overcorrect their speech, pronouncing an /r/ where those whom they wish to copy may not.
Some sociolinguists believe that analyzing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or another quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. The goal of sociolinguistics is to understand communicative competence, that is, what people need to know in order to use the appropriate language for a given social setting.
Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children's language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to make out written language).
Computational linguistics involves the use of computers to compile linguistic data, analyze languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyze the relatedness and the structure of languages and to look for patterns and similarities. Computers also aid in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.
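As a small illustration of the frequency counting and concordance building mentioned above, the following Python sketch indexes a toy text; the sample sentence and context width are arbitrary choices, standing in for the large samples of actual language that computational linguists work with:

```python
import re
from collections import Counter

def concordance(text, keyword, width=20):
    """List each occurrence of keyword with its surrounding context,
    the kind of index computational linguists build from corpora."""
    lines = []
    for match in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        start, end = match.start(), match.end()
        lines.append(text[max(0, start - width):end + width].replace("\n", " "))
    return lines

text = "She pushed the bush. The bush by the door was pushed again."

# Word-frequency profile of the sample:
print(Counter(re.findall(r"[a-z]+", text.lower())).most_common(3))

# Keyword-in-context listing for "bush":
for line in concordance(text, "bush"):
    print("...", line, "...")
```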
Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.
Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyze culture. Anthropological linguists examine the relationship between a culture and its language, the way cultures and languages have changed over time, and how different cultures and languages are related to one another. For example, the present English use of family and given names arose in the late 13th and early 14th centuries, when the laws concerning registration, tenure, and inheritance of property were changed.
Philosophical linguistics examines the philosophy of language. Philosophers of language search for the grammatical principles and tendencies that all human languages share. Among their concerns is the range of possible word-order combinations throughout the world. One finding is that the great majority of the world's languages place the subject before the object, whether in subject-verb-object (SVO) order, as English does (“She pushed the bush.”), or in subject-object-verb (SOV) order; only a small minority use verb-subject-object (VSO) or other orders.
Neurolinguistics is the study of how language is processed and represented in the brain. Neurolinguists seek to identify the parts of the brain involved with the production and understanding of language and to determine where the components of language (phonemes, morphemes, and structure or syntax) are stored. In doing so, they make use of techniques for analyzing the structure of the brain and the effects of brain damage on language.
Speculation about language goes back thousands of years. Ancient Greek philosophers speculated on the origins of language and the relationship between objects and their names. They also discussed the rules that govern language, or grammar, and by the 3rd century BC they had begun grouping words into parts of speech and devising names for different forms of verbs and nouns.
In India religion provided the motivation for the study of language nearly 2500 years ago. Hindu priests noted that the language they spoke had changed since the compilation of their ancient sacred texts, the Vedas, starting about 1000 BC. They believed that for certain religious ceremonies based upon the Vedas to succeed, they needed to reproduce the language of the Vedas precisely. Panini, an Indian grammarian who lived about 400 BC, produced the earliest work describing the rules of Sanskrit, the ancient language of India.
The Romans used Greek grammars as models for their own, adding commentary on Latin style and usage. The statesman and orator Marcus Tullius Cicero wrote on rhetoric and style in the 1st century BC. Later grammarians Aelius Donatus (4th century AD) and Priscian (6th century AD) produced detailed Latin grammars. Roman works served as textbooks and standards for the study of language for more than 1000 years.
It was not until the end of the 18th century that language was researched and studied in a scientific way. During the 17th and 18th centuries, modern languages, such as French and English, replaced Latin as the means of universal communication in the West. This occurrence, along with developments in printing, meant that many more texts became available. At about this time, the study of phonetics, or the sounds of a language, began. Such investigations led to comparisons of sounds in different languages; in the late 18th century the observation of correspondences among Sanskrit, Latin, and Greek gave birth to the field of Indo-European linguistics.
During the 19th century, European linguists focused on philology, or the historical analysis and comparison of languages. They studied written texts and looked for changes over time or for relationships between one language and another.
American linguist, writer, teacher, and political activist Noam Chomsky is considered the founder of transformational-generative linguistic analysis, which revolutionized the field of linguistics. This system of linguistics treats grammar as a theory of language; that is, Chomsky believes that in addition to the rules of grammar specific to individual languages, there are universal rules common to all languages, which indicate that the ability to form and understand language is innate to all human beings. Chomsky is also well known for his political activism: he opposed United States involvement in Vietnam in the 1960s and 1970s and has written various books and articles and delivered many lectures in an attempt to educate and empower people on various political and social issues.
In the early 20th century, linguistics expanded to include the study of unwritten languages. In the United States linguists and anthropologists began to study the rapidly disappearing spoken languages of Native North Americans. Because many of these languages were unwritten, researchers could not use historical analysis in their studies. In their pioneering research on these languages, anthropologists Franz Boas and Edward Sapir developed the techniques of descriptive linguistics and theorized on the ways in which language shapes our perceptions of the world.
An important outgrowth of descriptive linguistics is a theory known as structuralism, which assumes that language is a system with a highly organized structure. Structuralism began with the publication of the work of Swiss linguist Ferdinand de Saussure in Cours de linguistique générale (1916; Course in General Linguistics, 1959). This work, compiled by Saussure's students after his death, is considered the foundation of the modern field of linguistics. Saussure made a distinction between actual speech, or spoken language, and the knowledge underlying speech that speakers share about what is grammatical. Speech, he said, represents instances of grammar, and the linguist's task is to find the underlying rules of a particular language from examples found in speech. To the structuralists, grammar is a set of relationships that account for speech, rather than a set of instances of speech, as it is to the descriptivists.
Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behavior, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and the analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.
Saussure's ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompei and Bombay the same way.
As linguistics developed in the 20th century, the notion became prevalent that language is more than speech; specifically, it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behavior shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.
The 1957 publication of Syntactic Structures by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language, the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that generate (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky's theories.
At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.
The orientation toward the scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance, the way people use language, to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework: what mental processes enable chimpanzees to make signs and communicate with one another, and how do these processes differ from those of humans?
A biographical note is owed to Ludwig Wittgenstein (1889-1951), an Austrian-British philosopher who was one of the most influential thinkers of the 20th century, particularly noted for his contribution to the movement known as analytic and linguistic philosophy.
Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-philosophicus (1921; trans. 1922), a work he then believed provided the “final solution” to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (pub. posthumously 1953; trans. 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.
Wittgenstein's philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that philosophy aims at the logical clarification of thoughts. In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”
Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analyzed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analyzed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein's picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or states of affairs. He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts, the propositions of science, are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.
Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one actually looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein's concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.
Analytic and linguistic philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focused on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
Russell's work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts, the propositions of science, are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's Language, Truth and Logic in 1936.
The positivist verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analyzing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant dimension of contemporary philosophy. A division remains, however, between those who prefer to work with the precision and rigor of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
The terms of a logical calculus, also called a formal language, constitute a logical system: a system in which explicit rules are provided for determining (1) which expressions belong to the system, (2) which sequences of expressions count as well formed (well-formed formulae), and (3) which sequences of formulae count as proofs. A system may include axioms, at which the branches of a proof terminate; the best-known examples are the propositional calculus and the predicate calculus.
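Rule (2), the test for well-formedness, is the easiest of the three to make concrete. The Python sketch below defines a toy propositional grammar, limited to three sentence letters, negation, and the conditional (an illustrative choice, not a standard presentation), and mechanically checks whether a string counts as a well-formed formula:

```python
def is_wff(expr):
    """Decide whether expr is a well-formed formula of a tiny
    propositional calculus with grammar:
        wff ::= p | q | r | (~ wff) | (wff -> wff)
    This mirrors rule (2) of a logical system: an explicit,
    mechanical test for which strings count as formulae.
    """
    tokens = expr.replace("(", " ( ").replace(")", " ) ").split()

    def parse(i):
        # Return the index just past a wff starting at token i, or None.
        if i < len(tokens) and tokens[i] in ("p", "q", "r"):
            return i + 1
        if i < len(tokens) and tokens[i] == "(":
            if i + 1 < len(tokens) and tokens[i + 1] == "~":  # (~ wff)
                j = parse(i + 2)
                if j is not None and j < len(tokens) and tokens[j] == ")":
                    return j + 1
                return None
            j = parse(i + 1)                                  # (wff -> wff)
            if j is not None and j < len(tokens) and tokens[j] == "->":
                k = parse(j + 1)
                if k is not None and k < len(tokens) and tokens[k] == ")":
                    return k + 1
            return None
        return None

    return parse(0) == len(tokens)

print(is_wff("(p -> (~ q))"))   # True
print(is_wff("(p -> ) q"))      # False
```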
The most immediate issues surrounding certainty are connected with those concerning scepticism. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., that there is a gulf between appearances and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus, so that the scepticism of Pyrrho and the New Academy was a system of argument opposing dogmatism, and particularly the philosophical system-building of the Stoics.
As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoics' conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptic concludes in epochē, or the suspension of belief, and then goes on to celebrate a way of life whose object was ataraxia, or the tranquillity resulting from suspension of belief.
Mitigated scepticism, by contrast, accepts everyday or commonsense belief, not as the deliverance of reason, but as due more to custom and habit, while remaining dissatisfied with the pretension of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase “Cartesian scepticism” is sometimes used, Descartes himself was not a sceptic; in the “method of doubt,” however, he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in categories of “clear and distinct” ideas, not far removed from the phantasiá kataleptikê of the Stoics.
Sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. Consider, for instance, the principle that every effect is a consequence of an antecedent cause or causes: for causality to hold it is not necessary for an effect to be predictable, as the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. In order to avoid scepticism, the anti-sceptic has generally held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy certain criteria, or standards, of being known: in saying that something is known by “deduction” or “induction,” for example, there will be criteria specifying when such inference is warranted. As with alleged cases of self-evident truths, a general principle specifies the sort of consideration that makes accepting a belief warranted to some degree.
Besides, there is another view: the absolutely global view that we do not have any knowledge whatsoever. However, it is doubtful that any philosopher seriously entertains absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to “the evident”; the non-evident is any belief that requires evidence in order to be warranted.
René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas; what he challenged was whether they “corresponded” to anything beyond ideas.
All the same, Pyrrhonism and Cartesian scepticism are forms of virtually global scepticism that have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist holds that no non-evident belief is sufficiently warranted, whereas the Cartesian sceptic will agree that no empirical belief about anything other than one's own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views thus concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.
A Cartesian requires certainty; a Pyrrhonist merely requires that a belief be more warranted than its negation.
Cartesian scepticism, unduly influenced by the way in which Descartes argued for it, holds that we have no knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly, is that there is a legitimate doubt about all such propositions, because there is no way justifiably to deny that our senses are being stimulated by some cause radically different from the objects we normally take to affect them. If the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.
Because the Pyrrhonist requires much less of a belief for it to count as knowledge than does the Cartesian, arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better reason for believing any proposition than for believing its negation, whereas the Cartesian need only show that knowledge of anything beyond the mind falls short of certainty.
Among pragmatism's many contributions to the theory of knowledge, it is possible to identify a set of shared doctrines, and also to discern two broad styles of pragmatism. Both styles hold that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.
Reformist pragmatism repudiates the requirement of absolute certainty for knowledge and insists on the connection of knowledge with activity; it nevertheless accepts the legitimacy of traditional questions about the truth-conduciveness of our cognitive practices, and it sustains a conception of truth objective enough to give those questions bite.
Revolutionary pragmatism, by contrast, relinquishes this objectivity and acknowledges no legitimate epistemological questions over and above those that arise naturally within our current cognitive practices.
It seems clear that certainty is a property that can be ascribed either to a person or to a belief. We can say that a person, S, is certain, or we can say that a proposition, p, is certain. The two uses can be connected by saying that S has the right to be certain just in case p is sufficiently warranted.
In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. More or less, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or ever possible, either for any proposition at all or for any proposition from some suspect family (ethics, theology, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion, and the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built. Others reject the metaphor, looking for mutual support and coherence without foundations.
In moral theory, however, absolutism is the view that there are inviolable moral standards, holding regardless of variable human desires, policies, or prescriptions.
In spite of the notorious difficulty of reading Kantian ethics, the distinction is clear enough: a hypothetical imperative embeds a command which is in place only given some antecedent desire or project: “If you want to look wise, stay quiet.” The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction does not apply. A categorical imperative, by contrast, cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, “tell the truth (regardless of whether you want to or not).” The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: “If you crave drink, don't become a bartender” may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
In Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: “act only on that maxim through which you can at the same time will that it should become universal law”; (2) the formula of the law of nature: “act as if the maxim of your action were to become through your will a universal law of nature”; (3) the formula of the end-in-itself: “act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end”; (4) the formula of autonomy, or considering “the will of every rational being as a will which makes universal law”; (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
A categorical proposition, even so, is one that is not conditional in form. Modern opinion is wary of the distinction between categorical and conditional propositions, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: “X is intelligent” (categorical?) may equal “if X is given a range of tasks, she performs them better than many people” (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
Outside physics, a field is a limited area of knowledge or endeavour on which pursuits, activities, and interests centre; in physical theory, however, a field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium whose properties result in such powers. Are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to commit us to ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding of how forces of attraction and repulsion can be “grounded” in the properties of the medium.
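The definitional point, that the field value at a point is the force a test particle would experience there, can be made concrete numerically. The following Python sketch assumes Newtonian point-mass gravity and arbitrary illustrative values:

```python
import math

G = 6.674e-11  # gravitational constant, N * m^2 / kg^2

def gravitational_field(source_mass, source_pos, point):
    """Field value at `point`: the force per unit mass a test particle
    would experience there, due to a point mass at source_pos.
    Returns the 3-component field vector (m/s^2), a schematic example
    of a physical quantity distributed over points in space."""
    dx = [p - s for p, s in zip(point, source_pos)]
    r = math.sqrt(sum(d * d for d in dx))
    magnitude = G * source_mass / r**2          # Newton's inverse-square law
    return [-magnitude * d / r for d in dx]     # directed toward the source

# Field of a 1000 kg mass at the origin, sampled at two points:
for pt in ([1.0, 0.0, 0.0], [0.0, 2.0, 0.0]):
    print(pt, gravitational_field(1000.0, [0.0, 0.0, 0.0], pt))
```

Whether such a function merely tabulates what a test particle would do (the dispositional reading) or describes a real modification of space (the categorical reading) is exactly the philosophical question at issue.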
The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to “action at a distance” muddies the waters. The idea is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Faraday, with whose work the physical notion became established. In his paper “On the Physical Character of the Lines of Magnetic Force” (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium and whether their motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.
Mention should also be made of pragmatism's account of truth, the view especially associated with the American psychologist and philosopher William James (1842-1910) that the truth of a statement can be defined in terms of the utility of accepting it. Put so baldly, the view is open to an immediate objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects of its possessor. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with adaptation, or with utility in the modest sense. The Wittgensteinian doctrine that meaning is use bears on the nature of belief and its relations with human attitude and emotion, and on the connection between belief in a truth on the one hand and action on the other. One way of cementing the connection is found in the idea that natural selection, because beliefs have effects, has adapted us as cognitive creatures whose beliefs, by and large, work. Pragmatism has roots in Kant's doctrine of the primacy of practical reason, and it has continued to play an influential role in the theory of meaning and truth.
James (1842-1910), with characteristic generosity, exaggerated his debt to Charles S. Peirce (1839-1914). Peirce charged that the Cartesian method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and he criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's consciousness.
From his earliest writings, James understood cognitive processes in teleological terms: thought, he held, serves the satisfaction of interests. His “Will to Believe” doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experiential situations, similarly reflects the teleological approach in its attention to consequences.
Such an approach, however, sets James's theory of meaning apart from verificationism, with its dismissal of metaphysics. Unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and moral responses. Moreover, his method supplied a standard for assessing the value of metaphysical claims, not a way of dismissing them as meaningless. It should also be noted that, in his more circumspect moments, James did not hold that even his broad set of consequences was exhaustive of a term's meaning. “Theism,” for example, he took to have an antecedent, definitional meaning, in addition to its important pragmatic meaning.
James's theory of truth reflects this teleological conception of cognition: a true belief is one that is compatible with our existing system of beliefs and leads us to satisfactory interaction with the world.
Peirce's famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid. If we believe this, we expect that if we dip litmus paper into it, the paper will turn red: we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a conceptual representation provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: for the clarificationist, using the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
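One schematic way of rendering such conditional expectations (the predicate names here are mine, purely for illustration) is:

\[ \mathrm{Acid}(x) \rightarrow \big( \mathrm{DipLitmus}(x) \rightarrow \mathrm{TurnsRed}(x) \big), \]

with one such conditional for each experimental expectation we associate with the concept; on the clarificationist reading, the complete, orderly list of these conditionals exhausts the clarified content of “acid.”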
What is most important is the application of the pragmatic principle in Peirce's account of reality: when we take something to be real, we think it is “fated to be agreed upon by all who investigate” the matter. In other words, if I believe that it is really the case that p, then I expect that anyone who inquired deeply enough into the matter would arrive at the belief that p. It is not part of the theory that the experimental consequences of our actions should be specified in a narrowly empiricist vocabulary: Peirce insisted that perceptual judgements are already theory-laden. Nor is it his view that the collected conditionals that clarify a concept are all analytic. In later writings, moreover, he argued that the pragmatic principle could only be made plausible to someone who accepted a metaphysical realism: it requires that “would-bes” are objective and, of course, real.
If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition to it, for they are legion. Some opponents deny that the entities posited by the relevant discourse exist, or at least exist independently: the standard example is “idealism,” according to which reality is somehow mind-dependent or mind-co-ordinated, so that the real objects comprising the “external world” do not exist independently of minds, but only as in some way correlative to mental operations. The doctrine of “idealism” turns on the conceptual point that reality as we understand it is meaningful and reflects the workings of mind, and it construes this as meaning that the inquiring mind itself makes a formative contribution not merely to our understanding of the nature of the “real” but even to the character we attribute to it.
The term “real” is most straightforwardly used when qualifying another description: a real “x” may be contrasted with a fake “x,” a failed “x,” a near “x,” and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some discourse we accept, such as a theory. The central error in thinking of reality as the totality of existence is to think of the “unreal” as a separate domain of things, perhaps unfairly denied the benefits of existence.
The idea of the non-existence of all things arises from the logical confusion of treating the term “nothing” as itself a referring expression instead of a “quantifier.” (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as “Nothing is all around us” talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate “is all around us” has application. The feelings that lead some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the familiar article one expected to see as usual in the corner has disappeared. The difference between “existentialism” and “analytic philosophy” on this point is that whereas the former is afraid of Nothing, the latter thinks that there is nothing to be afraid of.
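The logical point can be made explicit with a standard formalization (not in the original passage). Treating “nothing” as a quantifier, the sentence “Nothing is all around us” has the form

\[ \neg \exists x\, A(x), \]

where \(A(x)\) abbreviates “x is all around us”; the confusion consists in parsing it instead as \(A(n)\), as if “nothing” were a name \(n\) for a special thing that is all around us.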
A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.
The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), and borrowed from the “intuitionistic” critique of classical mathematics, is that the unrestricted use of the “principle of bivalence” is the trademark of “realism.” However, this has to overcome counter-examples both ways: although Aquinas was a moral “realist,” he held that moral reality was not sufficiently structured to make every moral claim true or false, while Kant believed that he could use the law of bivalence happily in mathematics, precisely because mathematics was our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the orthodox opposition to realism has come from philosophers such as Goodman, who is impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
The modern treatment of existence in the theory of “quantification” is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is therefore treated as a second-order property, or a property of properties. It is fitting to say that in this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like “This exists,” where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. “This exists” is thus unlike “Tame tigers exist,” where a property is said to have an instance, for the word “this” does not pick out a property, but only an individual.
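In the notation of quantification theory, Frege's dictum can be glossed as follows (a standard presentation, not in the original passage): “Tame tigers exist” says that the concept tame tiger has instances,

\[ \exists x\, T(x), \]

which is equivalent to denying that the number belonging to that concept is nought. The puzzle about “This exists” is that it supplies no predicate \(T\) for the quantifier to operate on: “this” picks out an individual, not a concept.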
Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.
The special way that we each have of knowing our own thoughts, intentions, and sensations has led many philosophers of behaviourist and functionalist tendencies to deny that there is such a special way, arguing that I know of my own mind in much the way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegelian usage, however, it came to mean universal or world history. The Enlightenment confidence that science, reason, and understanding gave history a progressive moral thread was taken further, under the influence of the German philosopher of Romanticism Johann Gottfried Herder (1744-1803) and of Immanuel Kant, into the idea that the philosophy of history is the detecting of a grand system: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engine of historical change. The idea is readily intelligible once the world of nature and the world of thought become identified. The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, equated with freedom within the state; this in turn is the development of thought, a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel's method is at its most successful when the object is the history of ideas, for the evolution of thinking may march in step with logical oppositions and their resolution as encountered by various systems of thought.
With the revolutionary communism of Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, one which uses Hegel's progressive structure but places the achievement of the goal of history in a future in which the political conditions for freedom come to exist, with economics and politics rather than “reason” in the engine room. Although speculations upon history of this kind continued to be written, by the late 19th century such large-scale speculation had largely given way to concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it is important to show that the human sciences, such as history, are objective and legitimate, but nonetheless different in some way from the enquiries of the scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is the ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the Verstehen approach: understanding others is not gained by the tacit use of a “theory” enabling us to infer what thoughts or intentions they experienced, but by re-living the situation and thereby understanding what they experienced and thought. The question of the form of historical explanation, and whether general laws have no place or only a minor place in the human sciences, is also prominent in these debates.
By comparison, Bolzano argues that there is something else: an infinity that does not have this ‘whatever you need it to be’ elasticity. In fact, a truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the example adduced it is in fact not variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any pre-assigned (finite) quantity, nevertheless to remain at all times merely finite, which holds in particular of every numerical quantity 1, 2, 3, 4, 5 . . .
In other words, for Bolzano there could be a true infinity that was not a variable ‘something’ that was merely bigger than anything you might specify. Such a true infinity was the result of joining two points together and extending that line in both directions without stopping. And what is more, he could separate off the demands of the calculus, using only finite quantities without ever bothering with the slippery potential infinity. Here was both a deeper understanding of the nature of infinity and the basis on which his ‘safe’, infinity-free calculus was built.
This use of the inexhaustible follows on directly from Bolzano’s criticism of the way that ∞ was used: as a variable something that would be bigger than anything you could specify, but that never quite reached the true, absolute infinity. In Paradoxes of the Infinite Bolzano points out that it is possible for a quantity merely capable of becoming larger than any pre-assigned (finite) quantity nevertheless to remain at all times merely finite.
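Bolzano's distinction can be put symbolically (my formalization of the passage just paraphrased). The ‘variable’, merely potential infinite says only that

\[ \forall m\, \exists n\, (n > m), \]

where every witness \(n\) is itself finite; a true or actual infinity, by contrast, is a single completed totality, such as the set \(\{1, 2, 3, \ldots\}\) or the unbounded line determined by two fixed points. Nothing in the first schema ever ceases to be finite, which is exactly Bolzano's point.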
Bolzano intended this as a criticism of the way infinity was treated, but Professor Jacquette sees it instead as a way of making use of practical applications like the calculus without the need for weasel words about infinity.
By replacing ∞ with ¤ we do away with one of the most common requirements for infinity, but is there anything left that maps onto the real world? Can we confine infinity to that pure mathematical other world, where anything, however unreal, can be constructed, and forget about it elsewhere? Surprisingly, this seems to have been the view, at least at one point in time, even of the German mathematician and founder of set theory Georg Cantor (1845-1918) himself, whose comments in 1883 suggest that only the finite numbers are real.
Keeping within the lines of reason, both the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30) and the Italian mathematician G. Peano (1858-1932) distinguished the logical paradoxes from those that depend upon the notion of reference or truth (semantic notions). Related are the postulates justifying mathematical induction: one of them ensures that the numerical series is closed, in the sense that nothing but zero and its successors can be numbers, and any series satisfying such a set of axioms can be conceived as the sequence of natural numbers. Candidates from set theory include the Zermelo numbers, where the empty set is zero and the successor of each number is its unit set, and the von Neumann numbers, where each number is the set of all smaller numbers. A similar and equally fundamental complementarity exists in the relation between zero and infinity. Although the fullness of infinity is logically antithetical to the emptiness of zero, infinity can be reached from zero with a simple mathematical operation: the division of a nonzero number by zero is, informally speaking, infinity, while the multiplication of any number by zero is zero.
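The two set-theoretic candidates mentioned above can be written out explicitly (these are the standard definitions):

\[ \text{Zermelo: } 0 = \varnothing,\quad n+1 = \{n\}, \quad \text{so } 1 = \{\varnothing\},\; 2 = \{\{\varnothing\}\},\; \ldots \]
\[ \text{von Neumann: } 0 = \varnothing,\quad n+1 = n \cup \{n\}, \quad \text{so } 1 = \{0\},\; 2 = \{0, 1\},\; \ldots \]

Both satisfy the Peano postulates; the von Neumann numbers have the convenient further property that each number \(n\) has exactly \(n\) members.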
Consider the set theory developed by the German mathematician and logician Georg Cantor. From 1874 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline in its own right. A set, as he defined it, is a collection of definite and distinguishable objects of thought or perception conceived as a whole.
Cantor attempted to prove that the process of counting and the definition of integers could be placed on a solid mathematical foundation. His method was repeatedly to place the elements in one set into ‘one-to-one’ correspondence with those in another. In the case of integers, Cantor showed that each integer (1, 2, 3, . . .) could be paired with an even integer (2, 4, 6, . . .), and, therefore, that the set of all integers was equal in size to the set of all even numbers.
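The pairing can be exhibited explicitly (a standard presentation of Cantor's method): the map

\[ f(n) = 2n \]

is one-to-one (if \(2m = 2n\) then \(m = n\)) and onto the even numbers (every even number is \(2n\) for some \(n\)), so the integers and the even integers have the same cardinality, even though the evens form a proper part of the integers.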
Amazingly, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. After this, the attempt to save the classical view of the logical foundations and internal consistency of mathematical systems failed, and it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.
In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a ‘redundancy theory of truth’, which he combined with radical views of the function of many kinds of propositions. Neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different, specific function in our intellectual economy.
Ramsey's name is also given to the sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of terms, the sentence gives the ‘topic-neutral’ structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then by the Löwenheim-Skolem theorem the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
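Schematically (a standard rendering, with \(T\) standing for the conjunction of the theory's assertions): where the theory affirms \(T(\text{quark})\), its Ramsey sentence is

\[ \exists X\, T(X), \]

and repeating the replacement for every theoretical term \(\tau_1, \ldots, \tau_n\) yields \(\exists X_1 \ldots \exists X_n\, T(X_1, \ldots, X_n)\), which preserves the structure of the theory while withdrawing any claim to know what the replaced terms denote.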
Perhaps the best known of the paradoxes in the foundations of ‘set theory’ was discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is an abstract object; others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not, and if it is not, then it is.
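Formally, the paradox is this: let

\[ R = \{\, x : x \notin x \,\}. \]

Then, by the definition of \(R\), we have \(R \in R \leftrightarrow R \notin R\), a contradiction; so no such class can consistently exist.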
The paradox is structurally similar to easier examples, such as the paradox of the barber: imagine a village with a barber in it who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not; but if he does not shave himself, then he does. The paradox is actually just a proof that there is no such barber, or in other words, that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.
The French mathematician and philosopher Henri Jules Poincaré (1854-1912) believed that paradoxes like those of Russell and the ‘barber’ were due to impredicative definitions, and he therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily accepted. The proposal, as forwarded by Poincaré and Russell, was that in order to solve the logical and semantic paradoxes one would have to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole. The underlying principle is that a sound definition involves no vicious circle or regress, while the offending definitions do. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge upon whether it is necessary to reapply the procedure. The cosmological argument, for instance, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.
The investigation of questions that arise from reflection upon science and scientific inquiry is called the philosophy of science. Such questions include: What is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics, or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that a general logic of scientific discovery or justification might be found. However, many now take an interest in a more historical, contextual, and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the then-available methods and paradigms, as well as the social context.
In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of such subjects as biology, mathematics, and physics.
An intuition is an immediate awareness, either of the truth of some proposition or of an object of apprehension, such as a concept. Intuition has long been held to be a place where philosophical accounts of the sources of our knowledge must begin, covering both the sensible apprehension of things and, in Kant, the pure intuition that structures sensation into the experience of things arrayed in space and time.
Natural law is a view of the status of law and morality especially associated with St Thomas Aquinas and the subsequent scholastic tradition. More widely, the term covers any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen to be true by ‘natural light’ or reason, and (in religious versions of the theory) that express God’s will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God’s will. The Dutch philosopher Hugo Grotius (1583-1645) takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view, thereby facing one horn of the Euthyphro dilemma, which arises whatever the source of authority is supposed to be: do we care about the general good because it is good, or do we just call good the things that we care about? The theory may take a strong form, in which it is claimed that various facts entail values, or a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are binding on all human beings regardless of their desires.
Although the morality of people and their ethics amount to much the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving ‘ethics’ for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of ‘moral’ considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, seeing them as frequently disguising the great complexity of practical reasoning. For Kant, the moral law is a binding requirement of the categorical imperative. Kant’s own applications of the notion are not always convincing, and one cause of confusion in relating Kant’s ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something ‘unconditional’ or ‘necessary’, such as the voice of reason.
Duty is that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties arise in virtue of specific relations, such as being the child of someone, or having made someone a promise. Duty or obligation is the primary concept of ‘deontological’ approaches to ethics, but it is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances: imperfect duties may have to give way to the more stringent ones. In another usage, perfect duties are those that are correlative with rights on the part of others; imperfect duties are not. Problems with the concept include the way in which duties need to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life, in which we are all forced conscripts in a kind of moral army, and it may encourage an individualistic and antagonistic view of social relations.
The most generally accepted account of the externalism/internalism distinction is that a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective, and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer’s cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.
The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.
The internalist requirement of cognitive accessibility can be interpreted in at least two ways: a strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, but without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible’ suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would seem to require the strong interpretation.
Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a justificandum belief is required to cohere and the coherence relations themselves are reflectively accessible.
It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).
The most prominent recent externalist views have been versions of reliabilism, whose requirement for justification is, roughly, that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.
Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, is the suggestion that the reliability of a cognitive process is to be assessed in ‘normal’ possible worlds, i.e., in possible worlds that are actually the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. Whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely ad hoc, remains open to a considerable degree.
The correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to ‘bite the bullet’ and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the question remains whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, the fact that beliefs for which such a factor is available are objectively likely to be true need not be in any way grasped by or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., which is a result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction does exist) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than with knowledge?
A rather different use of the terms ‘internalism’ and ‘externalism’ has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual’s mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. A view that appeals to both internal and external elements is standardly classified as an externalist view.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts ‘from the inside’, simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors, which will not in general be available to the person whose belief or thought is in question.
According to field theory, each individual constituent of the system in a certain sense exists, at any one time, in every part of the space occupied by the system. Its physical reality must be described by continuous functions in space. The material point, therefore, can hardly be retained as a basic concept of the theory.
A human being is part of the whole, and he experiences himself, his thoughts and feelings, as something separate from the rest: a kind of optical illusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures and the whole of nature in its beauty. Nobody can achieve this completely, but the striving for such achievement is in itself a part of the liberation and a foundation for inner security.
The more the universe seems comprehensible, the more it seems pointless, just as life may seem merely a disease of matter; and yet, I think, any attempt to preserve this view not only requires metaphysical leaps that result in unacceptable levels of ambiguity, but also fails to meet the requirement that testability is necessary to confirm the validity of all theoretical undertakings.
From the start, the languages of biblical literature were held to be equally valid sources of communion with the eternal and immutable truths existing in the mind of God. Yet the extant documents alone comprise more than a million words in his own hand, and some of his speculations seem quite bizarre by contemporary standards; they suggest a sacred union resting on an unexamined article of faith, worship offered upon the altar of an unknown god.
As no assumption can be taken for granted, and no thoughtful conclusion should be lightly dismissed as fallacious, in studying the phenomenon of consciousness we must nonetheless, exercising intellectual humility and caution, try to move ahead to reach some positive conclusions on the topic.
Our consciousness exhibits a striking unity, and unified consciousness can take more than one form. When we are aware of several conscious contents at once, we are not aware of them separately but together: I am aware not of 'A', and separately of 'B', and independently of 'C', but of 'A-and-B-and-C' simultaneously, or better, as all parts of the content of a single conscious state. Since the time of Kant, this phenomenon has been designated the ‘unity of consciousness’.
Historically, the notion of the unity of consciousness has played a very large role in thought about the mind. In point of fact, it figured centrally in most influential arguments about the mind from the time of Descartes to the 20th century. In the early part of the 20th century, the notion largely disappeared for a time; analytic philosophers began to pay attention to it again only in the 1960s. Let us first sketch this history up to the early 1900s. At that point, we can delineate the unity of consciousness more carefully and examine some evidence from neuropsychology, because both are necessary to understand the recent work on the issue.
Descartes asserted that, since in the unified consciousness of the mind he could distinguish no parts, the mind cannot be made of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this alone would be enough to prove dualism, had he not already proved it elsewhere.
Turn now to another, moderately complex argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James' well-known version of the argument starts as follows: take a sentence of a dozen words, take twelve men, and tell each man one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of it, we need to add the premise that if the mind were made out of matter, conscious states would have to be distributed over some group of material components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call this the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first argument, this one goes all the way back to Descartes; versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
February 9, 2010