The disagreement between us and Piaget on this point will be made quite clear by the following example: I am sitting at my desk talking to a person who is behind me and whom I cannot see; he leaves the room without my noticing it, and I continue to talk, under the illusion that he listens and understands. Outwardly, I am talking with myself and for myself, but psychologically my speech is social. From the point of view of Piaget’s theory, the opposite happens in the case of the child: His egocentric talk is for and with himself; it only has the appearance of social speech, just as my speech gave the false impression of being egocentric. From our point of view, the whole situation is much more complicated than that: Subjectively, the child’s egocentric speech already has its own peculiar function - to that extent, it is independent of social speech; yet its independence is not complete because it is not felt as inner speech and is not distinguished by the child from speech for others. Objectively, also, it is different from social speech but again not entirely, because it functions only within social situations. Both subjectively and objectively, egocentric speech represents a transition from speech for others to speech for oneself. It already has the function of inner speech but remains similar to social speech in its expression.
The investigation of egocentric speech has paved the way to the understanding of inner speech, while our experiments convinced us that inner speech must be regarded, not as speech minus sound, but as an entirely separate speech function. Its main distinguishing trait is its peculiar syntax. Compared with external speech, inner speech appears disconnected and incomplete.
This is not a new observation. All the students of inner speech, even those who approached it from the behaviouristic standpoint, noted this trait. The method of genetic analysis permits us to go beyond a mere description of it. We applied this method and found that as egocentric speech develops it shows a tendency toward an altogether specific form of abbreviation: namely, omitting the subject of a sentence and all words connected with it, while preserving the predicate. This tendency toward predication appears in all our experiments with such regularity that we must assume it to be the basic syntactic form of inner speech.
It may help us to understand this tendency if we recall certain situations in which external speech shows a similar structure. Pure predication occurs in external speech in two cases: either as an answer or when the subject of the sentence is known beforehand to all concerned. The answer to “Would you like a cup of tea?” is never “No, I do not want a cup of tea,” but a simple “No.” Obviously, such a sentence is possible only because its subject is tacitly understood by both parties. To “Has your brother read this book?” no one ever replies, “Yes, my brother has read this book.” The answer is a short “Yes,” or “Yes, he has.” Now let us imagine that several people are waiting for a bus. No one will say, on seeing the bus approach, “The bus for which we are waiting is coming.” The sentence is likely to be an abbreviated “Coming,” or some such expression, because the subject is plain from the situation. A shortened sentence, however, frequently causes confusion. The listener may relate the sentence to a subject foremost in his own mind, not the one meant by the speaker. If the thoughts of two people coincide, perfect understanding can be achieved through the use of mere predicates, but if they are thinking about different things they are bound to misunderstand each other.
Having examined abbreviation in external speech, we can now return enriched to the same phenomenon in inner speech, where it is not an exception but the rule. It will be instructive to compare abbreviation in oral, inner, and written speech. Communication in writing relies on the formal meanings of words and requires a much greater number of words than oral speech to convey the same idea. It is addressed to an absent person who rarely has in mind the same subject as the writer. Therefore, it must be fully deployed; syntactic differentiation is at a maximum, and expressions are used that would seem unnatural in conversation. Griboedov’s “He talks like writing” refers to the droll effect of elaborate constructions in daily speech.
The multifunctional nature of language, which has recently attracted the close attention of linguists, had already been pointed out by Humboldt in relation to poetry and prose – two forms very different in function and in the means they use. Poetry, according to Humboldt, is inseparable from music, while prose depends entirely on language and is dominated by thought. Consequently, each has its own diction, grammar, and syntax. This is a conception of primary importance, although neither Humboldt nor those who further developed his thought fully realised its implications. They distinguished only between poetry and prose, and within the latter between the exchange of ideas and ordinary conversation, i.e., the mere exchange of news or conventional chatter. There are other important functional distinctions in speech. One of them is the distinction between dialogue and monologue. Written and inner speech represent the monologue; oral speech, in most cases, the dialogue.
Dialogue always presupposes in the partners sufficient knowledge of the subject to permit abbreviated speech and, under certain conditions, purely predicative sentences. It also presupposes that each person can see his partners, their facial expressions and gestures, and hear the tone of their voices. We have already discussed abbreviation and will consider here only its auditory aspect, using a classical example from Dostoevski’s The Diary of a Writer to show how much intonation helps the subtly differentiated understanding of a word’s meaning.
Dostoevski relates a conversation of drunks that entirely consisted of one unprintable word: “One Sunday night I happened to walk for some fifteen paces next to a group of six drunken young labourers, and I suddenly realised that all thoughts, feelings and even a whole chain of reasoning could be expressed by that one noun, which is moreover extremely short. One young fellow said it harshly and forcefully, to express his utter contempt for whatever it was they had all been talking about. Another answered with the same noun but in a quite different tone and sense - doubting that the negative attitude of the first one was warranted. A third suddenly became incensed against the first and roughly intruded on the conversation, excitedly shouting the same noun, this time as a curse and obscenity. Here the second fellow interfered again, angry with the third, the aggressor, and restraining him, in the sense of “Now why do you have to butt in? We were discussing things quietly and here you come and start swearing.” And he told this whole thought in one word, the same venerable word, except that he also raised his hand and put it on the third fellow’s shoulder. All at once a fourth, the youngest of the group, who had kept silent till then, probably having suddenly found a solution to the original difficulty that had started the argument, raised his hand in a transport of joy and shouted . . . Eureka, do you think? I have it? No, not eureka and not I have it; he repeated the unprintable noun, one word, merely one word, but with ecstasy, in a shriek of delight - which was apparently too strong, because the sixth and the oldest, a glum-looking fellow, did not like it and cut the infantile joy of the other one short, addressing him in a sullen, exhortative bass and repeating . . . yes, still the same noun, forbidden in the presence of ladies but which this time clearly meant “What are you yelling yourself hoarse for?”
So, without uttering a single other word, they repeated that one beloved word six times in a row, one after another, and understood one another completely.” [The Diary of a Writer]
Inflection reveals the psychological context within which a word is to be understood. In Dostoevski’s story, it was contemptuous negation in one case, doubt in another, anger in the third. When the context is as clear as in this example, it really becomes possible to convey all thoughts, feelings, and even a whole chain of reasoning by one word.
In written speech, as tone of voice and knowledge of subject are excluded, we are obliged to use many more words, and to use them more exactly. Written speech is the most elaborate form of speech.
Some linguists consider dialogue the natural form of oral speech, the one in which language fully reveals its nature, and monologue to a great extent artificial. Psychological investigation leaves no doubt that monologue is indeed the higher, more complicated form, and of later historical development. At present, however, we are interested in comparing them only in regard to the tendency toward abbreviation.
The speed of oral speech is unfavourable to a complicated process of formulation; it does not leave time for deliberation and choice. Dialogue implies immediate unpremeditated utterance. It consists of replies, repartee; it is a chain of reactions. Monologue, by comparison, is a complex formation; the linguistic elaboration can be attended to leisurely and consciously.
In written speech, lacking situational and expressive supports, communication must be achieved only through words and their combinations; this requires speech activity to take complicated forms - hence the use of first drafts. The evolution from the draft to the final copy reflects our mental process. Planning has an important part in written speech, even when we do not actually write out a draft. Usually we say to ourselves what we are going to write; this is also a draft, though in thought only. As we tried to show in the preceding chapter, this mental draft is inner speech. Since inner speech functions as a draft not only in written but also in oral speech, we will now compare both these forms with inner speech in respect to the tendency toward abbreviation and predication.
This tendency, never found in written speech and only sometimes in oral speech, always arises in inner speech. Predication is the natural form of inner speech; psychologically, it consists of predicates only. It is as much a law of inner speech to omit subjects as it is a law of written speech to contain both subjects and predicates.
The key to this experimentally established fact is the invariable, inevitable presence in inner speech of the factors that facilitate pure predication: we know what we are thinking about, i.e., we always know the subject and the situation. Psychological contact between partners in a conversation may establish a mutual perception leading to the understanding of abbreviated speech. In inner speech, the “mutual” perception is always there, in absolute form; therefore, a practically wordless “communication” of even the most complicated thoughts is the rule. The predominance of predication is a product of development. In the beginning, egocentric speech is identical in structure with social speech, but in the process of its transformation into inner speech it gradually becomes less complete and coherent as it becomes governed by an almost entirely predicative syntax. Experiments show clearly how and why the new syntax takes hold. The child talks about the things he sees or hears or does at a given moment. As a result, he tends to leave out the subject and all words connected with it, condensing his speech frequently until only predicates are left. The more differentiated the specific function of egocentric speech becomes, the more pronounced are its syntactic peculiarities - simplification and predication. Hand in hand with this change goes decreasing vocalisation. When we converse with ourselves, we need even fewer words than Kitty and Levin did. Inner speech is speech almost without words.
With syntax and sound reduced to a minimum, meaning is more than ever in the forefront. Inner speech works with semantics, not phonetics. The specific semantic structure of inner speech also contributes to abbreviation. The syntax of meanings in inner speech is no less original than its grammatical syntax. Our investigation established three main semantic peculiarities of inner speech.
The first and basic one is the preponderance of the sense of a word over its meaning, a distinction we owe to Paulhan. The sense of a word, according to him, is the sum of all the psychological events aroused in our consciousness by the word. It is a dynamic, fluid, complex whole, which has several zones of unequal stability. Meaning is only one of the zones of sense, the most stable and precise zone. A word acquires its sense from the context in which it appears; in different contexts, it changes its sense. Meaning remains stable throughout the changes of sense. The dictionary meaning of a word is no more than a stone in the edifice of sense, no more than a potentiality that finds diversified realisation in speech.
The last words of the previously mentioned fable by Krylov, “The Dragonfly and the Ant,” are a good illustration of the difference between sense and meaning. The words “Go and dance!” have a definite and constant meaning, but in the context of the fable they acquire a much broader intellectual and affective sense. They come to mean both “Enjoy yourself” and “Perish.” This enrichment of words by the sense they gain from the context is the fundamental law of the dynamics of word meanings. A word in a context means both more and less than the same word in isolation: more, because it acquires new content; less, because its meaning is limited and narrowed by the context. The sense of a word, says Paulhan, is a complex, mobile, protean phenomenon; it changes in different minds and situations and is almost unlimited. A word derives its sense from the sentence, which in turn gets its sense from the paragraph, the paragraph from the book, the book from all the works of the author.
Paulhan rendered a further service to psychology by analysing the relation between word and sense and showing that they are much more independent of each other than word and meaning. It has long been known that words can change their sense. Recently it was pointed out that sense can change words or, better, that ideas often change their names. Just as the sense of a word is connected with the whole word, and not with its single sounds, the sense of a sentence is connected with the whole sentence, and not with its individual words. Therefore, a word may sometimes be replaced by another without any change in sense. Words and sense are relatively independent of each other.
In inner speech, the predominance of sense over meaning, of sentence over word, and of context over sentence is the rule.
This leads us to the other semantic peculiarities of inner speech. Both concern word combination. One of them is rather like agglutination, a way of combining words that is fairly frequent in some languages and comparatively rare in others. German often forms one noun out of several words or phrases. In some primitive languages, such adhesion of words is a general rule. When several words are merged into one word, the new word not only expresses a rather complex idea but designates all the separate elements contained in that idea. Because the stress is always on the main root or idea, such languages are easy to understand. The egocentric speech of the child displays some analogous phenomena. As egocentric speech approaches inner speech, the child uses agglutination more and more frequently as a way of forming compound words to express complex ideas.
The third basic semantic peculiarity of inner speech is the way in which senses of words combine and unite - a process governed by different laws from those governing combinations of meanings. When we observed this singular way of uniting words in egocentric speech, we called it “influx of sense.” The senses of different words flow into one another - literally “influence” one another - so that the earlier ones are contained in, and modify, the later ones. Thus, a word that keeps recurring in a book or a poem sometimes absorbs all the variety of sense contained in it and becomes, in a way, equivalent to the work itself. The title of a literary work expresses its content and completes its sense to a much greater degree than does the name of a painting or of a piece of music. Titles like Don Quixote, Hamlet, and Anna Karenina illustrate this very clearly; the whole sense of a work is contained in one name. Another excellent example is Gogol’s Dead Souls. Originally, the title referred to dead serfs whose names had not yet been removed from the official lists and who could still be bought and sold as if they were alive. It is in this sense that the words are used throughout the book, which is built up around this traffic in the dead. But through their intimate relationship with the work as a whole, these two words acquire a new and changing significance, an infinitely broader sense. When we reach the end of the book, “Dead Souls” means to us not so much the defunct serfs as all the characters in the story, who are alive physically but dead spiritually.
In inner speech, the phenomenon reaches its peak. A single word is so saturated with sense that many words would be required to explain it in external speech. No wonder that egocentric speech is incomprehensible to others. Watson says that inner speech would be incomprehensible even if it could be recorded. Its opaqueness is further increased by a related phenomenon that, incidentally, Tolstoy noted in external speech: In Childhood, Adolescence, and Youth, he describes how between people in close psychological contact words acquire special meanings understood only by the initiated. In inner speech, the same kind of idiom develops – the kind that is difficult to translate into the language of external speech.
With this we will conclude our survey of the peculiarities of inner speech, which we first observed in our investigation of egocentric speech. In looking for comparisons in external speech, we found that the latter already contains, potentially at least, the traits typical of inner speech; predication, decreased vocalisation, preponderance of sense over meaning, agglutination, etc., appear under certain conditions also in external speech. This, we believe, is the best confirmation of our hypothesis that inner speech originates through the differentiation of egocentric speech from the child’s primary social speech.
All our observations indicate that inner speech is an autonomous speech function. We can confidently regard it as a distinct plane of verbal thought. It is evident that the transition from inner to external speech is not a simple translation from one language into another. It cannot be achieved by merely vocalising silent speech. It is a complex, dynamic process involving the transformation of the predicative, idiomatic structure of inner speech into syntactically articulated speech intelligible to others.
We can now return to the definition of inner speech that we proposed before presenting our analysis. Inner speech is not the interior aspect of external speech; it is a function in itself. It remains speech, i.e., thought connected with words. But while in external speech thought is embodied in words, in inner speech words die as they bring forth thought. Inner speech is to a large extent thinking in pure meanings. It is a dynamic, shifting, unstable thing, fluttering between word and thought, the two more or less stable, more or less firmly delineated components of verbal thought. Its true nature and place can be understood only after examining the next plane of verbal thought, the one still more inward than inner speech.
That plane is thought itself. As we have said, every thought creates a connection, fulfils a function, solves a problem. The flow of thought is not accompanied by a simultaneous unfolding of speech. The two processes are not identical, and there is no rigid correspondence between the units of thought and speech. This is especially obvious when a thought process miscarries - when, as Dostoevski put it, a thought “will not enter words.” Thought has its own structure, and the transition from it to speech is no easy matter. The theatre faced the problem of the thought behind the words before psychology did. In teaching his system of acting, Stanislavsky required the actors to uncover the “subtext” of their lines in a play. In Griboedov’s comedy Woe from Wit, the hero, Chatsky, says to the heroine, who maintains that she has never stopped thinking of him, “Thrice blessed who believes. Believing warms the heart.” Stanislavsky interpreted this as “Let us stop this talk”; but it could just as well be interpreted as “I do not believe you. You say it to comfort me,” or as “Don’t you see how you torment me? I wish I could believe you. That would be bliss.” Every sentence that we say in real life has some kind of subtext, a thought hidden behind it. In the examples we gave earlier of the lack of coincidence between grammatical and psychological subject and predicate, we did not pursue our analysis to the end. Just as one sentence may express different thoughts, one thought may be expressed in different sentences. For instance, “The clock fell,” in answer to the question “Why did the clock stop?” could mean “It is not my fault that the clock is out of order; it fell.” The same thought, serving as self-justification, could take the form of “It is not my habit to touch other people’s things. I was just dusting here,” or a number of others.
Thought, unlike speech, does not consist of separate units. When I wish to communicate the thought that today I saw a barefoot boy in a blue shirt running down the street, I do not see every item separately: the boy, the shirt, its blue colour, his running, the absence of shoes. I conceive of all this in one thought, but I put it into separate words. A speaker often takes several minutes to disclose one thought. In his mind the whole thought is present at once, but in speech it has to be developed successively. A thought may be compared with a cloud shedding a shower of words. Precisely because thought does not have its automatic counterpart in words, the transition from thought to word leads through meaning. In our speech, there is always the hidden thought, the subtext. Because a direct transition from thought to word is impossible, there have always been laments about the inexpressibility of thought: “How shall the heart express itself? How shall another understand?”
Direct communication between minds is impossible, not only physically but psychologically. Communication can be achieved only in a roundabout way. Thought must pass first through meanings and then through words.
We come now to the last step in our analysis of verbal thought. Thought itself is engendered by motivation, i.e., by our desires and needs, our interests and emotions. Behind every thought there is an affective-volitional tendency, which holds the answer to the last “why” in the analysis of thinking. A true and full understanding of another’s thought is possible only when we understand its affective-volitional basis. We will illustrate this by an example already used: the interpretation of parts in a play. Stanislavsky, in his instructions to actors, listed the motives behind the words of their parts.
To understand another’s speech, it is not sufficient to understand his words; we must understand his thought. But even that is not enough - we must also know its motivation. No psychological analysis of an utterance is complete until that plane is reached.
In the end, verbal thought appeared as a complex, dynamic entity, and the relation of thought and word within it as a movement through a series of planes. Our analysis followed the process from the outermost to the innermost plane. In reality, the development of verbal thought takes the opposite course: from the motive that engenders a thought to the shaping of the thought, first in inner speech, then in meanings of words, and finally in words. It would be a mistake, however, to imagine that this is the only road from thought to word. The development may stop at any point in its complicated course; an infinite variety of movements back and forth, of ways still unknown to us, is possible. A study of these manifold variations lies beyond the scope of our present task.
Here we have wished to study the inner workings of thought and speech, hidden from direct observation. Meaning and the whole inward aspect of language, the side turned toward the person, not toward the outer world, have been so far an almost unknown territory. No matter how they were interpreted, the relations between thought and word were always considered constant, established forever. Our investigation has shown that they are, on the contrary, delicate, changeable relations between processes, which arise during the development of verbal thought. We did not intend to, and could not, exhaust the subject of verbal thought. We tried only to give a general conception of the infinite complexity of this dynamic structure - a conception starting from experimentally documented facts.
To association psychology, thought and word were united by external bonds, similar to the bonds between two nonsense syllables. Gestalt psychology introduced the concept of structural bonds but, like the older theory, did not account for the specific relations between thought and word. All the other theories grouped themselves around two poles - either the behaviourist concept of thought as speech minus sound or the idealistic view, held by the Wuerzburg school and Bergson, that thought could be “pure,” unrelated to language, and that it was distorted by words. Tjutchev’s “A thought once uttered is a lie” could well serve as an epigraph for the latter group. Whether inclining toward pure naturalism or extreme idealism, all these theories have one trait in common - their antihistorical bias. They study thought and speech without any reference to their developmental history.
Only a historical theory of inner speech can deal with this immense and complex problem. The relation between thought and word is a living process; thought is born through words. A word devoid of thought is a dead thing, and a thought unembodied in words remains a shadow. The connection between them, however, is not a preformed and constant one. It emerges in the course of development, and it evolves. To the Biblical “In the beginning was the Word,” Goethe makes Faust reply, “In the beginning was the deed.” The intent here is to detract from the value of the word, but we can accept this version if we emphasise it differently: In the beginning was the deed. The word was not the beginning; action was there first. The word is the end of development, crowning the deed.
We cannot close our study without mentioning the perspectives that our investigation opens. We studied the inward aspects of speech, which were as unknown to science as the other side of the moon. We showed that a generalised reflection of reality is the basic characteristic of words. This aspect of the word brings us to the threshold of a wider and deeper subject - the general problem of consciousness. Thought and language, which reflect reality in a way different from that of perception, are the key to the nature of human consciousness. Words play a central part not only in the development of thought but in the historical growth of consciousness as a whole. A word is a microcosm of human consciousness.
The hermetic tradition has long been concerned with the relationship between the inner world of our consciousness and the outer world of nature, between the microcosm and the macrocosm, the below and the above, the material and the spiritual, the centric and the peripheral. The hermetic world view, held by such figures as Robert Fludd, conceived of a great chain of being linking our inner spark of consciousness with all the facets of the Great World. Its adherents saw a platonic metaphysical clockwork, as it were, through which our inner world was linked by means of a hierarchy of beings and planes to the highest unity of the Divine.
This view, though comforting, is philosophically unsound, and the developments in thought since the early 17th century have made such a hermetic world view seem untenable and philosophically naive. It is impossible to argue the case for such a hermetic metaphysic with anyone who has had philosophical training, for they will quickly and mercilessly reveal deep philosophical contradictions in this world view.
So do we now have to abandon such a beautiful and spiritual world view and adopt the prevailing reductionist materialist conception of the world that has become accepted in the intellectual tradition of the West?
I am not so sure. There still remains the problem of our consciousness and its relationship to our material form - the Mind/Brain problem. Behavioural psychologists such as Skinner tried to reduce this to one level - the material brain - by viewing mental or consciousness events from the outside as merely stimulus-response loops. This simplistic view works well for basic reflex actions - "I itch therefore I scratch" - but dissolves into absurdity when applied to any real act of the creative intellect or artistic imagination. Skinner's determinism collapses when confronted with trying to explain the creative source of our consciousness revealing itself in an artist at work or a mathematician discovering through his thinking a new property of an abstract mathematical system. The psychologists' attempts to reduce the mind/brain problem to a merely material one of neurophysiology obviously failed. The idea that consciousness is merely a secretion or manifestation of a complex net of electrical impulses working within the mass of cells in our brain is now discredited. The advocates of this view are strongly motivated by a desire to reduce the world to one level, to get rid of the necessity for "consciousness," "mind" or "spirit" as a real facet of the world.
This materialistic determinism, in which everything in the world (including the phenomenon of consciousness) can be reduced to simple interactions on a physical/chemical level, belongs really to the nineteenth century scientific landscape. Nineteenth century science was founded upon a "Newtonian Absolute Physics" which provided a description of the world as an interplay of forces obeying immutable laws and following a predetermined pattern. This is the "billiard ball" view of the world - one in which, provided we are given the initial state of the system (the layout of the balls on the table, and the exact trajectory, momentum and other parameters of the cue ball, etc.), the exact layout after each interaction can theoretically be calculated to absolute precision. All could be reduced to the determinate interplay of matter obeying the immutable laws of physics. The concept of the "spiritual" was unnecessary, even "mind" was dispensable, and "God" of course had no place in this scheme of things.
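The "billiard ball" determinism described above can be made concrete with a small sketch. Assuming ideal one-dimensional perfectly elastic collisions (conservation of momentum and kinetic energy; the function name and numbers are purely illustrative), the final state follows exactly from the initial conditions, with no room for chance:

```python
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D perfectly elastic collision,
    derived from conservation of momentum and kinetic energy."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal masses: the cue ball stops dead and the object ball moves off
# with the cue ball's original velocity - fully determined in advance.
print(elastic_collision(1.0, 2.0, 1.0, 0.0))  # → (0.0, 2.0)
```

Given the initial state, every subsequent state is fixed by the conservation laws; this is precisely the kind of calculability that the nineteenth-century picture extended to the whole world.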
This comfortably solid "Newtonian" world view of the materialists has, however, been entirely undermined by the new physics of the twentieth century, and in particular by Quantum Theory. Physicists investigating the properties of sub-atomic matter found that deterministic Newtonian absolutism broke down at the foundation level of matter. An element of probability had to be introduced into the physicists' calculations, and each sub-atomic event was itself inherently unpredictable - one could only ascribe a probability to the outcome. The simple billiard ball model collapsed at the sub-atomic level. For if the billiard table was intended as a picture of a small region of space on the atomic scale, and each ball was to be a particle (an electron, proton, or neutron, etc.), then physicists came to realise that this model could not represent reality on that level. For in Quantum theory one cannot define both the position and the momentum of a particle at the same moment. As soon as we establish the parameters of motion of a body, its position becomes uncertain and can only be described mathematically as a wave of probability. Our billiard table dissolves into a fluid, ever-moving, undulating surface, with each ball at one moment focussed to a point and at another dissolving and spreading itself out over an area of the space of the table. Trying to play billiards at this sub-atomic level is rather difficult.
In the Quantum picture of the world, each individual event cannot be determined exactly, but has to be described by a wave of probability. There is a kind of polarity between the position and momentum of any particle, in which the two cannot be simultaneously determined. This is not a failing of experimental method but a property of the kinds of mathematical structures that physicists have to use to describe this realm of the world. The famous relation of Quantum theory embodying Heisenberg's Uncertainty Principle is: (uncertainty in position) x (uncertainty in momentum) >= h/4π, where h is Planck's constant; a parallel relation links energy and time: (uncertainty in energy) x (uncertainty in time) >= h/4π.
Thus if we try to fix the position of a particle (i.e., reduce the uncertainty in its position to a small value), then as a consequence of this relation the uncertainty in its momentum must increase to balance it, and we cannot find a value for the momentum of the particle simultaneously with fixing its position. Planck's constant being very small means that these effects only become dominant on the extremely small scales that lie within the realm of the atom.
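A short calculation shows why the smallness of Planck's constant confines these effects to the atomic realm. In its standard form the position-momentum relation reads Δx·Δp ≥ ħ/2 (ħ being h/2π). Using textbook values for the constants, the minimum velocity spread implied by localising an electron to an atom is enormous, while for a billiard ball it is utterly negligible:

```python
# Position-momentum uncertainty: dx * dp >= hbar / 2.
# Because hbar is tiny, the implied fuzziness is dominant for an
# electron confined to an atom but negligible for a billiard ball.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(dx_m, mass_kg):
    """Smallest velocity spread allowed once position is known to dx."""
    dp = HBAR / (2 * dx_m)   # minimum momentum uncertainty
    return dp / mass_kg      # convert to a velocity spread

# Electron localised to an atomic diameter (~1e-10 m):
dv_electron = min_velocity_uncertainty(1e-10, 9.109e-31)

# Billiard ball (0.17 kg) localised to a tenth of a millimetre:
dv_ball = min_velocity_uncertainty(1e-4, 0.17)

print(f"electron: {dv_electron:.2e} m/s   ball: {dv_ball:.2e} m/s")
```

The electron's minimum velocity spread comes out at hundreds of kilometres per second; the ball's at around 10^-30 m/s, far below anything measurable.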
So we see that the Quantum picture of reality has at its foundation a non-deterministic view of the fundamental building blocks of matter. Of course, when dealing with large masses of particles these quantum indeterminacies effectively cancel each other out, and physicists can determine and predict the state of large systems. Obviously planets, suns and galaxies, being composed of vast numbers of particles, do not exhibit any uncertainty in their positions and energies, for when we look at such a large aggregate as a totality, the total quantum uncertainty of the system is effectively reduced to zero, and in respect of their large-scale properties they can be treated as deterministic systems.
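This cancelling-out in large aggregates is essentially statistical averaging, as in the law of large numbers: the relative fluctuation of the whole shrinks roughly as 1/√N. A quick numerical sketch (using ordinary pseudo-random numbers as a stand-in for quantum fluctuations):

```python
# Why large aggregates look deterministic: independent fluctuations
# average out, shrinking the relative uncertainty roughly as 1/sqrt(N).
import random

random.seed(0)

def mean_fluctuation(n_particles, trials=200):
    """Average absolute deviation of the mean of n random +/-1 'kicks'."""
    total = 0.0
    for _ in range(trials):
        kicks = [random.choice((-1, 1)) for _ in range(n_particles)]
        total += abs(sum(kicks)) / n_particles
    return total / trials

small = mean_fluctuation(10)      # a handful of particles: noisy
large = mean_fluctuation(10_000)  # a macroscopic lump: nearly quiet
print(small, large)               # the second figure is far smaller
```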
Thus on the large scale we can effectively apply a deterministic physics, but when we wish to look in detail at the properties of the sub-atomic realm, lying at the root and foundation of our world, we must enter a domain of quantum uncertainties and find the neat ordered picture dissolving into a sea of ever flowing forces that we cannot tie down or set into fixed patterns.
Some people when faced with this picture of reality find comfort in dismissing the quantum world as having little to do with the "real world" of appearances. We do not live within the sub-atomic level after all. However, it does spill out into our outer world. Most of the various electronic devices of the past decades rely on the quantum tunnelling effect in transistors and silicon chips. The revolution in quantum physics has begun to influence the life sciences, and biologists and botanists are beginning to come up against quantum events as the basis of living systems, in the structure of complex molecules in the living tissues and membranes of cells for example. When we look at the blue of the sky, we are looking at a phenomenon only recently understood through quantum theory.
Although the Quantum picture of reality might seem strange indeed, I believe the picture it presents of the foundations of the material world - the ever-flowing sea of forces metamorphosing and interacting through the medium of "virtual" or quantum messenger particles - has certain parallels with the nature of our consciousness.
I believe that if we try to examine the nature of our consciousness we will find at its basis it exhibits "quantum" like qualities. Seen from a distant, large scale and external perspective, we seem to be able to structure our consciousness in an exact and precise way, articulating thoughts and linking them together into long chains of arguments and intricate structures. Our consciousness can build complex images through its activity and seems to have all the qualities of predictability and solidity. The consciousness of a talented architect is capable of designing and holding within itself an image of large solid structures such as great cathedrals or public buildings. A mathematician is capable of inwardly picturing an abstract mathematical system, deriving its properties from a set of axioms.
In this sense our consciousness might appear as an ordered and deterministic structure, capable of behaving like, and being explicable in the same terms as, other large-scale structures in the world. However, this is not so. For if, through introspection, we try to examine the way in which we are conscious - in a sense, to look at the atoms of our consciousness - this regular structure disappears. Our consciousness does not actually work in such an ordered way. We only nurture an illusion if we hold to the view that our consciousness is fixed by an ordered deterministic structure. True, we can create the large-scale designs of the architect, the abstract mathematical system, the cello concerto, but anyone who has built such structures within their consciousness knows that this is not achieved by a linear deterministic route.
Our consciousness is at its root a maverick, ever moving from one perception, feeling or thought to another. We can never hold it still or focus it at a point for long. Like the quantum nature of matter, the more we try to hold our consciousness to a fixed point, the greater the uncertainty in its energy becomes. So when we focus and narrow our consciousness to a fixed centre, it is all the more likely suddenly to jump, with a great rush of energy, to some seemingly unrelated aspect of our inner life. We all have such experiences at every moment of the day. In our daily work we try to focus our mind upon some problem, only to experience a sudden shift to another domain in ourselves: another image or emotional current intrudes, then vanishes again, like an ephemeral virtual particle in quantum theory.
Those who begin to work upon their consciousness through meditative exercises of various kinds will experience these quantum uncertainties in the field of consciousness particularly strongly.
In treating our consciousness as if it were a digital computer or a deterministic machine after the model of 19th-century science, I believe we foster a limited and false view of our inner world. We must now take the step toward a quantum view of consciousness, recognising that at its base and root our consciousness behaves like the ever-flowing sea of the sub-atomic world. The ancient hermeticists saw consciousness as the "Inner Mercury." Those who have experienced the paradoxical way in which the metal mercury is both dense and metallic and yet so elusive - flowing and breaking up into small globules, and just as easily coming together again - will see how perceptive the alchemists were of the inner nature of consciousness in choosing this analogy. Educators who treat the consciousness of children as if it were a filing cabinet to be filled with ordered arrays of knowledge are hopelessly wrong.
These reflections on the nature of consciousness bring us back to the mind/brain problem. The great difficulty in developing a theory of the way in which consciousness/mind is embodied in the activity of the brain has, I believe, arisen out of the erroneous attempt to press a deterministic view onto our brain activity. Skinner and the behaviourist psychologists attempted to picture the activity of the brain as a computer in which each cell behaved as an input/output device or a complex flip-flop. They saw nerve cells, with their axons (output fibres) and dendrites (input fibres), being linked together into complex networks. An electrical impulse arriving at a dendrite made a cell 'fire' and sent an impulse out along its axon, so setting another nerve cell into action. The resulting patterns of nerve impulses constituted a reflex action, an impulse to move a muscle, a thought, a feeling, an intuitive experience. All could be reduced to the behaviour of this web of axons and dendrites of the nerve cells.
This simplistic picture, of course, was insufficient to explain even the behaviour of creatures like worms with primitive nervous systems, and in recent years this approach has largely been abandoned, as it has become recognised that the events on the membranes of nerve cells are often triggered by shifts in the energy levels of sub-atomic particles such as electrons. In fact, at the root of such interactions lie quantum events, and the activity of the brain must now be seen as reflecting these quantum events.
The brain can no longer be seen as a vast piece of organic clockwork, but as a subtle device amplifying quantum events. If we trace a nerve impulse down to its root, there lies a quantum uncertainty, a sea of probability. So just how is it that this sea of probability can cast up such ordered structures and systems as the conception of a cello concerto or abstract mathematical entities? Perhaps here we may glimpse a way in which "spirit" can return into our physics.
The inner sea of quantum effects in our brain is in some way coupled to our ever flowing consciousness. When our consciousness focusses to a point, and we concentrate on some abstract problem or outer phenomenon, the physical events in our brain, the pattern of impulses, shifts in some ordered way. In a sense, the probability waves of a number of quantum systems in different parts of the brain, are brought into resonance, and our consciousness is able momentarily to create an ordered pattern that manifests physically through the brain. The thought, feeling, perception is momentarily earthed in physical reality, brought from the realm of the spiritual potential into outer actuality. This focussed ordering of the probability waves of many quantum systems requires an enormous amount of energy, but this can be borrowed in the quantum sense for a short instant of time. Thus we have through this quantum borrowing a virtual quantum state that is the physical embodiment of a thought, feeling, etc. However, as this can only be held for a short time, the quantum debt must be paid and the point of our consciousness is forced to jump to another quantum state, perhaps in another region of the brain. Thus our thoughts are jumbled up with emotions, perceptions, fantasy images.
The central point within our consciousness, our "spirit" in the hermetic sense, can now be seen as an entity that can work to control quantum probabilities. To our "spirit" the brain is a quantum sea providing a rich realm in which it can incarnate and manifest patterns down into the electrical/chemical impulses of the nervous system. (It has been calculated that the number of interconnections existing in our brains far exceeds the number of atoms in the whole universe - so in this sense the microcosm truly mirrors the macrocosm!) Through momentary quantum borrowing, our "spirit" presses a certain order into this sea, an order that manifests as a thought, emotion, etc. Such an ordered state can only exist momentarily, before our spirit or point of consciousness is forced to jump and move to other regions of the brain, where at that moment the pattern of probability waves of the particles in those nerve cells can reflect the form with which our spirit is trying to work.
This quantum borrowing to create regular patterns of probability waves is bought at a high price, in that a degree of disorder must inevitably arise whenever the spirit tries to focus and reflect a linked sequential chain of patterns into the brain (such as we would experience as a logical train of thought or an inward picture of some elaborate structure). Thus it is not surprising that our consciousness sometimes comes adrift and jumps about in a seemingly chaotic way. This quantum borrowing might also lie behind our need for sleep and dream, allowing the physical brain to rid itself of the shadowy echoes of the patterns pressed into it during waking consciousness. Dreaming may be that point in a cycle where consciousness and its vehicle interpenetrate and flow together, allowing the patterns and waves of probability to appear without any attempt to focus them to a point. In dream and sleep we experience our point of consciousness dissolving, decoupling and defocussing.
The central point of our consciousness, when actively thinking or feeling, must jump around the sea of patterns in our brain. (It is well known through neurophysiology that function cannot be located at a certain point in the brain, but that different areas and groups of nerve cells can take on a variety of different functions.) We all experience this when in meditation we merely let our consciousness move as it will. Then we come to sense the elusive mercurial eternal movement of the point of our consciousness within our inner space. You will find it to be a powerful and convincing experience if you try in meditation to follow the point of your consciousness moving within the space of your skull. Many religious traditions teach methods for experiencing this inner point of spirit.
I believe the movement of this point of consciousness, which appears as a pattern of probability waves in the quantum sea, must occur in extremely short segments of time - of necessity shorter than the time an electron takes to move from one state to another within the molecular structure of the nerve cell membranes. We are thus dealing with time scales significantly less than 10^-16 of a second, and possibly down to 10^-43 of a second. During such short periods of time, the Heisenberg Uncertainty Principle that lies at the basis of quantum theory means that this central spark of consciousness can borrow a large amount of energy, which explains how it can bring a large degree of ordering into a pattern. But although our point of consciousness lives at this enormously fast tempo, our brain, which transforms this into a pattern of electro/chemical activity, runs at a much slower rate. Between creating each pattern, our spark of consciousness must wait almost an eternity for it to be manifested on the physical level. Perhaps this may account for the sense we all have sometimes of taking an enormous leap in consciousness, or travelling through vast realms of ideas or flashes of images, in what is only a fleeting moment.
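The energy-time form of the uncertainty relation, ΔE·Δt ≥ ħ/2, lets us put rough numbers on this "borrowing" at the two time scales the essay mentions (a back-of-envelope sketch; the intervals are illustrative):

```python
# Energy-time uncertainty: dE * dt >= hbar / 2, so the shorter the
# interval, the more energy can be "borrowed" within it.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def borrowable_energy(dt_seconds):
    """Maximum energy loanable for an interval dt, in joules."""
    return HBAR / (2 * dt_seconds)

for dt in (1e-16, 1e-43):
    print(f"dt = {dt:.0e} s  ->  dE ~ {borrowable_energy(dt):.2e} J")
```

At 10^-16 s the loan is a few electron-volts (molecular-bond scale); at 10^-43 s it reaches hundreds of millions of joules, which is why such intervals are associated with extreme physics.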
At around 10^-43 of a second, time itself becomes quantized; that is, it appears as discontinuous particles of time, for there is no way in which time can manifest in quantities less than 10^-43 of a second (the so-called Planck time). For here the borrowed quantum energies distort the fabric of space, turning it back upon itself. Their time must have a stop. At such short intervals the energies available are great enough to create virtual black holes and wormholes in space-time, and at this level we have only a sea of quantum probabilities - the so-called Quantum Foam. Contemporary physics suggests that through these virtual wormholes in space-time there are links with all time, past and future, and through the virtual black holes even with parallel universes.
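The Planck time itself follows from combining three constants of nature, t_P = √(ħG/c⁵), which can be checked in a couple of lines:

```python
# The Planck time, t_P = sqrt(hbar * G / c^5): the scale below which
# known physics offers no description of time.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light, m/s

planck_time = math.sqrt(HBAR * G / C**5)
print(f"t_P ~ {planck_time:.2e} s")   # roughly 5.4e-44 s
```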
It must be somewhat above this level that our consciousness works, weaving probability waves into patterns and incarnating them in the receptive structure of our brains. Our being or spirit lives in this Quantum Foam, which is thus the Eternal Now, infinite in extent and a plenum of all possibilities. The patterns of everything that has been, that is now, and that will come to be exist latently in this quantum foam. Perhaps this is the realm through which the mystics stepped into timelessness, the eternal present, and sensed the omnipotence and omniscience of the spirit.
I believe that these exciting discoveries of modern physics could be the basis for a new view of consciousness and the way it is coupled to our physical nature in the brain. (Indeed, one of the fascinating aspects of Quantum theory which puzzles and mystifies contemporary physicists is the way in which their quantum description of matter requires that they recognise the consciousness of the observer as a factor in certain experiments. This enigma has caused not a few physicists to take an interest in spirituality, especially inclining them to eastern traditions like Taoism or Buddhism, and in time I hope that perhaps even the hermetic traditions might prove worthy of their interest.)
An important experiment carried out as recently as the summer of 1982 by the French physicist Alain Aspect demonstrated that physicists cannot get round the Uncertainty Principle and simultaneously determine the quantum states of particles, and confirmed that physicists cannot divorce the consciousness of the observer from the events observed. This experiment (in disproving the separability of quantum measurements) has confirmed what Einstein, Bohr and Heisenberg were only able to debate philosophically - that with quantum theory we have to leave behind our naive picture of reality as some unvarying clockwork structure. We are challenged by quantum theory to build new ways in which to picture reality - a physics, moreover, in which consciousness plays a central role, in which the observer is inextricably interwoven in the fabric of reality.
In a sense it may now be possible to build a new model of quantum consciousness, compatible with contemporary physics and which allows a space for the inclusion of the hermetic idea of the spirit. It may be that science has taken a long roundabout route through the reductionist determinism of the 19th century and returned to a more hermetic conception of our inner world.
In this short essay, incompletely argued though it may be, I hope I have at least presented some of the challenging ideas that lie behind the seeming negativity of our present age. For behind the hopelessness and despair of our times we stand on the brink of a great breakthrough to a new recognition of the vast spiritual depths that live within us all as human beings.
The idea that people may create devices that are conscious is known as artificial consciousness (AC). This is an ancient idea, perhaps dating back to the ancient Greek Promethean myth, in which conscious people were supposedly manufactured from clay - pottery being an advanced technology in those days. In modern science fiction artificial people or conscious beings are described as being manufactured from electronic components. The idea of artificial consciousness (also known as machine consciousness (MC) or synthetic consciousness) is an interesting philosophical problem in the twenty-first century because, with increased understanding of genetics, neuroscience and information processing, it may soon be possible to create an entity that is conscious. It might be possible biologically to create a being by manufacturing a genome that had the genes necessary for a human brain, and injecting this into a suitable host germ cell. Such a creature, when implanted and born from a suitable womb, would very possibly be conscious and artificial. But what properties of this organism would be responsible for its consciousness? Could such a being be made from non-biological components? Could the techniques used in the design of computers be adapted to create a conscious entity? And would it ever be ethical to do such a thing? Neuroscience hypothesizes that consciousness is the synergy generated by the inter-operation of various parts of our brain, what have come to be called the neuronal correlates of consciousness, or NCC. The brain seems to do this while avoiding the problem described in the homunculus fallacy and overcoming the problems described below in the section on the nature of consciousness. A quest for proponents of artificial consciousness is therefore to manufacture a machine to emulate this inter-operation, which no one yet claims fully to understand.
Consciousness is described at length in the consciousness article in Wikipedia. Informally, according to naive realism and direct realism, we perceive things in the world directly, and our brains merely perform processing; according to indirect realism and dualism, on the other hand, our brains contain data about the world obtained by processing, but what we perceive is some sort of mental model or state that appears to overlay physical things as a result of projective geometry (such as the point of observation in René Descartes' dualism). Which of these general approaches to consciousness is correct has not been resolved and is the subject of fierce debate. The theory of direct perception is problematic because it would seem to require some new physical theory that allows conscious experience to supervene directly on the world outside the brain. On the other hand, if we perceive things indirectly, via a model of the world in our brains, then some new physical phenomenon, other than yet more flows of data, would be needed to explain how the model becomes experience. If we perceive things directly, self-awareness is difficult to explain, because one of the principal reasons for proposing direct perception is to avoid Ryle's regress, in which internal processing becomes an infinite loop or recursion. The belief in direct perception also demands that we cannot 'really' be aware of dreams, imagination, mental images or any inner life, because these would involve recursion. Self-awareness is less problematic for entities that perceive indirectly because, by definition, they are perceiving their own state. However, as mentioned above, proponents of indirect perception must suggest some phenomenon, either physical or dualist, to prevent Ryle's regress. If we perceive things indirectly, then self-awareness might result from the extension of experience in time described by Immanuel Kant, William James and Descartes.
Unfortunately this extension in time may not be consistent with our current understanding of physics.
Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machines that implement the instructions need not be electronic; they could be mechanical or fluidic. Digital computers implement information processing. From the earliest days of digital computers people have suggested that these devices may one day be conscious. One of the earliest workers to consider this idea seriously was Alan Turing. The Wikipedia article on Artificial Intelligence (AI) considers this problem in depth. If technologists were limited to the principles of digital computing when creating a conscious entity, they would face the problems associated with the philosophy of strong AI. The most serious problem is John Searle's Chinese room argument, in which it is argued that the contents of an information processor have no intrinsic meaning - at any moment they are just a set of electrons or steel balls, etc. Searle's objection does not convince those who believe in direct perception, because they would maintain that 'meaning' is only to be found in the objects of perception, which they believe is the world itself. The objection is also countered by the concept of emergence, in which it is proposed that some unspecified new physical phenomenon arises in very complex processors as a result of their complexity. It is interesting that the misnomer digital sentience is sometimes used in the context of artificial intelligence research. Sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech. It draws attention to the way that conscious experience is a state rather than a process that might occur in processors.
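Searle's point can be made concrete in a few lines. The carrier state below (two arbitrary, illustrative numbers) has no intrinsic meaning; the "meaning" only appears once an outside observer chooses an encoding:

```python
# Searle's objection in miniature: the same carrier state has no
# intrinsic meaning - interpretation is imposed from outside.

state = [72, 105]  # the carrier: two numbers on some physical substrate

as_text = "".join(chr(b) for b in state)             # read as ASCII
as_temperatures = [f"{b} degrees F" for b in state]  # read as sensor data

print(as_text)           # "Hi" - a greeting, to an English reader
print(as_temperatures)   # the very same state, now weather readings
```

Nothing in the numbers themselves selects one reading over the other; that choice lives entirely in the interpreter.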
The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between physicalism and dualism. Dualists believe that there is something nonphysical about consciousness, while physicalists hold that all things are physical. Those who believe that consciousness is physical are not limited to those who hold that consciousness is a property of encoded information on carrier signals. Several indirect realist philosophers and scientists have proposed that, although information processing might deliver the content of consciousness, the state that is consciousness may be due to some other physical phenomenon. The eminent neurologist Wilder Penfield was of this opinion, and scientists such as Arthur Stanley Eddington, Roger Penrose, Hermann Weyl, Karl Pribram and Henry Stapp, among many others, have also proposed that consciousness involves physical phenomena more subtle than simple information processing. Even some of the most ardent supporters of consciousness in information processors, such as Dennett, suggest that some new, emergent, scientific theory may be required to account for consciousness. As was mentioned above, neither the ideas that involve direct perception nor those that involve models of the world in the brain seem to be compatible with current physical theory. It seems that new physical theory may be required, and the possibility of dualism is not, as yet, ruled out.
Some technologists working in the field of artificial consciousness are trying to create devices that appear conscious. These devices might simulate consciousness or actually be conscious, but provided they appear conscious the desired result has been achieved. In computer science, the term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, is likely to be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness, and this is also one of the definitions of consciousness. To support the concept of self-awareness, a definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts." In more general terms, an AC system should be theoretically capable of achieving various - or, on a stricter view, all - verifiable, known, objective, and observable aspects of consciousness, so that the device appears conscious. Another, less widely agreed, definition ties the word "conscious" to observability: a conscious entity is one possessing knowledge of its own internal states and/or of externally observable properties, such that observers can reasonably credit it with conscious experience.
There are various aspects and/or abilities that are generally considered necessary for an AC system, or which an AC system should be able to learn; these are very useful as criteria for determining whether a certain machine is artificially conscious. Those covered here are only the most cited; there are many others. The ability to predict (or anticipate) foreseeable events is considered a highly desirable attribute of AC by Igor Aleksander: he writes in Artificial Neuroconsciousness: An Update: "Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness." The multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Consciousness is sometimes defined as self-awareness. While self-awareness is very important, it may be subjective and is generally difficult to test. Another test of AC, in the opinion of some, should include a demonstration that a machine can learn the ability to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC; since we don't understand attentiveness in humans, we have no specific and known criteria to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focussed at any one time, at least during the aforementioned test. According to Antonio Chella of the University of Palermo: "The mapping between the conceptual and the linguistic areas gives the interpretation of linguistic symbols in terms of conceptual structures.
It is achieved by means of suitable recurrent neural networks with internal states. A sequential attentive mechanism is hypothesized that suitably scans the conceptual representation and, according to the hypotheses generated on the basis of previous knowledge, predicts and detects the interesting events occurring in the scene. Hence, starting from the incoming information, such a mechanism generates expectations and makes contexts in which hypotheses may be verified and, if necessary, adjusted." Awareness could be another required aspect. However, again, there are some problems with the exact definition of awareness. To illustrate this point, the philosopher David Chalmers (1996) controversially puts forward the panpsychist argument that a thermostat could be considered conscious: it has states corresponding to too hot, too cold, or the correct temperature. The results of neuro-scanning experiments on monkeys suggest that a process, not a state or object, activates neurons. For such a reaction a model of the process must be created, based on the information received through the senses; creating models in this way demands a lot of flexibility, and is also useful for making predictions. Personality is another characteristic that is generally considered vital for a machine to appear conscious. In the area of behavioural psychology there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression, such that human observers can interact with it in a meaningful way.
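Chalmers's thermostat example can be written out directly. The point of the panpsychist argument is precisely how little machinery it starts from - three discriminable states and nothing more (the target and tolerance values here are illustrative):

```python
# Chalmers's thermostat as code: three discriminable states.
# Whether having such states amounts to (proto-)experience is exactly
# the point under dispute; the code only shows how minimal the setup is.

def thermostat_state(temp_c, target_c=20.0, tolerance=1.0):
    """Return the thermostat's one 'experiential' dimension."""
    if temp_c < target_c - tolerance:
        return "too cold"
    if temp_c > target_c + tolerance:
        return "too hot"
    return "correct temperature"

print(thermostat_state(15.0))  # too cold
print(thermostat_state(25.0))  # too hot
print(thermostat_state(20.5))  # correct temperature
```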
However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is no longer generally considered useful. Learning is also considered necessary for AC. Engineering Consciousness, a summary by Ron Chrisley of the University of Sussex, says that consciousness is, or involves, self, transparency, learning (of dynamics), planning, heterophenomenology, split of attentional signal, action selection, attention and timing management. Daniel Dennett said in his article "Consciousness in Human and Robotic Minds" that "It might be vastly easier to make an initially unconscious or nonconscious 'infant' robot and let it 'grow up' into consciousness, more or less the way we all do." He explained that the robot Cog "will not be an adult at first, in spite of its adult size. But it is being designed to pass through an extended period of artificial infancy, during which it will have to learn from experience, experience it will gain in the rough-and-tumble environment of the real world," adding that "nobody doubts that any agent capable of interacting intelligently with a human being on human terms must have access to literally millions if not billions of logically independent items of world knowledge. Either these must be hand-coded individually by human programmers - a tactic being pursued, notoriously, by Douglas Lenat and his CYC team in Dallas - or some way must be found for the artificial agent to learn its world knowledge from (real) interactions with the (real) world." 
An interesting article about learning is "Implicit Learning and Consciousness" by Axel Cleeremans of the University of Brussels and Luis Jiménez of the University of Santiago, where learning is defined as "a set of phylogenetically advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments." Anticipation is the final characteristic that could possibly be used to make a machine appear conscious. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.
Newborn babies have been trying for centuries to convince us they are, like the rest of us, sensing, feeling, thinking human beings. Yet, struggling against thousands of years of ignorant supposition that newborns are partly human, sub-human, or not-yet human, the vast majority of babies arrive in hospitals today greeted by medical specialists who are still sceptical as to whether they can actually see, feel pain, learn, and remember what happens to them. Physicians, immersed in protocol, employ painful procedures, confident that no permanent impression, and certainly no lasting damage, will result from the manner in which babies are received into this world.
The way "standard medicine" sees infants - a view by no means universally shared by women or by the midwives who used to assist them at birth - has taken on increasing importance in a country where more than 95% of babies are born in hospitals and a quarter of these are surgically delivered. While this radical change was occurring, the psychological aspects of birth were little considered. In fact, for most of the century, medical beliefs about the infant's nervous system prevailed in psychology as well. However, in the last three decades, research psychology has invested heavily in infant studies and uncovered many previously hidden talents of both the fetus and the newborn baby. The findings are surprising: babies are more sensitive, more emotional, and more cognitive than we used to believe. They are not what we thought. Babies are so different that we must create new paradigms to describe accurately who they are and what they can do.
Not long ago, experts in pediatrics and psychology were teaching that babies were virtually blind, had no sense of colour, could not recognize their mothers, and heard in "echoes." They believed babies cared little about sharp changes in temperature at birth and had only a crude sense of smell and taste. Their pain was "not like our pain," their cries not meaningful, their smiles were "gas," and their emotions undeveloped. Worst of all, most professionals believed babies were not equipped with enough brain matter to permit them to remember, learn, or find meaning in their experiences.
These false and unflattering views are still widespread among both professionals and the public. No wonder people find it hard to believe that a traumatic birth, whether cesarean or vaginal, has significant, on-going effects.
Unfortunately, today these unfounded prejudices still carry the weight of "science" behind them, yet their harmful results for babies are hardly better than those of the rank superstitions of the past. The resistance of "experts" who continue to see infants in terms of their traditional incapacities may be the last great obstacle for babies to leap over before being embraced for who they really are. Old ideas are bound to die under the sheer weight of new evidence, but not before millions of babies suffer unnecessarily because their parents and their doctors do not know they are fully human.
As the light of research reaches into the dark corners of prejudice, we may thank those in the emerging field of prenatal/perinatal psychology. Since this field is often an interprofessional collaboration and does not fit conveniently into accepted academic departments, it is not yet recognized in the academic world by endowed chairs or even by formal courses. At present only a few courses exist throughout the world. Yet research teams have achieved a succession of breakthroughs that challenge standard "scientific" ideas of human development.
Scholars in this field respect the full range of evidence of infant capabilities, whether from personal reports contributed by parents, revelations arising from therapeutic work, or from formal experiments. Putting together all the bits and pieces of information gathered from around the globe yields a fundamentally different picture of a baby.
The main way information about sentient, conscious babies has reached the public, especially pregnant parents, has been via popular media: books, movies, magazine features, and television. Among the most outstanding have been The Secret Life of the Unborn Child by Canadian psychiatrist Thomas Verny (now in 25 languages), movies like Look Who's Talking, and several talk shows, including Oprah Winfrey, where a program on therapeutic treatment of womb and birth traumas probably reached 25 million viewers in 25 countries. Two scholarly journals are devoted entirely to prenatal/perinatal psychology, one in North America that began in 1986, and one in Europe beginning in 1989. The Association for Pre- and Perinatal Psychology and Health (APPPAH) is a gathering place for people interested in this field and who keep informed through newsletters, journals, and conferences.
Evidence that babies are sensitive, cognitive, and affected by their birth experiences comes from various sources. The oldest evidence is anecdotal and intuitive. Mothers are the principal contributors to the idea of the baby as a person, one you can talk to, and one who can talk back as well. This process, potentially available to any mother, is better explained in psychic terms than in word-based language: the exchange of thoughts is probably telepathic rather than linguistic.
Mothers who communicate with their infants know that the baby is a person, mind and soul, with understanding, wisdom, and purpose. This phenomenon is cross-cultural, probably universal, although all mothers do not necessarily engage in this dialogue. In an age of "science," a mother's intuitive knowledge is too often dismissed. What mothers know has not been considered as valid data. What mothers say about their infants must be venal, self-serving, or imaginary, and can never be equal to what is known by "experts" or "scientists."
This prejudice extends into a second category of information about babies, the evidence derived from clinical work. Although the work of psychotherapy is usually done by formally educated, scientifically trained, licensed persons who are considered expert in their field, the information they listen to is anecdotal and their methods are a blend of science and art.
Their testimony of infant intelligence, based on the recollections of clients, is often compelling. Therapists are privy to clients' surprising revelations, many of which show a direct connection between traumas surrounding birth and later disabilities of heart and mind. Although it is possible for these connections to be purely imaginary, we know they are not when hospital records and eyewitness reports confirm the validity of the memories. Obstetrician David Cheek, using hypnosis with a series of subjects, discovered that they could accurately report the full set of left and right turns and sequences involved in their own deliveries. This is technical information that no ordinary person would have unless his memories are accurate.
As a psychologist using hypnosis, I found it necessary to test the reliability of memories people gave me about their traumas during the birth process, memories that had not previously been conscious. I hypnotized mother-and-child pairs who said they had never spoken in any detail about that child's birth. I received a detailed report of what happened from the now-adult child, which I compared with the mother's report, also given in hypnosis.
The reports dovetailed at many points and were clearly reports of the same birth. By comparing one story with the other, I could see when the adult child was fantasizing rather than having accurate recall, but fantasy was rare. I concluded that these birth memories were real memories, and a reliable guide to what had happened.
Some of the first indications that babies are sentient came from the practice of psychoanalysis, stretching back to the beginning of the century to the pioneering work of Sigmund Freud. Although Freud himself was sceptical about the operation of the infant mind, his clients kept bringing him information that seemed to link their anxieties and fears to events surrounding their births. He theorized that birth might be the original trauma upon which later anxiety was constructed.
Otto Rank, Freud's associate, was more certain that birth traumas underlay many later neuroses, so he reorganized psychoanalysis around the assumption of birth trauma. He was rewarded by the rapid recovery of his clients who were "cured" in far less time than was required for a customary psychoanalysis. In the second half of the century, important advances have been made in resolving early trauma and memories of trauma.
Hypnotherapy, primal therapy, psychedelic therapies, various combinations of body work with breathing and sound stimulation, sand tray therapy, and art therapy have all proved useful in accessing important imprints, decisions, and memories stored by the infant mind. If there had been no working mind in infancy, of course, there would be no need to return to it to heal bad impressions, change decisions, and otherwise resolve mental and emotional problems.
A third burgeoning source of information about the conscious nature of babies comes from scientific experiments and systematic observations utilizing breakthrough technologies. In our culture, with its preference for refined measurement and strict protocols, these are the studies that get funding. And the results from this contemporary line of empirical research are surprising.
We have learned so much about babies in the last twenty years that most of what we thought we knew before is suspect, and much of it is obsolete. I will highlight the new knowledge in three sections: development of the physical senses, beginnings of self-expression, and evidence of active mental life.
First, we have a much better idea of our physical development, the process of embodiment from conception to birth. Our focus here is on the senses and when they become available during gestation. Touch is our first sense and perhaps our last. Sensitivity to touch begins in our faces at about seven weeks gestational age. Tactile sensitivity expands steadily to include most parts of the fetal body by 17 weeks. In the normal womb, touch is never rough, and temperature is relatively constant. At birth, this placid environment ends with dramatic new experiences of touch that no baby can overlook.
By only 14 weeks gestational age, the taste buds are formed, and ultrasound shows both sucking and swallowing. A fetus controls the frequency of swallowing amniotic fluid, and will speed up or slow down in reaction to sweet and bitter tastes. Studies show babies have a definite preference for sweet tastes. Hearing begins at 16 weeks, earlier than anyone thought possible. The ear is not complete until about 24 weeks, a fact revealing the complex nature of listening, which includes reception of vibrations through our skin, skeleton, and vestibular system as well as the ear. Babies in the womb listen to maternal sounds and to the immediate environment for almost six months. By birth, their hearing is about as good as ours.
Our sense of sight also develops before birth, although our eyelids remain fused from week 10 through 26. Nevertheless, babies in the womb will react to light flashed on the mother's abdomen. By the time of birth, vision is well-advanced, though not yet perfect. Babies have no trouble focussing at the intimate 16-inch distance where the faces of mothers and fathers are usually found.
Mechanisms for pain perception, like those for touch, develop early. By about three months, if babies are accidentally struck by a needle inserted into the womb to withdraw fluid during amniocentesis, they quickly twist away and try to escape from the needle. Intrauterine surgery, a new aspect of fetal medicine made possible in part by our new ability to see inside the womb, means new opportunities for fetal pain.
Although surgeons have long denied that prenates experience pain, a recent experiment in London proved unborn babies feel pain. Babies who were needled for intrauterine transfusions showed a 600% increase in beta-endorphins, hormones generated to deal with stress. In just ten minutes of needling, even 23-week-old fetuses were mounting a full-scale stress response. Needling at the intrahepatic vein provokes vigorous body and breathing movements.
Finally, our muscle systems develop under buoyant conditions in the fluid environment of the womb and are regularly used in navigating the area. However, after birth, in the dry world of normal gravity, our muscle systems look feeble. As everyone knows, babies cannot walk, and they struggle, usually in vain, to hold up their own heads. Because the muscles are still relatively undeveloped, babies give a misleading appearance of incompetence. In truth, babies have remarkably useful sensory equipment very much like our own.
A second category of evidence for baby consciousness comes from empirical research on bodily movement in utero. Except for the movement a mother and father could sometimes feel, we have had almost no knowledge of the extent and variety of movement inside the womb. This changed with the advent of real-time ultrasound imaging, giving us moment by moment pictures of fetal activity.
One of the surprises is that movement commences between eight and ten weeks gestational age. This has been determined with the aid of the latest round of ultrasound improvements. Fetal movement is voluntary, spontaneous, and graceful, not jerky and reflexive as previously reported. By ten weeks, babies move their hands to their heads, face, and mouth; they flex and extend their arms and legs; they open and close their mouths and rotate longitudinally. From 10 to 12 weeks onward, the repertoire of body language is largely complete and continues throughout gestation. Periodic exercise alternates with rest periods on a voluntary basis, reflecting individual needs and interests. Movement is self-expression and expresses individual personality.
Twins viewed periodically via ultrasound during gestation often show highly independent motor profiles and, over time, continue to distinguish themselves through movement both inside and outside the womb. They are expressing their individuality.
Close observation has brought many unexpected behaviours to light. By 16 weeks, male babies are having their first erections. As soon as they have hands, they are busy exploring everywhere and everything, feet, toes, mouth, and the umbilical cord: these are their first toys.
By 30 weeks, babies have an intense dream life, spending more time in the dream state of sleep than they ever do after they are born. This is significant because dreaming is definitely a cognitive activity, a creative exercise of the mind, and because it is a spontaneous and personal activity.
Observations of the fetus also reveal a number of reactions to conditions in the womb. Such reactions to provocative circumstances are a further sign of selfhood. Consciousness of danger and manoeuvres of self-defence are visible in fetal reactions to amniocentesis. Even when things go normally and babies are not struck by needles, they react with wild variations of normal heart activity, alter their breathing movements, may "hide" from the needle, and often remain motionless for a time - suggesting fear and shock.
Babies react with alarm to loud noises, car accidents, earthquakes, and even to their mother's watching terrifying scenes on television. They swallow less when they do not like the taste of amniotic fluid, and they stop their usual breathing movements when their mothers drink alcohol or smoke cigarettes.
In a documented report of work via ultrasound, a baby struck accidentally by a needle not only twisted away, but located the needle barrel and struck it repeatedly - surely aggressive and angry behaviour. Similarly, ultrasound experts have reported seeing twins hitting each other, while others have seen twins playing together, gently awakening one another, resting cheek-to-cheek, and even kissing. Such scenes, some at only 20 weeks, were never anticipated in developmental psychology. No one anticipated sociable or emotional behaviour until months after a baby's birth.
We can see emotion expressed in crying and smiling long before 40 weeks, the usual time of birth. We see first smiles on the faces of premature infants who are dreaming. Smiles and pleasant looks, along with a variety of unhappy facial expressions, tell us dreams have pleasant or unpleasant contents to which babies are reacting. Mental activity is causing emotional activity. Audible crying has been reported by 23 weeks, in cases of abortion, revealing that babies are experiencing very appropriate emotion by that time. Close to the time of birth, medical personnel have documented crying from within the womb, in association with obstetrical procedures that have allowed air to enter the space around the fetal larynx.
Finally, a third source of evidence for infant consciousness is the research that confirms various forms of learning and memory both in the fetus and the newborn. Since infant consciousness was considered impossible until recently, experts have had to accept a growing body of experimental findings illustrating that babies learn from their experiences. In studies that began in Europe in 1925 and America in 1938, babies have demonstrated all the types of learning formally recognized in psychology at the time: classical conditioning, habituation, and reinforcement conditioning, both inside and outside the womb.
In modern times, as learning has been understood more broadly, experiments have shown a range of learning abilities. Immediately after birth, babies show recognition of musical passages, which they have heard repeatedly before birth, whether it is the bassoon passage in Peter and the Wolf, "Mary Had a Little Lamb," or the theme music of a popular soap opera.
Language acquisition begins in the womb as babies listen repeatedly to their mothers' intonations and learn their mother tongue. As early as 25 weeks, the recording of a baby's first cry contains so many rhythms, intonations, and other features common to the mother's speech that their spectrograms can be matched. In experiments shortly after birth, babies recognize their mother's voice and prefer her voice to other female voices. In the delivery room, babies recognize their father's voice and recognize specific sentences their fathers have spoken, especially if the babies have heard these sentences frequently while they were in the womb. After birth, babies show special regard for their native language, preferring it to a foreign language.
Fetal learning and memory also extend to stories read aloud to babies repeatedly before birth. At birth, babies will alter their sucking behaviour to obtain recordings of the familiar stories. In a recent experiment, a French and American team had mothers repeat a particular children's rhyme each day from week 33 to week 37. After four weeks of exposure, babies reacted to the target rhymes and not to other rhymes, proving they recognize specific language patterns while they are in the womb.
Newborn babies quickly learn to distinguish their mother's face from other female faces, their mother's breast pads from other breast pads, their mother's distinctive underarm odour, and their mother's perfume if she has worn the same perfume consistently.
Premature babies learn from their unfortunate experiences in neonatal intensive care units. One boy, who endured surgery paralyzed with curare but given no pain-killing anaesthetics, developed a pervasive fear of doctors and hospitals that remains undiminished in his teens. He also learned to fear the sound and sight of adhesive bandages, a reaction to having some of his skin pulled off with adhesive tape during his stay in the premature nursery.
Confirmation that early experiences of pain have serious consequences later has come from recent studies of babies at the time of first vaccinations. Researchers who studied infants being vaccinated four to six months after birth discovered that babies who had experienced the pain of circumcision had higher pain scores and cried longer. The painful ordeal of circumcision had apparently conditioned them to pain and set their pain threshold lower. This is an example of learning from experience: perinatal pain.
Happily, there are other things to learn besides pain and torture. The Prenatal Classroom is a popular program of prenatal stimulation for parents who want to establish strong bonds of communication with a baby in the womb. One of the many exercises is the "Kick Game," which you play by responding to the child's kick by touching the spot your baby just kicked, and saying "kick, baby kick." Babies quickly learn to respond to this kind of attention: They do kick again and they learn to kick anywhere their parents touch. One father taught his baby to kick in a complete circle.
Babies also remember consciously the big event of birth itself, at least during the first years of their lives. Proof of this comes from little children just learning to talk. Usually around two or three years of age, when children are first able to speak about their experiences, some spontaneously recall what their birth was like. They tell what happened in plain language, sometimes accompanied by pantomime, pointing and sound effects. They describe water, black and red colours, the coming light, or dazzling light, and the squeezing sensations. Cesarean babies tell about a door or window suddenly opening, or a zipper that zipped open and let them out. Some babies remember fear and danger. They also remember and can reveal secrets.
One of my favourite stories of a secret birth memory came from Cathy, a midwife's assistant. With the birth completed, she found herself alone with a hungry, restless baby after the mother had gone to bathe and the chief midwife was busy in another room. Instinctively, Cathy offered the baby her own breast for a short time; then she wondered if this was appropriate and stopped feeding the infant without telling anyone what had happened. Years later, when the little girl was almost four, Cathy was babysitting her. In a quiet moment, she asked the child if she remembered her birth. The child did, and volunteered various accurate details. Then, moving closer to whisper a secret, she said, "You held me and gave me titty when I cried, and Mommy wasn't there." Cathy said to herself, "Nobody can tell me babies don't remember their births."
Is a baby a conscious and real person? To me it is no longer appropriate to speculate; it is too late to speculate when so much is known. The range of evidence now available - knowledge of the fetal sensory system, observations of fetal behaviour in the womb, and experimental proof of learning and memory - amply verifies what some mothers and fathers have sensed from time immemorial: that a baby is a real person. The baby is real in having a sense of self that can be seen in creative efforts to adjust to, or to influence, its environment. Babies show self-regulation (as in restricting swallowing and breathing), self-defence (as in retreating from invasive needles and strong light), and self-assertion (as in doing combat with a needle or striking out at a bothersome twin).
Babies are like us in having clearly manifested feelings in their reactions to assaults, injuries, irritations, or medically inflicted pain. They smile, cry, and kick in protest, manifesting fear, anger, grief, pleasure, or displeasure in ways that seem entirely appropriate to their circumstances. Babies are cognitive beings, thinking their own thoughts, dreaming their own dreams, learning from their own experiences, and remembering their own experiences.
An iceberg can serve as a useful metaphor to understand the unconscious mind, its relationship to the conscious mind and how the two parts of our mind can better work together. As an iceberg floats in the water, the huge mass of it remains below the surface.
Only a small percentage of the whole iceberg is visible above the surface. In this way, the iceberg is like the mind. The conscious mind is what we notice above the surface while the unconscious mind, the largest and most powerful part, remains unseen below the surface.
In our metaphor, the small part of the iceberg visible above the surface represents the conscious mind; the huge mass below the surface, the unconscious mind. The unconscious mind holds all awareness that is not presently in the conscious mind. All memories, feelings and thoughts that are out of conscious awareness are by definition "unconscious." It is also called the subconscious and is known as the dreaming mind or deep mind.
Knowledgeable and powerful in a different way than the conscious mind, the unconscious mind handles the responsibility of keeping the body running well. It has memory of every event we've ever experienced; it is the source and storehouse of our emotions; and it is often considered our connection with Spirit and with each other.
No model of how the mind works disputes the tremendous power that is in constant action below the tip of the iceberg. The conscious mind is constantly supported by unconscious resources. Just think of all the things you know how to do without conscious awareness. If you drive, you use more than 30 specific skills . . . without being aware of them. These are skills, not facts; they are processes, requiring intelligence, decision-making and training.
Besides these learned resources that operate below the surface of consciousness there are important natural resources. For instance, the unconscious mind regulates all the systems of the body and keeps them in harmony with each other. It controls heart rate, blood pressure, digestion, the endocrine system and the nervous system, just to name a few of its natural, automatic duties.
The conscious mind, like the part of the iceberg above the surface, is a small portion of the whole being. The conscious mind is what we ordinarily think of when we say "my mind." It's associated with thinking, analysing and making judgments and decisions. The conscious mind is actively sorting and filtering its perceptions because only so much information can reside in consciousness at once. Everything else falls back below the water line, into unconsciousness.
Only about seven bits of information, plus or minus two, can be held consciously at one time. Everything else we are thinking, feeling or perceiving now . . . along with all our memories . . . remains unconscious, until called into consciousness or until it rises spontaneously.
The imagination is the medium of communication between the two parts of the mind. In the iceberg metaphor, the imagination is at the surface of the water. It functions as a medium through which content from the unconscious mind can come into conscious awareness.
Communication through the imagination is two-way. The conscious mind can also use the medium of the imagination to communicate with the unconscious mind. The conscious mind sends suggestions about what it wants through the imagination to the unconscious. It imagines things, and the subconscious intelligences work to make them happen.
The suggestions can be words, feelings or images. Athletes commonly use images mentally to rehearse how they want to perform by picturing themselves successfully completing their competition. A tennis player may see a tennis ball striking the racket at just the right spot, at just the perfect moment in the swing. Studies show that this form of imaging improves performance.
However, the unconscious mind uses the imagination to communicate with the conscious mind far more often than the other way around. New ideas, hunches, daydreams and intuitions come from the unconscious to the conscious mind through the medium of the imagination.
An undeniable example of the power in the lower part of the iceberg is dreaming. Dream images, visions, sounds and feelings come from the unconscious. Those who are aware of their dreams know how rich and real they can be. Even filtered, as they are when remembered later by the conscious mind, dreams can be quite powerful experiences.
Many people have received workable new ideas and insights, relaxing daydreams, accurate hunches, and unexpected intuitive understandings by replaying their dreams in a waking state. These are everyday examples of what happens when unconscious intelligences and processes communicate through the imagination with the conscious mind.
Unfortunately, the culture has discouraged us from giving this information credibility. "It's just your imagination" is a commonly heard dismissal of information coming from the deep mind. This kind of conditioning has served to keep us disconnected from the deep richness of our vast unconscious resources.
In the self-healing work, we'll be using the faculty of the imagination in several ways: in regression processes, to access previously unconscious material from childhood, perinatal experiences, past lives, and even deeper realms of the "universal unconscious." Inner dialogue is another essential tool that makes use of the imagination in process work.
To carry the iceberg metaphor forward, each of us can be represented as an iceberg, with the larger part of ourselves remaining deeply submerged. And there's a place in the depths where all of our icebergs come together, a place in the unconscious where we connect with each other.
The psychologist Carl Jung named this realm the "Collective Unconscious." This is the area of mind where all humanity shares experience, and from which we draw the archetypal energies and symbols that are common to us all. "Past life" memories are drawn from this level of the unconscious.
Another, even deeper level can be termed the "Universal Unconscious," where experiences beyond just humanity's can also be accessed with regression processes. It is at this level that many "core issues" begin, and where their healing needs to be accomplished.
The unconscious connection "under the iceberg" between people is often more potent than the conscious-level connection, an important consideration in doing the healing work. Relationship is an area rich with triggers to deeply buried material needing healing. And some parts of us cannot be triggered in any way other than "under the iceberg."
Although the conscious mind, steeped in cognition and thought, is able to deceive another . . . the unconscious mind, based in feeling, will often give us information from under the iceberg that contradicts what is being communicated consciously.
"Sounds right but feels wrong" is an example of information from under the iceberg surfacing in the conscious mind but conflicting with what the conscious mind was able to attain on its own. This kind of awareness is also called "intuition."
Intuitive information comes without a searching of the conscious memory or a formulation to be filled by imagination. When we access the intuition, we seem to arrive at an insight by a path from unknown sources directly to the conscious awareness. Wham! Out of nowhere, in no time.
No matter what the precise neurological process, the ability to access and use information from the intuition is extremely valuable in the effective and creative use of the tools of self healing. In relating with others, it's important to realize that your intuition will bring you information about the other and your relationship from under the iceberg.
When your intuition is the source of your words and actions, they are usually much more appropriate and helpful than what thinking or other functions of the conscious mind could muster. What you do and say from the intuition in earnest communication will be meaningful to the other, even though it may not make sense to you.
The most skilful and comprehensive way to nurture and develop your intuition is to trust all of your intuitive insights. Trust encourages the intuition to be more present. Its information is then more accessible, and the conscious mind finds less reason to question, analyse or judge intuitive insights.
The primary skills needed for easy access and trust of intuitive information are: (1) The ability to get out of the way. (2) The ability to accept the information without judgment.
Two easy ways to access intuition and help the conscious mind get out of the way are: (1) Focus your attention in your abdominal area and imagine you have a "belly brain." As you feel into and sense this area, "listen" to what your belly brain has to say. This is often referred to as listening to our "gut feelings." (2) With your eyes looking down and to your left and slightly defocussed, simply feel into what to say next.
Once the intuition is flowing, it will continue easily unless it is blocked. The most usual blockage occurs when the conscious mind begins to judge the intuitive information. The best way to avoid this is to get the cooperation of the conscious mind, so that it will step aside and become the observer while intuition is being accessed.

Cosmic Consciousness is an ultra-high state of illumination in the human Mind that is beyond that of "self-awareness" and "ego-awareness." In the attainment of Cosmic Consciousness, the human Mind has entered a state of Knowledge instead of mere beliefs, a state of "I know" instead of "I believe." This state of Mind is beyond that of sense reasoning, in that it has attained an awareness of the Universe and its relation to being, and a recognition of the Oneness in all things, that is not easily shared with others who have not personally experienced this state of Mind. The attainment of Cosmic illumination will cause an individual to seek solitude from the multitude, and isolation from the noisy world of mental pollution.
Carl Jung was a student and follower of Freud. He was born in a small town in Switzerland in 1875, and all his life he was fascinated by folk tales, myths and religious stories. Although he had a close friendship with Freud early in their relationship, his independent and questioning mind soon caused a break.
Jung did not accept Freud’s contention that the primary motivation behind behaviour was sexual urges. Instead of Freud’s instinctual drives of sex and aggression, Jung believed that people are motivated by a more general psychological energy that pushes them to achieve psychological growth, self-realization, psychic wholeness and harmony. Also, unlike Freud, he believed that personality continues to develop throughout the lifespan.
It is for his ideas of the collective unconscious that students of literature and mythology are indebted to Jung. In studying different cultures, he was struck by the universality of many themes, patterns, stories and images. These same images, he found, frequently appeared in the dreams of his patients. From these observations, Jung developed his theory of the collective unconscious and the archetypes.
Like Freud, Jung posited the existence of a conscious and an unconscious mind. A model that psychologists frequently use here is an iceberg. The part of the iceberg that is above the surface of the water is seen as the conscious mind. Consciousness is the part of the mind we know directly. It is where we think, feel, sense and intuit. It is through conscious activity that the person becomes an individual. It’s the part of the mind that we “live in” most of the time, and it contains information that is in our immediate awareness.

Below the level of the conscious mind, making up the bulk of the iceberg, is what Freud would call the unconscious, and what Jung would call the “personal unconscious.” Here we will find thoughts, feelings, urges and other information that is difficult to bring to consciousness. Experiences that do not reach consciousness, experiences that are not congruent with who we think we are, and things that have become “repressed” make up the material at this level. The contents of the personal unconscious are available through hypnosis, guided imagery, and especially dreams. Although not directly accessible, material in the personal unconscious got there sometime during our lifetime.

For example, the reason you are going to school now, why you picked a particular shirt to wear, or your choice of a career may be a choice you reached consciously. But it is also possible that your education, career, or clothing style has been influenced by a great deal of unconscious material: parents’ preferences, childhood experiences, even movies you have seen but about which you do not think when you make choices or decisions. Thus, the depth psychologist would say that many decisions, indeed some of the most important ones that have to do with choosing a mate or a career, are determined by unconscious factors. But still, material in the personal unconscious has been environmentally determined.
The collective unconscious is different. It’s like eye colour. If someone were to ask you, “How did you get your eye colour?” you would have to say that there was no choice involved – conscious or unconscious. You inherited it. Material in the collective unconscious is similarly bequeathed to us. It never came from our current environment. It is the part of the mind that is determined by heredity. So we inherit, as part of our humanity, a collective unconscious; the mind is pre-figured by evolution just as the body is. The individual is linked to the past of the whole species and the long stretch of evolution of the organism. Jung thus placed the psyche within the evolutionary process.
What’s in the collective unconscious? Psychological archetypes. This idea of psychological archetypes is among Jung’s most important contributions to Western thought. It is an ancient idea, somewhat like Plato’s idea of Forms, the “patterns” in the divine mind that determine the form material objects will take; and the archetype is in all of us. The word “archetype” comes from the Greek “arche,” meaning first, and “type,” meaning imprint or pattern. Psychological archetypes are thus first prints, or patterns, that form the basic blueprint for the major dynamic components of the human personality. For Jung, archetypes pre-exist in the collective unconscious of humanity. They repeat themselves eternally in the psyches of human beings, and they determine how we both perceive and behave. These patterns are inborn within us. They are part of our inheritance as human beings. They reside as energy within the collective unconscious and are part of the psychological life of all peoples everywhere at all times. They are inside us and they are outside us. We can meet them by going inward to our dreams or fantasies. We can meet them by going outward to our myths, legends, literature and religions. The archetype can be a pattern, such as a kind of story. Or it can be a figure, such as a kind of character.
In her book Awakening the Heroes Within, Carol Pearson identifies twelve archetypes that are fairly easy to understand. These are the Innocent, the Orphan, the Warrior, the Caregiver, the Seeker, the Destroyer, the Lover, the Creator, the Ruler, the Magician, the Sage, and the Fool. If we look at art, literature, mythology and the media, we can easily identify some of these patterns. One familiar pattern in contemporary Western culture is the Warrior. We find the Warrior myth encoded in all the great heroes who ever took on the dragon, stood up to the tyrant, fought the sorcerer, or did battle with the monster, and in so doing rescued themselves and others. The true Warrior is not merely aggressive. The aggressive man (or woman) fights to feel superior to others, to keep them down; the Warrior fights to protect and ennoble others. The Warrior protects the perimeters of the castle, or the family, or the psyche. The Warrior’s myth is active in each of us any time we stand up against unfair authority, be it a boss, teacher or parent. The highest-level Warrior has at some time confronted his or her own inner dragons. We see the Warrior archetype in the form of pagan deities, for example the Greek god of war, Mars. David, who fights Goliath, and Michael, who casts Satan out of Heaven, are familiar Biblical warriors. Hercules, Xena (warrior princess) and Conan the Barbarian are more contemporary media forms the Warrior takes. And it is in this wide historical variety that we can find an important point about the archetype: it really is unconscious. The archetype is like the invisible man in the famous story. In the story, a man invents a potion that, when ingested, renders him invisible. He becomes visible only when he puts on clothes. The archetype is like this. It remains invisible until it puts on the garments of its particular culture: in the Middle Ages this was King Arthur; in modern America, it may be Luke Skywalker.
But if the archetype were not a universal pattern imprinted on our collective psyche, we would not be able to recognize it over and over. The love goddess is another familiar archetypal pattern. Aphrodite to the Greeks, Venus to the Romans, she now appears in the form of familiar models in magazines like Elle and Vanity Fair. And whereas in ancient Greece her place of worship was the temple, today it is the movie theatre and the cosmetics counter at Nordstrom’s. The archetype remains; the garments it dons are those of its particular time and place.
This brings us to our discussion of the Shadow as archetype. The clearest and most articulate discussion of this subject is contained in Johnson’s book Owning Your Own Shadow. The Shadow is not a difficult concept. It is merely the “dark side” of the psyche, everything that doesn’t fit into the Persona. The word “persona” comes from the theatre. In the Roman theatre, characters would put on a mask that represented who the character was in the drama. The word “persona” literally means “mask.” Johnson says that the persona is how we would like to be seen by the world, a kind of psychological clothing that “mediates between our true selves and our environment” in much the same way that clothing gives an image. The Shadow is what doesn’t fit into this Persona. These “refused and unacceptable” characteristics don’t go away; they are stuffed or repressed and can, if unattended to, begin to take on a life of their own. One Jungian likens the process to that of filling a bag. We learn at a very young age that there are certain ways of thinking, being and relating that are not acceptable in our culture, and so we stuff them into the shadow bag. In our Western culture, particularly in the United States, thoughts about sex are among the most unacceptable, and so sex gets stuffed into the bag. The shadow side of sexuality is quite evident in our culture in the form of pornography, prostitution, and topless bars. Psychic energy that is not dealt with in a healthy way takes a dark or shadow form and begins to take on a life of its own. As children our bag is fairly small, but as we get older, it becomes larger and more difficult to drag.
It is not difficult, therefore, to see that there is a shadow side to the archetypes discussed earlier. The shadow side of the Warrior is the tyrant, the villain, the Darth Vader, who uses his or her skills for power and ego enhancement. And whereas the Seeker archetype quests after truth and purity, the shadow Seeker is controlled by pride, ambition, and addictions. If the Lover follows his or her bliss, commits and bonds, the shadow Lover signifies a seducer, a sex addict or, interestingly enough, a puritan.
But we can use the term “shadow” in a more general sense. It is not merely the dark side of a particular archetypal pattern or form. Wherever Persona is, Shadow is also. Wherever good is, evil is also. We first know the shadow through the personal unconscious, in all that we abhor, deny and repress: power, greed, cruel and murderous thoughts, unacceptable impulses, morally and ethically wrong actions. All the demonic things by which human beings betray their inhumanity to other beings are shadow. Shadow is unconscious. This is a very important idea. Since it is unconscious, we know it only indirectly, through projection, just as we know the other archetypes of Warrior, Seeker and Lover. We encounter the shadow in the people, things, and places where we project it. The scapegoat is a perfect example of shadow projection. The Nazis’ projection of the shadow onto the Jews gives us some insight into how powerful and horrific the archetype is. Jung says that when you are in the grip of the archetype, you don’t have it; it has you.
This idea of projection raises an interesting point. It means that the shadow stuff isn’t “out there” at all; it is really “in here” – that is, inside us. We only know it is inside us because we see it outside. Shadow projections have a fateful attraction for us. It seems that we have discovered where the bad stuff really is: in him, in her, in that place, there! There it is! We have found the beast, the demon, the bad guy. But does evil really exist, or is what we see as evil merely a projection of our own shadow side? Jung would say that there really is such a thing as evil, but that most of what we see as evil, particularly collectively, is shadow projection. The difficulty is separating the two. And we can only do that when we discover where the projection ends. Hence Johnson’s book title, Owning Your Own Shadow.
Amid all the talk about the "Collective Unconscious" and other sexy issues, most readers are likely to miss the fact that C.G. Jung was a good Kantian. His famous theory of Synchronicity, "an acausal connecting principle," is based on Kant's distinction between phenomena and things-in-themselves, and on Kant's theory that causality will not operate among things-in-themselves the way it does in phenomena. Thus, Kant could allow for free will (unconditioned causes) among things-in-themselves, as Jung allows for synchronicity ("meaningful coincidences"). Next to Kant, Jung is closest to Schopenhauer, praising him as the first philosopher he had read "who had the courage to see that all was not for the best in the fundaments of the universe" [Memories, Dreams, Reflections, p. 69]. Jung was probably unaware of the Friesian background of Otto's term "numinosity" when he began to use it for his Archetypes, but it is unlikely that he would object to the way in which Otto's theory, through Fries, fits into Kantian epistemology and metaphysics.
Jung's place in the Kant-Friesian tradition is on a side that would have been distasteful to Kant, Fries, and Nelson, whose systems were basically rationalistic. Thus Kant saw religion as properly a rational expression of morality, and Fries and Nelson, although allowing religion an aesthetic content different from morality, nevertheless did not expect religion to embody much more than good morality and good art. Schopenhauer, Otto, and Jung all represent an awareness that there is more to religion and to human psychological life than this. The terrifying, uncanny, and fascinating elements of religion and ordinary life are beneath the notice of Kant, Fries, and Nelson, while for Schopenhauer, Otto, and Jung they are indisputable and irreducible elements of life, for which there must be an account. As Jung again said of Schopenhauer: "He was the first to speak of the suffering of the world, which visibly and glaringly surrounds us, and of confusion, passion, evil - all those things that the others hardly seemed to notice and always tried to resolve into all-embracing harmony and comprehensibility." It is an awareness of this aspect of the world that renders the religious idea of "salvation" meaningful; yet "salvation" as such is always missing from moralistic or aesthetic renderings of religion. Only Jung could have written his Answer to Job.
Jung's great Answer to Job, indeed, represents an approach to religion that is all but unique. Placing God in the Unconscious might strike most people as reducing him to a mere psychological object; but that is to overlook Jung's Kantianism. The unconscious, and especially the Collective Unconscious, belongs to Kantian things-in-themselves, or to the transcendent Will of Schopenhauer. Jung was often at pains not to complicate his theory of the Archetypes by committing himself to a metaphysical theory - he wanted the theory to work whether he was talking about the brain or about the Transcendent - but that was merely a concession to the materialistic bias of contemporary science. He had no materialistic commitment himself and, when it came down to it, was not going to accept such naive reductionism. Instead, he was willing to rethink how the Transcendent might operate. Thus, he says about Schopenhauer: "I felt sure that by 'Will' he really meant God, the Creator, and that he was saying that God was blind. Since I knew from experience that God was not offended by any blasphemy, that on the contrary He could even encourage it because He wished to evoke not only man's bright and positive side but also his darkness and ungodliness, Schopenhauer's view did not distress me."
The Problem of Evil, which for so many people simply discredits religion, and which Schopenhauer used to reject the value of the world, became a challenge for Jung in the psychoanalysis of God. The God of the Bible is indeed a personality, and seemingly not always the same one. God as a morally evolving personality is the extraordinary conception of Answer to Job. What Otto saw as the evolution of human moral consciousness, Jung turns right around, on the basis of the principle that the human unconscious, expressed spontaneously in religious practice and literature, transcends mere human subjectivity. But the transcendent reality in the unconscious is different in kind from consciousness. As Jung said in Memories, Dreams, Reflections again: "If the Creator were conscious of Himself, He would not need conscious creatures; nor is it probable that the extremely indirect methods of creation, which squander millions of years upon the development of countless species and creatures, are the outcome of purposeful intention. Natural history tells us of a haphazard and casual transformation of species over hundreds of millions of years of devouring and being devoured. The biological and political history of man is an elaborate repetition of the same thing. But the history of the mind offers a different picture. Here the miracle of reflecting consciousness intervenes - the second cosmogony [ed. note: what Teilhard de Chardin called the origin of the "noosphere," the layer of "mind"]. The importance of consciousness is so great that one cannot help suspecting the element of meaning to be concealed somewhere within all the monstrous, apparently senseless biological turmoil, and that the road to its manifestation was ultimately found on the level of warm-blooded vertebrates possessed of a differentiated brain - found as if by chance, unintended and unforeseen, and yet somehow sensed, felt and groped for out of some dark urge."
In other words, a "meaningful coincidence." Jung also says, As far as we can discern, the sole purpose of human existence is to kindle a light in the darkness of mere being. It may even be assumed that just as the unconscious affects us, so the increase in our consciousness affects the unconscious.
However, Jung has missed something there. If consciousness is "the light in the darkness of mere being," consciousness alone cannot be the "sole purpose of human existence," since consciousness as such could appear as just a place of "mere being" and so would easily become an empty, absurd, and meaningless Existentialist existence. Instead, consciousness allows for the meaningful instantiation of existence, both through Jung's process of Individuation, by which the Archetypes are given unique expression in a specific human life, and from the historic process that Jung examines in Answer to Job, by which interaction with the unconscious alters in turn the Archetypes that come to be instantiated. While Otto could understand Job's reaction to God, as the incomprehensible Numen, Jung thinks of God's reaction to Job, as an innocent and righteous man jerked around by God's unconsciousness. Jung's idea that the Incarnation then is the means by which God redeems Himself from His morally false position in Job is an extraordinary reversal (I hesitate to say "deconstruction") of the consciously expressed dogma that the Incarnation is to redeem humanity.
It is not too difficult to see this turn in other religions. The compassion of the Buddhas in Mahâyâna Buddhism, especially when the Buddha Shakyamuni comes to be seen as the expression of a cosmic and eternal Dharma Body, is a hand of salvation stretched out from the Transcendent, without, however, the complication that the Buddha is ever thought responsible for the nature of the world and its evils as their Creator. That complication, however, does occur with Hindu views of the divine Incarnations of Vishnu. Closer to a Jungian synthesis, on the other hand, is the Bahá'í theory that divine contact is through "Manifestations," which are neither wholly human nor wholly divine: merely human in relation to God, but entirely divine in relation to other humans. Such a theory must appear Christianizing in comparison to Islam, but it avoids the uniqueness of Christ as the only Incarnation in Christianity itself. This is conformable to the Jungian proposition that the unconscious is both a side of the human mind and a door into the Transcendent. When that door opens, the expression of the Transcendent is then conditioned by the person through whom it is expressed, possessing that person, but it is also genuinely Transcendent, reflecting the ongoing interaction that the person historically embodies. The possible "mere being" even of consciousness then becomes the place of meaning and value.
Whether "psychoanalysis" as practised by Freud or Jung is to be taken seriously as science or medicine may well be questioned; however, both men will survive as philosophers long after their claims to science or medicine may be discounted. Jung's Kantianism enables him to avoid the materialism and reductionism of Freud ("all of civilization is a substitute for incest") and, with a great breadth of learning, to employ principles from Kant, Schopenhauer, and Otto that are easily conformable to the Kant-Friesian tradition. The Answer to Job, indeed, represents a considerable advance beyond Otto, into the real paradoxes that are the only way we can conceive transcendent reality.
In the state of Cosmic Consciousness, an individual has developed a keen awareness of his or her own mental states and activities, and those of others. This individual is aware of a very distinct "I" personality that empowers the individual with a powerful expression of the "I am," one that is not swayed or moved by the external impressions of the trifling mental states of others. This individual stands on a "rock solid" foundation that is not easily understood by the common mind. Cosmic Consciousness is void of the "superficial" ego.
The existence of the conscious "I" and the "Subconscious Mind" on the Mental Plane is a manifestation of the seventh Hermetic principle, the Principle of Gender. Every human, male and female, is composed of the Masculine and Feminine aspects of Mind on the Mental Plane. Each male has his female element, and each female has her male element of Mental Gender, from which the creation of all thoughts proceeds. The "I" is the masculine aspect of Mind, and the Subconscious Mind the feminine. The Principle of Gender manifests itself as male and female in all species of Life and Being, making the sexual reproduction and multiplication of the species possible on the Great Physical Plane. The phenomena of this principle can be found in all three great groups of life manifestations: the Spiritual, Mental, and Physical planes of Life and Being.
On the Physical Plane, its role is recognized as sexual reproduction, while on the higher planes it takes on higher, subtler functions of Mental and Spiritual Gender. Its role is always in the direction of reproduction, generation and regeneration. The Masculine and Feminine principles are always present and active in all phases of phenomena and on every plane of Life. An understanding of the manifesting power of this Principle will give us a greater understanding of ourselves and an awareness of the enormous latent power awaiting to be tapped.
The spiritually developed individual, the person who becomes aware of and recognizes the conscious "I," or "I am," within, will be able to exert his or her will upon the subconscious mind with definite causation and purpose. The recognition and awareness of the "I" will enable a person to expand his or her mind into regions of consciousness that are unthinkable to the societally conditioned thinking process of the world community.
True Spiritual, or Mental, development enables the sharpening of the five bodily senses, enhancing the richness of Life as our minds are allowed to expand into advanced Spiritual knowledge - knowledge that will enable the proper use of the five wonderful bodily senses as they report to us the external world, from which we derive information to store in the memory banks of the brain and create a knowledge base of experience. The greater the Conscious awareness, the more acute the bodily senses become. At the same time, the lesser the Conscious awareness (the nonmaterial sixth sense), the less acute the five bodily senses become, and much of our external world is not even acknowledged. This difference of mental states is most likely the cause of debate between religious and scientific circles.
The "I" Consciousness in each human is the true "Higher Self." The "Higher Self" of each human exists as a constantly moving whirlpool of Cosmic Consciousness, an eddy in the Infinite Spirit of "The all," which manifests LIFE in all of us and in all living entities of the lower and higher planes. The "I" within all of us, being a part of that Mind yet not separated from it, is the instrument of the conscious "I." It is Eternal and indestructible; mortality and Immortality are not an issue in its existence. There is no force in existence capable of destroying the "I." This "I," or "Higher Self," is the SOUL of the Soul and is holographically connected to The all, giving the powerful "I" the Image of its Creator. All of us are created in the image of GOD, without any exceptions or exclusions, and none can escape its Omnipresent Infinite Living Mind. The all is the Ruler of all fate, or destiny, in all peoples, nations, governments, religious institutions, suns, worlds, galaxies, planes, dimensions, and Universes. All are subject to its Will and Efforts, and its Law keeps all things in relationship to their Source. There is no "existence" outside of The all.
When the particular "I" is consciously recognized within ourselves, the "Will" of "I" is powerfully exerted upon the Subconscious Mind, giving the Subconscious Mind purpose and a sense of direction in Life. The Mind is the instrument by which the conscious "I" pries open the many deep, and hidden secrets of Nature.
To advance, each individual must initiate the effort of learning the deep secrets of his or her nature, setting aside all the trifling efforts of self-condemnation, low self-esteem, and hurts in daily living that are caused by the ignorant brainwashing of societal conditioning and by self-inflicted wounds. All the brainwashing and imagined hurts that we experience in our lives are lessons by which to overcome these obstacles and to learn and recognize the powerful "I."
Only the person who created the negative state of Mind can eliminate it, by making a fundamental change in the way they think and in what is held in their thoughts, and by allowing themselves the Spiritual education that is needed for advancement. There is no red-carpet treatment or royal road to accomplishing this. It takes will, desire, diligent effort, and perseverance in cultivating this knowledge. The resulting rewards of this attainment will far exceed the greatest worldly rewards known to humanity.
Most people fail to recognize this reality and they will unconsciously and painfully race through Life from cradle to grave and not even experience a momentary glimpse of this great Truth.
The "I," when recognized in a conscious and deliberate manner, will enable a person to accomplish things in Life that are limited only by his or her own imagination. The educators, scientists, engineers, and leaders who make up the smaller percentage of the world population have to a degree recognized this "I" within themselves, mostly in an unconscious manner; nevertheless, many have accomplished successful professional careers. They have achieved a mental focus on a subject (or object) that escapes the ability of most people, giving them a sense of direction and a meaningful purpose in society. Every human is capable of accomplishing this, if they will only learn to focus and concentrate on one subject at a time.
When the will of the "I" is utilized and exerted in an unrecognized and unconscious manner, it becomes misused and abused, bringing misery to the individual and to others around him or her. Often this reality is seen in the workplace, particularly among persons in positions of authority, such as supervisors, managers, and directors, who bring misery to themselves and to their workers because of the powerful will of the unrecognized "I" or "I am." This causes a lack of harmony in a corporate or company structure and at times brings chaos to the organization when enough of these types of individuals are employed in one place. Teamwork becomes a laborious effort as competition between employees becomes its theme, causing discontent and thus reducing the efficiency of the corporate environment. There is strength in numbers, either positive or negative. The realm of Spirit affects all levels of our society.
When the human Mind learns to become focussed on a single object or subject at a time, without wandering, excluding all other objects and subjects waiting in line, the Mind is capable of gathering previously unknown energy and information about that subject or object. The entire world of that person seems to revolve in such a manner as to bring them information from the unknown regions of the Mind. This is true meditation: to gather information about the unknown while in a focussed, meditative state of Mind. Each true meditation should bring a person information that will cause his or her Mind to expand with Knowledge, especially when the focal point of concentration is that of Spirit. A person who learns to master this mental art will find that the proper books manifest in their Life and bring them the missing pieces of Life's puzzle - books that draw the individual's attention to a given subject; and when the new knowledge is applied, the Mind is allowed to expand further upon the subject, gathering additional information and increasing the knowledge base, causing further advancement for others as well.
The mental art of concentration, employing the exertion of the will and the creation of desire upon a given subject or object, is very rare, because the lazy human Mind is content with wandering through Life. The untrained average human Mind wanders constantly and rapidly from one subject or object to another, unable to focus on a single subject because of the constant carousel of external impressions from the surrounding material world. Like a wild monkey jumping about, it is never able to pause for a moment, to concentrate, and to focalize long enough to allow the Mind to gather information about a given subject or object. This is what thinking is: allowing the Mind to gather information about the unknown. When this is disallowed, a person will wander aimlessly through Life and maintain an ignorant state of Mind.
Wandering aimlessly through Life is a dangerous mental state to maintain, because of the danger that other minds, with stronger wills and efforts, will manipulate the person who has not taken responsibility for the discipline and control of their own mind. A person who takes no such responsibility is more prone to a wandering mind, having no control over Life's destiny because of the lack of focus and direction in Life. It can be compared with a rudderless ship that is constantly tossed by the rise and fall of the waves of the powerful ocean.
When the Mind becomes trained and learns to concentrate and focalize on a single object or subject at a time, that state of Mind will bring the individual Universal Knowledge and Wisdom. This is how genius is created: by applying the mental art of concentration and focalizing on any worthwhile subject. The famous theories and hypotheses come into being - Einstein's theory of relativity, man's ability to fly through the air, space travel, and so on - by applying the mental art of concentration. It is an unbending mental aspect of the human Mind as it continues to expand and gather ever more information about all known and unknown subjects and objects, constantly causing change and advancement in Spirituality and technology. Unbiased Spiritual Wisdom enables the proper use of technology and is the catalyst for its increasingly rapid advancement. It may be difficult to conceive that Spirituality and technology go hand in hand, but they do; the lack of Spiritual Wisdom will dampen the infinite possibilities because of a limited, diminutive belief system.
Technology ends where the mortal barrier begins; then it becomes a necessity to look into the realm of Spirit in order to continue human evolution. Without the continuous advancement of evolution, this civilization will dissolve and perish off the face of the earth, like the many previous civilizations before us. The mortal barrier begins when science and technology reach the limitation of the atomic and sub-atomic particles, and a quantum leap into the realm of the Waveform (Spirit) becomes a necessity in order to continue upward progress.
When a person learns to find a quiet moment in their Life in which to become mentally focussed and centred on their profession, job, Spirituality, or whatever the endeavour, they will find the answers and renewed energy to solve problems and create new knowledge and ideas.
When a person (no matter who) learns to focus and concentrate on Spirit, their Mind will gather from their Cosmic Consciousness the deepest secrets of the Universe: how it is composed, by what means, and to what end. But the enigma of the deepest inner secret Nature of The All, or God, will always remain unknowable to us, by reason of its Infinite stature, to which no human qualities can, or should, ever be ascribed.
There is more to be said on the subject of the powerful "I" consciousness - the "I Am," the "Higher Self" - which is each one of us.
In what could turn out to be one of the most important discoveries in cognitive studies of our decade, it has been found that there are five million magnetite crystals per gram in the human brain. Interestingly, the meninges (the membranes that envelop the brain) contain twenty times that number. These ‘biomagnetite' crystals demonstrate two interesting features. The first is that their shapes do not occur in nature, suggesting that they were formed in the tissue rather than being absorbed from outside. The other is that these crystals appear to be oriented so as to maximize their magnetic moment, which tends to give groups of these crystals the capacity to act as a system. The brain has also been found to emit very low intensity magnetic fields, a phenomenon that forms the basis of a whole diagnostic field, magnetoencephalography.
Unfortunately for the present discussion, there is at present no way to ‘read' any signals that might be carried by the brain's magnetic emissions. We expect that subtle enough means of detecting such signals will eventually appear, as there is compelling evidence that they exist and constitute a means of communication between various parts of the brain. This system, we speculate, is what selects which neural areas to recruit, so that States (of consciousness) can elicit the appropriate phenomenological, behavioural, and affective responses.
While there have been many studies that have examined the effects of magnetic fields on human consciousness, none have yielded findings more germane to understanding the role of neuromagnetic signalling than the work of the Laurentian University Behavioural Neuroscience group. They have pursued a course of experiments that rely on stimulating the brain, especially the temporal lobes, with complex low intensity magnetic signals. It turns out that different signals produce different phenomena.
One example of such phenomena is vestibular sensation, in which one's normal sense of balance is replaced by illusions of motion similar to the feelings of levitation reported in spiritual literature, as well as the sensation of vertigo. Transient ‘visions', whose content includes motifs that also appear in near-death experiences and alien abduction scenarios, have also appeared. Paresthesias (electric-like buzzes in the body) have occurred. Another experience that has been elicited neuromagnetically is a burst of emotion, most commonly of fear or joy. Although the content of these experiences can be quite striking, the way they present themselves is much more ordinary. It approximates the ‘twilight state' between waking and sleep called hypnogogia, which can produce brief, fleeting visions; feelings that the bed is moving, rocking, floating, or sinking; electric-buzz-like somatic sensations; and hearing an inner voice call one's name. The range of experiences it can produce is quite broad. If all signals produced the same phenomena, it would be difficult to conclude that these applied magnetic signals approximate the postulated endogenous neuromagnetic signals that create alterations in State. In fact, they produce a wide variety of phenomena. One signal makes some women apprehensive, but another doesn't. One signal creates such strong vestibular sensations that one can't stand up; another doesn't.
The temporal lobes are the parts of the brain that mediate states of consciousness. EEG readouts from the temporal lobes are markedly different when a person is asleep, having a hallucinatory seizure, or on LSD. Seizure disorders confined to the temporal lobes (complex partial seizures) have been characterized as impairments of consciousness. There was also a study in which monkeys were given LSD after having various parts of their brains removed. The monkeys continued to ‘trip' no matter what part or parts of their brains were missing, until both temporal lobes were taken out; in these cases, the substance did not seem to affect the monkeys at all. The conclusion seems unavoidable: in addition to all their other functions (aspects of memory, language, music, etc.), the temporal lobes mediate states of consciousness.
If exposing the temporal lobes to magnetic signals can induce alterations in States, then it seems reasonable to suppose that States find part of their neural basis in our postulated neuromagnetic signals, arising out of the temporal lobes.
Hallucinations are known to be the phenomenological correlates of altered States. An alteration in state of consciousness comes first, following input, and phenomena, whether hallucinatory or not, follow in response. We can offer two reasons for drawing this conclusion.
The first is one of the results obtained by a study of hallucinations caused by electrical stimulation deep in the brain. In this study, the content of the hallucinations was found to be related to the circumstances in which they occurred, so that the same stimulations could produce different hallucinations. The conclusion was that the stimulation induced altered states, and the states facilitated the hallucinations.
The second has to do with the relative speeds of the operant neural processes.
Neurochemical response times are limited by the time required for transmission across the synaptic gap: 0.5 to 2 msec.
By comparison, the propagation of action potentials is much faster. For example, an action potential can travel a full centimetre (many orders of magnitude larger than a synaptic gap) in about 1.3 msec. The brain's electrical responses, therefore, happen far more quickly than do its chemical ones.
Magnetic signals are propagated at greater speeds than action potentials moving through neurons. Contemporary physics requires that magnetic signals be propagated at a significant fraction of the velocity of light, so that the entire brain could be exposed to a neuromagnetic signal in a vanishingly small amount of time.
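The timing claims in the last three paragraphs can be put side by side with a rough back-of-the-envelope calculation. The synaptic delay range and the 1 cm / 1.3 msec action-potential example are taken from the text; the "significant fraction of c" value and the head-width figure are illustrative assumptions:

```python
# Rough comparison of the three signalling timescales discussed above.
# The synaptic delay range and the 1 cm in 1.3 msec action-potential
# example come from the text; the fraction of light speed and the
# head-width figure are illustrative assumptions.

synaptic_delay_ms = (0.5, 2.0)     # chemical transmission across one synapse

ap_distance_m = 0.01               # 1 centimetre, the example in the text
ap_time_ms = 1.3
ap_speed_m_per_s = ap_distance_m / (ap_time_ms / 1000.0)   # ~7.7 m/s

speed_of_light = 3.0e8             # m/s
fraction_of_c = 0.1                # assumed "significant fraction" of c
brain_width_m = 0.15               # rough width of a human head (assumption)

magnetic_time_ms = brain_width_m / (fraction_of_c * speed_of_light) * 1000.0

print(f"action-potential speed: {ap_speed_m_per_s:.1f} m/s")
print(f"magnetic signal crossing the whole brain: {magnetic_time_ms:.1e} msec")
```

Even under these conservative assumptions, the magnetic crossing time comes out on the order of nanoseconds, millions of times shorter than a single synaptic delay, which is the sense in which the whole brain could be exposed to such a signal in a vanishingly small amount of time.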
It seems possible that neuromagnetic signals arise from structures that mediate our various sensory and cognitive modalities. These signals then recruit those functions (primarily in the limbic system) that adjust the changes in state. These temporal lobe signals, we speculate, then initiate signals to structures that mediate modalities that are enhanced or suppressed as the state changes.
The problem of defining the phrase ‘state of consciousness' has plagued the field of cognitive studies for some time. Without going into the history of studies in the area, we would like to outline a hypothesis concerning states of consciousness in which the management of states gives rise to the phenomenon of consciousness.
There are theories that suggest that cognitive modalities (such as memory, affect, ideation and attention) may be seen as analogs to sensory modalities.
We hypothesize that the entire set of modalities, cognitive and sensory, may be heuristically compared with a sound mixing board. In this metaphor, all the various modalities are represented as vertical rheostats, with enhanced functioning increasing towards the top and suppressed functioning increasing toward the bottom. Further, the act of becoming conscious of phenomena in any given modality involves the adjustment of that modality's ‘rheostat'.
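As a loose illustration only, the mixing-board metaphor can be sketched in code. The modality names and numeric levels below are invented for the example, not taken from the text:

```python
# Sketch of the "mixing board" metaphor: each modality is a vertical
# rheostat whose level runs from 0.0 (fully suppressed) to 1.0 (fully
# enhanced). Modality names and levels are illustrative assumptions.

MODALITIES = ["vision", "smell", "introspection", "arousal", "attention"]

def neutral_board():
    """All rheostats at a mid-level, 'normal waking' setting."""
    return {m: 0.5 for m in MODALITIES}

def set_level(board, modality, level):
    """Adjusting one rheostat corresponds to becoming conscious of
    phenomena in that modality; levels are clamped to [0.0, 1.0]."""
    board[modality] = max(0.0, min(1.0, level))
    return board

board = neutral_board()
set_level(board, "attention", 0.9)   # attention enhanced
set_level(board, "smell", 0.1)       # smell suppressed
print(board)
```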
Sensory input from any modality can alter one's state. The sight of a sexy person, the smell of fire, the unexpected sensation of movement against one's skin (there's a bug on me!), a sudden bitter taste experienced while eating ice cream, or the sound of one's child screaming in pain; all of these phenomena can induce alterations in State. Although the phrase ‘altered states' has come to be associated with dramatic, otherworldly experiences, alterations in state, as we will be using the phrase, refer primarily to those alterations that take us from one normal state to another.
Alterations in state can create changes within the various sensory and cognitive modalities. An increase in arousal following the sight of a predator will typically suppress the sense of smell (very few are able to stop and ‘smell the roses' while a jaguar is chasing them), suppress introspection (nobody wants to know ‘who I really am?' while an anaconda is wrapping itself around them), suppress sexual arousal, and alter vision so that the centre of the visual field is better attended than one's peripheral vision, allowing one to see the predator's movements better. The sight of a predator will also introduce a host of other changes, all of which reflect the State.
In the Hindu epic, the Mahabharata, there is a dialogue between the legendary warrior, Arjuna, and his archery teacher. Arjuna was told by his teacher to train his bow on a straw bird used as a target. Arjuna was asked to describe the bird. He answered ‘I can't'. ‘Why not?', asked his teacher. ‘I can only see its eye', he answered. ‘Release your arrow', commanded the teacher. Arjuna did, and hit the target in the eye. ‘I'll make you the finest archer in the world', said his teacher.
In this story, attention to peripheral vision had ceased so completely that only the very centre of his visual field received any. Our model of states would be constrained to interpret Arjuna's (mythical) feat as a behaviour specific to a state. The unique combination of sensory enhancement, heightened attention, and sufficient suppression of emotion, ideation, and introspection that support such an act suggests specific settings for our metaphorical rheostats.
Changes in state cause changes in sensory and cognitive modalities, and they, in turn, trigger changes in state. We can reasonably conclude that there is a feedback mechanism whereby each modality is connected to the others.
States also create tendencies to behave in specific ways in specific circumstances, maximizing the adaptivity of behaviour in those circumstances; behaviour that tends to meet our needs and respond to threats to our ability to meet those needs.
Each circumstance adjusts each modality's setting, tending to maximize that modality's contribution to adaptive behaviour in that circumstance. The mechanism may function by using both learned and inherited default settings for each circumstance and then repeating those settings in similar circumstances later on. Sadly, this often makes states maladaptive. Habitual alterations in State in response to threats from an abusive parent, for example, can make for self-defeating responses to stress in other circumstances, where these same responses are no longer advantageous.
Because different States are going to be dominated by specific combinations of modalities, it makes sense that a possible strategy for aligning the rheostats (making alterations in state) is to move them in tandem, so that after a person associates the sound of a scream with the concept of a threat, that sound, with its unique auditory signature, will cause all the affected modalities (most likely most of them in most cases) to take the positions they had at the time the association was made.
When we say changing states, we are referring to much more than the dramatic states created by LSD, isolation tanks, REM sleep, etc. We are also including normal states of consciousness, which we can imagine as kindled ‘default settings' of our various modalities. When any one modality returns to one of its default settings, it will, we conjecture, tend to entrain all the other modalities to the settings they habitually take in that state.
To accomplish this, we must suggest that each modality be connected to every other one. A sight, a smell, a sound, or a tactile feeling can all inspire fear. Fear can motivate ideation. Ideation can inspire arousal. Changes in affect can initiate alterations in introspection. Introspection alters affect. State-specific settings of individual modalities could initiate settings for other modalities.
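The entrainment idea in the last two paragraphs can be sketched the same way: a State stores a default setting for every modality, and when input snaps one modality to its default for that State, the rest follow. The ‘fear' settings below are invented for the example:

```python
# Sketch of state entrainment: a State is a full set of default
# rheostat settings; when one modality takes its state-specific
# value, every other modality is entrained to its own default.
# The modality names and numeric settings are illustrative assumptions.

FEAR_STATE = {
    "vision": 0.9,          # centre of visual field enhanced
    "smell": 0.1,           # suppressed
    "introspection": 0.1,   # suppressed
    "arousal": 0.95,        # strongly enhanced
}

def entrain(board, state_defaults, trigger_modality):
    """The trigger modality snaps to its default for this state,
    then pulls every other modality to its own default."""
    board[trigger_modality] = state_defaults[trigger_modality]
    for modality, default in state_defaults.items():
        board[modality] = default
    return board

board = {m: 0.5 for m in FEAR_STATE}    # neutral starting settings
entrain(board, FEAR_STATE, "arousal")   # a scream resets arousal first
print(board)
```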
Our main hypothesis here is that all these intermodal connections, operating as a single system, have a single phenomenological correlate: the phenomenon of subjective awareness.
The structures associated with that modality then broadcast neuromagnetic signals to the temporal lobes, which in turn produce signals that recruit various structures throughout the brain - specifically, those structures whose associated modalities' values must be changed in order to accomplish the appropriate alteration in state. In the second section, we raised the possibility that states are settings for the variable aspects of cognitive and sensory modalities. We also offered the suggestion that consciousness is the phenomenological correlate of the feedback between the management of states, on the one hand, and the various cognitive and sensory modalities, on the other. If all of these conclusions were to stand up to testing, we could conclude that the content of the brain's hypothesized endogenous magnetic signals might consist of a set of values for adjusting each sensory and cognitive rheostat. We might also conclude that neuromagnetic signalling is the context in which consciousness occurs.
The specific mechanism whereby subjectivity is generated is beyond the reach of this work. Nevertheless, the fact that multiple modalities are experienced simultaneously, together with our model's implication that they are ‘reset' all at once with each alteration in state, suggests that our postulated neuromagnetic signals may come in pairs, with the two signals running slightly out of phase with one another. In this way, neuromagnetic signals, like the two laser beams used to produce a hologram, might be able to store information in a similar way, as has already been explored by Karl Pribram. The speed at which neuromagnetic signals propagate, together with their capacity to recruit and alter multiple modalities, suggests that the underlying mechanism has been selected to make instant choices about which specific structures to recruit in order to facilitate the behaviours acted out in the State, and to do so quickly.
In this way, the onset time for the initiation of States is kept to a minimum, and with it, the times needed to make the initial, cognitive response to stimuli. When it comes to response to threats, or sighting prey, the evolutionary advantages are obvious.
Higher-order theories of consciousness try to explain the distinctive properties of consciousness in terms of some relation obtaining between the conscious state in question and a higher-order representation of some sort (either a higher-order experience of that state, or a higher-order thought or belief about it). The most challenging properties to explain are those involved in phenomenal consciousness - the sort of state that has a subjective dimension, which has ‘feel’, or which it is like something to undergo.
One of the advances made in recent years has been in distinguishing between different questions concerning consciousness. Not everyone agrees on quite which distinctions need to be drawn. But all agree that we should distinguish creature consciousness from mental-state consciousness. It is one thing to say of an individual or organism that it is conscious (either in general or of something in particular). It is quite another thing to say of one of the mental states of a creature that it is conscious.
It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose. There do not appear to be any deep philosophical difficulties lurking here (or at least, they are not difficulties specific to the topic of consciousness, as opposed to mentality in general). But to say of an organism that it is conscious of such-and-such (transitive) is normally to say at least that it is perceiving such-and-such, or aware of such-and-such. So to say of the mouse that it is conscious of the cat outside its hole, in explaining why it does not come out, is to say that it perceives the cat's presence. To provide an account of transitive creature-consciousness would thus be to attempt a theory of perception.
There is a choice to be made concerning transitive creature-consciousness, failure to notice which may be a potential source of confusion. For we have to decide whether the perceptual state in virtue of which an organism may be said to be transitively-conscious of something must itself be a conscious one (state-conscious). If we say ‘Yes' then we will need to know more about the mouse than merely that it perceives the cat if we are to be assured that it is conscious of the cat - we will need to establish that its percept of the cat is itself conscious. If we say ‘No', on the other hand, then the mouse's perception of the cat will be sufficient for the mouse to count as conscious of the cat, but we may have to say that although it is conscious of the cat, the mental state in virtue of which it is so conscious is not itself a conscious one! It may be best to by-pass any danger of confusion here by avoiding the language of transitive-creature-consciousness altogether. Nothing of importance would be lost to us by doing this. We can say simply that organism O observes or perceives x. We can then say, explicitly, if we wish, that its percept is or is not conscious.
Turning now to the notion of mental-state consciousness, the major distinction here is between phenomenal consciousness, on the one hand - which is a property of states that it is like something to be in, which have a distinctive ‘feel’ (Nagel, 1974) - and various functionally-definable forms of access consciousness, on the other. Most theorists believe that there are mental states - such as occurrent thoughts or judgments - which are access-conscious (in whatever is the correct functionally-definable sense), but which are not phenomenally conscious. In contrast, there is considerable dispute as to whether mental states can be phenomenally-conscious without also being conscious in the functionally-definable sense - and even more dispute about whether phenomenal consciousness can be reductively explained in functional and/or representational terms.
It seems plain that there is nothing deeply problematic about functionally-definable notions of mental-state consciousness, from a naturalistic perspective. For mental functions and mental representations are the staple fare of naturalistic accounts of the mind. But this leaves plenty of room for dispute about the form that the correct functional account should take. Some claim that for a state to be conscious in the relevant sense is for it to be poised to have an impact on the organism's decision-making processes, perhaps also with the additional requirement that those processes should be distinctively rational ones. Others think that the relevant requirement for access-consciousness is that the state should be suitably related to higher-order representations - experiences and/or beliefs - of that very state.
What is often thought to be naturalistically problematic, in contrast, is phenomenal consciousness. And what is really and deeply controversial is whether phenomenal consciousness can be explained in terms of some or other functionally-definable notion. Cognitive (or representational) theories maintain that it can. Higher-order cognitive theories maintain that phenomenal consciousness can be reductively explained in terms of representations (either experiences or beliefs) which are higher-order. Such theories concern us here.
Higher-order theories, like cognitive/representational theories in general, assume that the right level at which to seek an explanation of phenomenal consciousness is a cognitive one, providing an explanation in terms of some combination of causal role and intentional content. All such theories claim that phenomenal consciousness consists in a certain kind of intentional or representational content (analog or ‘fine-grained’ in comparison with any concepts we may possess) figuring in a certain distinctive position in the causal architecture of the mind. They must therefore maintain that these latter sorts of mental property do not already implicate or presuppose phenomenal consciousness. In fact, all cognitive accounts are united in rejecting the thesis that the very properties of mind or mentality already presuppose phenomenal consciousness, as proposed by Searle (1992, 1997) for example.
The major divide among representational theories of phenomenal consciousness in general is between accounts provided in purely first-order terms and those that implicate higher-order representations of one sort or another (see below). Higher-order theorists will allow that first-order accounts - of the sort defended by Dretske (1995) and Tye (1995), for example - can already make some progress with the problem of consciousness. According to first-order views, phenomenal consciousness consists in analog or fine-grained contents that are available to the first-order processes that guide thought and action. So a phenomenally-conscious percept of red, for example, consists in a state with the analog content red, tokened in such a way as to feed into thoughts about red, or into actions that are in one way or another guided by redness. The point to note in favour of such an account is that it can explain the natural temptation to think that phenomenal consciousness is in some sense ineffable, or indescribable. This will be because such states have fine-grained contents that can slip through the mesh of any conceptual net. We can always distinguish many more shades of red than we have concepts for, or could describe in language (other than indexically - e.g., ‘that shade').
The main motivation behind higher-order theories of consciousness, in contrast, derives from the belief that all (or at least most) mental-state types admit of both conscious and non-conscious varieties. Almost everyone now accepts, for example (post-Freud), that beliefs and desires can be activated non-consciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one's attention is directed to other tasks. Notice that appeal to non-conscious intentional states is now routine in cognitive science.) And then if we ask what makes the difference between a conscious and a non-conscious mental state, one natural answer is that conscious states are states of which we are aware. And if awareness is thought to be a form of creature-consciousness, then this will translate into the view that conscious states are states of which the subject is aware, or states of which the subject is creature-conscious. That is to say, these are states that are the objects of some sort of higher-order representation - whether a higher-order perception or experience, or a higher-order belief or thought.
One crucial question, then, is whether perceptual states as well as beliefs admit of both conscious and non-conscious varieties. Can there be, for example, such a thing as a non-conscious visual perceptual state? Higher-order theorists are united in thinking that there can. Armstrong (1968) uses the example of absent-minded driving to make the point. Most of us at some time have had the rather unnerving experience of ‘coming to' after having been driving on ‘automatic pilot' while our attention was directed elsewhere - perhaps having been day-dreaming or engaged in intense conversation with a passenger. We were apparently not consciously aware of any of the route we had recently taken, nor of any of the obstacles we avoided on the way. Yet we must surely have been seeing, or we would have crashed the car. Others have used the example of blindsight. This is a condition in which subjects have had a portion of their primary visual cortex destroyed, and apparently become blind in a region of their visual field as a result. But it has now been known for some time that if subjects are asked to guess at the properties of their ‘blind' field (e.g., whether it contains a horizontal or vertical grating, or whether it contains an ‘X' or an ‘O'), they prove remarkably accurate. Subjects can also reach out and grasp objects in their ‘blind' field with something like 80% or more of normal accuracy, and can catch a ball thrown from their ‘blind' side, all without conscious awareness.
More recently, a powerful case for the existence of non-conscious visual experience has been generated by the two-systems theory of vision proposed and defended by Milner and Goodale (1995). They review a wide variety of kinds of neurological and neuro-psychological evidence for the substantial independence of two distinct visual systems, instantiated in the temporal and parietal lobes respectively. They conclude that the parietal lobes provide a set of specialized semi-independent modules for the on-line visual control of action, while the temporal lobes are primarily concerned with subsequent off-line functioning, such as visual learning and object recognition. And only the experiences generated by the temporal-lobe system are phenomenally conscious, on their account.
(Note that this is not the familiar distinction between what and where visual systems, but is rather a successor to it. For the temporal-lobe system is supposed to have access both to property information and to spatial information. Instead, it is a distinction between a combined what-where system located in the temporal lobes and a how-to or action-guiding system located in the parietal lobes.)
To get the flavour of Milner and Goodale's hypothesis, consider just one strand from the wealth of evidence they provide. This is a neurological syndrome called visual form agnosia, which results from damage localized to both temporal lobes, leaving primary visual cortex and the parietal lobes intact. (Visual form agnosia is normally caused by carbon monoxide poisoning, for reasons that are little understood.) Such patients cannot recognize objects or shapes, and may be capable of little conscious visual experience; still, their sensorimotor abilities remain largely intact.
One particular patient has now been examined in considerable detail. While D.F. is severely agnosic, she is not completely lacking in conscious visual experience. Her capacities to perceive colours and textures are almost completely preserved. (Why just these sub-modules in her temporal cortex should have been spared is not known.) As a result, she can sometimes guess the identity of a presented object - recognizing a banana, say, from its yellow colour and the distinctive texture of its surface. But she is unable to perceive the shape of the banana (whether straight or curved, say) or its orientation (upright or horizontal). Yet many of her sensorimotor abilities are close to normal - she is able to reach out and grasp the banana, orienting her hand and wrist appropriately for its position and orientation, and using a normal and appropriate finger grip. Under experimental conditions it turns out that although D.F. is at chance in identifying the orientation of a broad line or letter box, she is almost normal when posting a letter through a similarly-shaped slot oriented at random angles. In the same way, although she is at chance when trying to discriminate between rectangular forms of very different sizes, her reaching and grasping behaviours when asked to pick up such a form are virtually indistinguishable from those of normal controls. It is very hard to make sense of this data without supposing that the sensorimotor perceptual system is functionally and anatomically distinct from the object-recognition/conscious system.
There is a powerful case, then, for thinking that there are non-conscious as well as conscious visual percepts. While the perceptions that ground your thoughts when you plan in relation to the perceived environment (‘I'll pick up that one’) may be conscious, and while you will continue to enjoy conscious perceptions of what you are doing while you act, the perceptual states that actually guide the details of your movements when you reach out and grab the object will not be conscious ones, if Milner and Goodale (1995) are correct.
But what implication does this have for phenomenal consciousness? Must these non-conscious percepts also be lacking in phenomenal properties? Most people think so. While it may be possible to get oneself to believe that the perceptions of the absent-minded car driver can remain phenomenally conscious (perhaps lying outside of the focus of attention, or being instantly forgotten), it is very hard to believe that either blindsight percepts or D.F.'s sensorimotor perceptual states might be phenomenally conscious ones. For these perceptions are ones to which the subjects of those states are blind, and of which they cannot be aware. And the question, then, is what makes the relevant difference? What is it about a conscious perception that renders it phenomenal, which a blindsight perceptual state would correspondingly lack? Higher-order theorists are united in thinking that the relevant difference consists in the presence of something higher-order in the first case that is absent in the second. The core intuition is that a phenomenally conscious state will be a state of which the subject is aware.
What options does a first-order theorist have to resist this conclusion? One is to deny the data: it can be said that the non-conscious states in question lack the kind of fineness of grain and richness of content necessary to count as genuinely perceptual states. On this view, the contrast discussed above isn't really a difference between conscious and non-conscious perceptions, but rather between conscious perceptions, on the one hand, and non-conscious belief-like states, on the other. Another option is to accept the distinction between conscious and non-conscious perceptions, and then to explain that distinction in first-order terms. It might be said, for example, that conscious perceptions are those that are available to belief and thought, whereas non-conscious ones are those that are available to guide movement. A final option is to bite the bullet, and insist that blindsight and sensorimotor perceptual states are indeed phenomenally conscious while not being access-conscious. On this account, blindsight percepts are phenomenally conscious states to which the subjects of those states are blind. Higher-order theorists will argue, of course, that none of these alternatives is acceptable.
In general, then, higher-order theories of phenomenal consciousness claim the following: A phenomenally conscious mental state is a mental state (of a certain sort - see below) which either is, or is disposed to be, the object of a higher-order representation of a certain sort. Higher-order theorists will allow, of course, that mental states can be targets of higher-order representation without being phenomenally conscious. For example, a belief can give rise to a higher-order belief without thereby being phenomenally conscious. What is distinctive of phenomenal consciousness is that the states in question should be perceptual or quasi-perceptual ones (e.g., visual images as well as visual percepts). Moreover, most cognitive/representational theorists will maintain that these states must possess a certain kind of analog (fine-grained) or non-conceptual intentional content. What makes perceptual states, mental images, bodily sensations, and emotions phenomenally conscious, on this approach, is that they are conscious states with analog or non-conceptual contents. So putting these points together, we get the view that phenomenally conscious states are those states that possess fine-grained intentional contents of which the subject is aware, being the target or potential target of some sort of higher-order representation.
There are then two main dimensions along which higher-order theorists disagree among themselves. One relates to whether the higher-order states in question are belief-like or perception-like. Those taking the former option are higher-order thought theorists, and those taking the latter are higher-order experience or ‘inner-sense’ theorists. The other disagreement is internal to higher-order thought approaches, and concerns whether the relevant relation between the first-order state and the higher-order thought is one of availability or not. That is, the question is whether a state is conscious by virtue of being disposed to give rise to a higher-order thought, or rather by virtue of being the actual target of such a thought. These are the options that will now concern us.
According to this view, humans not only have first-order non-conceptual and/or analog perceptions of states of their environments and bodies, they also have second-order non-conceptual and/or analog perceptions of their first-order states of perception. Humans (and perhaps other animals) not only have sense-organs that scan the environment/body to produce fine-grained representations that can then serve to ground thoughts and action-planning, but they also have inner senses, charged with scanning the outputs of the first-order senses (i.e., perceptual experiences) to produce equally fine-grained, but higher-order, representations of those outputs (i.e., to produce higher-order experiences). A version of this view was first proposed by the British Empiricist philosopher John Locke (1690). In our own time it has been defended especially by Armstrong.
(A terminological point: this view is sometimes called a ‘higher-order experience (HOE) theory’ of phenomenal consciousness; but the term ‘inner-sense theory’ is more accurate. For as we will see in section 5, there are versions of the higher-order thought (HOT) approach that also implicate higher-order perceptions, but without needing to appeal to any organs of inner sense.)
(Another terminological point: ‘inner-sense theory’ should more strictly be called ‘higher-order-sense theory’, since we of course have senses that are physically ‘inner’, such as pain-perception and internal touch-perception, which are not intended to fall under its scope. For these are first-order senses on a par with vision and hearing, differing only in that their purpose is to detect properties of the body rather than of the external world. According to the sort of higher-order theory under discussion in this section, these senses, too, will need to have their outputs scanned to produce higher-order analog contents in order for them to become phenomenally conscious. In what follows, however, the term ‘inner sense’ will be used to mean, more strictly, ‘higher-order sense’, since this terminology is now pretty firmly established.)
A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is in turn the target of a higher-order analog/non-conceptual intentional state, via the operations of a faculty of ‘inner sense’.
On this account, the difference between a phenomenally conscious percept of red and the sort of non-conscious percepts of red that guide the guesses of a blindsighter and the activity of the sensorimotor system, is as follows. The former is scanned by our inner senses to produce a higher-order analog state with the content experience of red or seems red, whereas the latter states are not - they remain merely first-order states with the analog content red. In so remaining, they lack any dimension of seeming or subjectivity. According to inner-sense theory, it is the higher-order experiences produced by the operations of our inner senses that make some mental states with analog contents, but not others, available to their subjects. And these same higher-order contents constitute the subjective dimension or ‘feel’ of the former set of states, thus rendering them phenomenally conscious.
One of the main advantages of inner-sense theory is that it can explain how it is possible for us to acquire purely recognitional concepts of experience. For if we possess higher-order perceptual contents, then it should be possible for us to learn to recognize the occurrence of our own perceptual states immediately - or ‘straight off’ - grounded in those higher-order analog contents. And this should be possible without those recognitional concepts thereby having any conceptual connections with our beliefs about the nature or content of the states recognized, nor with any of our surrounding mental concepts. This is then how inner-sense theory will claim to explain the familiar philosophical thought-experiments concerning one's own experiences, which are supposed to cause such problems for physicalist/naturalistic accounts of the mind.
For example, I can think, ‘This type of experience [as of red] might have occurred in me, or might normally occur in others, in the absence of any of its actual causes and effects.’ So on any view of intentional content that sees content as tied to normal causes (i.e., to information carried) and/or to normal effects (i.e., to teleological or inferential role), this type of experience might occur without representing red. In the same sort of way, I will be able to think, ‘This type of experience [pain] might have occurred in me, or might occur in others, in the absence of any of the usual causes and effects of pains. There could be someone in whom these experiences occur but who isn't bothered by them, and where those experiences are never caused by tissue damage or other forms of bodily insult. And conversely, there could be someone who behaves and acts just as I do when in pain, and in response to the same physical causes, but who is never subject to this type of experience.’ If we possess purely recognitional concepts of experience, grounded in higher-order percepts of those experiences, then the thinkability of such thoughts is both readily explicable, and apparently unthreatening to a naturalistic approach to the mind.
Inner-sense theory does face a number of difficulties, however. If inner-sense theory were true, how is it that there is no phenomenology distinctive of inner sense, in the way that there is a phenomenology associated with each outer sense? Since each of the outer senses gives rise to a distinctive set of phenomenological properties, you might expect that if there were such a thing as inner sense, then there would also be a phenomenology distinctive of its operation. But there doesn't appear to be any.
This point turns on the so-called ‘transparency’ of our perceptual experience (Harman, 1990). Concentrate as hard as you like on your ‘outer’ (first-order) experiences - you will not find any further phenomenological properties arising out of the attention you pay to them, beyond those already belonging to the contents of the experiences themselves. Paying close attention to your experience of the colour of the red rose, for example, just produces attention to the redness - a property of the rose. Put like this, however, the objection just seems to beg the question in favour of first-order theories of phenomenal consciousness. It assumes that first-order - ‘outer’ - perceptions already have a phenomenology independently of their targeting by inner sense. But this is just what an inner-sense theorist will deny. And then in order to explain the absence of any kind of higher-order phenomenology, an inner-sense theorist only needs to maintain that our higher-order experiences are never themselves targeted by an inner-sense-organ that might produce third-order analog representations of them in turn.
Another objection to inner-sense theory is as follows: if there really were an organ of inner sense, then it ought to be possible for it to malfunction, just as our first-order senses sometimes do. And in that case, it ought to be possible for someone to have a first-order percept with the analog content red causing a higher-order percept with the analog content seems-orange. Someone in this situation would be disposed to judge, ‘It is red’, immediately and non-inferentially (i.e., not influenced by beliefs about the object's normal colour or their own physical state). But at the same time they would be disposed to judge, ‘It seems orange’. Not only does this sort of thing never apparently occur, but the idea that it might do so conflicts with a powerful intuition. This is that our awareness of our own experiences is immediate, in such a way that to believe that you are undergoing an experience of a certain sort is to be undergoing an experience of that sort. But if inner-sense theory is correct, then it ought to be possible for someone to believe that they are in a state of seeming-orange when they are actually in a state of seeming-red.
A different sort of objection to inner-sense theory is developed by Carruthers (2000). It starts from the fact that the internal monitors postulated by such theories would need to have considerable computational complexity in order to generate the requisite higher-order experiences. In order to perceive an experience, the organism would need to have mechanisms to generate a set of internal representations with an analog or non-conceptual content representing the content of that experience, in all its richness and fine-grained detail. And notice that any inner scanner would have to be a physical device (just as the visual system itself is) which depends upon the detection of those physical events in the brain that are the outputs of the various sensory systems (just as the visual system is a physical device that depends upon detection of physical properties of surfaces via the reflection of light). For it is hard to see how any inner scanner could detect the presence of an experience as experience. Rather, it would have to detect the physical realizations of experiences in the brain, and construct the requisite higher-order representation of the experiences that those physical events realize, on the basis of that physical-information input. This makes it seem inevitable that the scanning device that supposedly generates higher-order experiences of our first-order visual experience would have to be almost as sophisticated and complex as the visual system itself.
Now the problem that arises here is this. Given this complexity in the operations of our organs of inner sense, there had better be some plausible story to tell about the evolutionary pressures that led to their construction. For natural selection is the only theory that can explain the existence of organized functional complexity in nature. But there would seem to be no such stories on the market. The most plausible suggestion is that inner sense might have evolved to subserve our capacity to think about the mental states of conspecifics, thus enabling us to predict their actions and manipulate their responses. (This is the so-called ‘Machiavellian hypothesis’ to explain the evolution of intelligence in the great-ape lineage.) But this suggestion presupposes that the organism must already have some capacity for higher-order thought, since it is such thoughts that an inner sense is supposed to subserve. And yet some higher-order thought theories can claim all of the advantages of inner-sense theory as an explanation of phenomenal consciousness, but without the need to postulate any ‘inner scanners’. At any rate, the ‘computational complexity objection’ to inner-sense theories remains as a challenge to be answered.
Non-dispositionalist higher-order thought (HOT) theory is a proposal about the nature of state-consciousness in general, of which phenomenal consciousness is but one species. Its main proponent has been Rosenthal. The proposal is this: a conscious mental state M, of mine, is a state that is actually causing an activated belief (generally a non-conscious one) that I have M, and causing it non-inferentially. (The qualification concerning non-inferential causation is included to avoid one having to say that my non-conscious motives become conscious when I learn of them under psychoanalysis, or that my jealousy is conscious when I learn of it by interpreting my own behaviour.) An account of phenomenal consciousness can then be generated by stipulating that the mental state M should have an analog content in order to count as an experience, and that when M is an experience (or a mental image, bodily sensation, or emotion), it will be phenomenally conscious when (and only when) suitably targeted.
A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is the object of a higher-order thought, and which causes that thought non-inferentially.
This account avoids some of the difficulties inherent in inner-sense theory, while retaining the latter's ability to explain the distinction between conscious and non-conscious perceptions. (Conscious perceptions will be analog states that are targeted by a higher-order thought, whereas perceptions such as those involved in blindsight will be non-conscious by virtue of not being so targeted.) In particular, it is easy to see a function for higher-order thoughts, in general, and to tell a story about their likely evolution. A capacity to entertain higher-order thoughts about experiences would enable a creature to negotiate the is and seems distinction, perhaps learning not to trust its own experiences in certain circumstances, and to induce appearances in others, by deceit. And a capacity to entertain higher-order thoughts about thoughts (beliefs and desires) would enable a creature to reflect on, and to alter, its own beliefs and patterns of reasoning, as well as to predict and manipulate the thoughts and behaviours of others. Indeed, it can plausibly be claimed that it is our capacity to target higher-order thoughts on our own mental states that underlies our status as rational agents.

One well-known objection to this sort of higher-order thought theory is due to Dretske (1993). We are asked to imagine a case in which we carefully examine two line-drawings, say (or in Dretske's example, two patterns of differently-sized spots). These drawings are similar in almost all respects, but differ in just one aspect - in Dretske's example, one of the pictures contains a black spot that the other lacks. It is surely plausible that, in the course of examining these two pictures, one will have enjoyed a conscious visual experience of the respect in which they differ - e.g., of the offending spot. But, as is familiar, one can be in this position while not knowing that the two pictures are different, or in what way they are different.
In which case, since one can have a conscious experience (e.g., of the spot) without being aware that one is having it, consciousness cannot require higher-order awareness.
Replies to this objection have been made by Seager (1994) and by Byrne (1997). They point out that it is one thing to have a conscious experience of the aspect that differentiates the two pictures, and quite another to experience consciously that the two pictures are differentiated by that aspect. That is, seeing the extra spot in one picture needn't mean seeing that this is the difference between the two pictures. So while scanning the two pictures one will enjoy conscious experience of the extra spot. A higher-order thought theorist will say that this means undergoing a percept with the content spot here which forms the target of a higher-order belief that one is undergoing a perception with that content. But this can perfectly well be true without undergoing a percept with the content spot here in this picture but absent here in that one. And it can also be true without forming any higher-order belief to the effect that one is undergoing a perception with the content spot here when looking at a given picture but not when looking at the other. In which case the purported counter-example isn't really a counter-example.
A different sort of problem with the non-dispositionalist version of higher-order thought theory relates to the huge number of beliefs that would have to be caused by any given phenomenally conscious experience. (This is the analogue of the ‘computational complexity’ objection to inner-sense theory.) Consider just how rich and detailed a conscious experience can be. It would seem that there can be an immense amount of which we can be consciously aware at any one time. Imagine looking down on a city from a window high up in a tower-block, for example. In such a case you can have phenomenally conscious percepts of a complex distribution of trees, roads, and buildings, colours on the ground and in the sky above, moving cars and pedestrians, . . . and so on. And you can - it seems - be conscious of all of this simultaneously. According to non-dispositionalist higher-order thought theory, then, you would need to have either a distinct activated higher-order belief for each distinct aspect of your experience, or just a few such beliefs with immensely complex contents. Either way, the objection is the same: it seems implausible that all of this higher-order activity should be taking place, even if non-consciously, every time someone is the subject of a complex conscious experience. For what would be the point? And think of the amount of cognitive space that these beliefs would take up.
This objection to non-dispositionalist forms of higher-order thought theory is considered at some length in Carruthers (2000), where a variety of possible replies are discussed and evaluated. Perhaps the most plausible and challenging such reply would be to deny the main premise lying behind the objection, concerning the rich and integrated nature of phenomenally conscious experience. The theory could align itself with Dennett's (1991) conception of consciousness as highly fragmented, with multiple streams of perceptual content being processed in parallel in different regions of the brain, and with no stage at which all of these contents are routinely integrated into a phenomenally conscious perceptual manifold. Rather, contents become conscious on a piecemeal basis, as a result of internal or external probing that gives rise to a higher-order belief about the content in question. (Dennett himself sees this process as essentially linguistic, with both probes and higher-order thoughts being formulated in natural language. This variant of the view, although important in its own right, is not relevant to our present concerns.) This serves to convey to us the mere illusion of riches, because wherever we direct our attention, there we find a conscious perceptual content. It is doubtful whether this sort of ‘fragmented’ account can really explain the phenomenology of our experience, however. For it still faces the objection that the objects of attention can be immensely rich and varied at any given moment, hence requiring there to be an equally rich and varied repertoire of higher-order thoughts tokened at the same time. Think of immersing yourself in the colours and textures of a Van Gogh painting, for example, or the scene as you look out at your garden - it would seem that one can be phenomenally conscious of a highly complex set of properties, which one could not even begin to describe or conceptualize in any detail.
However, since the issues here are large and controversial, it cannot yet be concluded that non-dispositionalist forms of higher-order thought theory have been decisively refuted.
According to all forms of dispositionalist higher-order thought theory, the conscious status of an experience consists in its availability to higher-order thought (Dennett, 1978). As with the non-dispositionalist version of the theory, in its simplest form we have here a quite general proposal concerning the conscious status of any type of occurrent mental state, which becomes an account of phenomenal consciousness when the states in question are experiences (or images, emotions, etc.) with analog content. The proposal is this: a conscious mental event M, of mine, is one that is disposed to cause an activated belief (generally a non-conscious one) that I have M, and to cause it non-inferentially.
A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is held in a special-purpose short-term memory store in such a way as to be available to cause (non-inferentially) higher-order thoughts about any of the contents of that store.
In contrast with the non-dispositionalist form of theory, the higher-order thoughts that render a percept conscious are not necessarily actual, but potential, on this account. So the objection that an unbelievable amount of cognitive space would have to be taken up with every conscious experience now disappears. (There need not actually be any higher-order thought occurring, in order for a given perceptual state to count as phenomenally conscious, on this view.) So we can retain our belief in the rich and integrated nature of phenomenally conscious experience - we just have to suppose that all of the contents in question are simultaneously available to higher-order thought. Nor will there be any problem in explaining why our faculty of higher-order thought should have evolved, nor why it should have access to perceptual contents in the first place - this can be the standard sort of story in terms of Machiavellian intelligence.
It might be wondered how their mere availability to higher-order thoughts could confer on our perceptual states the positive properties distinctive of phenomenal consciousness - that is, of states having a subjective dimension, or a distinctive subjective feel. The answer may lie in the theory of content. Suppose that one agrees with Millikan (1984) that the representational content of a state depends, in part, upon the powers of the systems that consume that state. That is, suppose one thinks that what a state represents will depend, in part, on the kinds of inferences that the cognitive system is prepared to make in the presence of that state, or on the kinds of behavioural control that it can exert. In which case the presence of first-order perceptual representations to a consumer-system that can deploy a ‘theory of mind’, and which is capable of recognitional applications of theoretically-embedded concepts of experience, may be sufficient to render those representations at the same time higher-order ones. This would be what confers on our phenomenally conscious experiences the dimension of subjectivity. Each experience would at the same time (while also representing some state of the world, or of our own bodies) be a representation that we are undergoing just such an experience, by virtue of the powers of the ‘theory of mind’ consumer-system. Each percept of green, for example, would at one and the same time be an analog representation of green and an analog representation of seems green or experience of green. In fact, the attachment of a ‘theory of mind’ faculty to our perceptual systems may completely transform the contents of the latter's outputs.
This account might seem to achieve all of the benefits of inner-sense theory, but without the associated costs. (Some potential drawbacks will be noted in a moment.) In particular, we can endorse the claim that phenomenal consciousness consists in a set of higher-order perceptions. This enables us to explain, not only the difference between conscious and non-conscious perception, but also how analog states come to acquire a subjective dimension or ‘feel’. And we can also explain how it can be possible for us to acquire some purely recognitional concepts of experience (thus explaining the standard philosophical thought-experiments). But we don't have to appeal to the existence of any ‘inner scanners’ or organs of inner sense (together with their associated problems) in order to do this. Moreover, it should also be obvious why there can be no question of our higher-order contents getting out of line with their first-order counterparts, in such a way that one might be disposed to make recognitional judgments of red and seems orange at the same time. This is because the content of the higher-order experience is parasitic on the content of the first-order one, being formed from it by virtue of the latter's availability to a ‘theory of mind’ system.
On the downside, the account is not neutral on questions of semantic theory. On the contrary, it requires us to reject any form of pure input-semantics, in favour of some sort of consumer-semantics. We cannot then accept that intentional content reduces to informational content, nor that it can be explicated purely in terms of causal covariance relations to the environment. So anyone who finds such views attractive will think that the account is a hard one to swallow.
What will no doubt be seen by most people as the biggest difficulty with dispositionalist higher-order thought theory, however, is that it may have to deny phenomenal consciousness to most species of non-human animals. This objection will be discussed, among others, in the section following, since it can arguably also be raised against any form of higher-order theory.
A whole host of objections has been raised against higher-order theories of phenomenal consciousness. Unfortunately, many of these objections, although perhaps intended as objections to higher-order theories as such, are often framed in terms of one or another particular version of such a theory. One general moral to be taken away from the present discussion should then be this: the different versions of a higher-order theory of phenomenal consciousness need to be kept distinct from one another, and critics should take care to state which version of the approach is under attack, or to frame objections that turn merely on the higher-order character of all of these approaches.
One generic objection is that higher-order theory, when combined with plausible empirical claims about the representational powers of non-human animals, will conflict with our commonsense intuition that such animals enjoy phenomenally conscious experience. This objection can be pressed most forcefully against higher-order thought theories, of either variety; however, it is also faced by inner-sense theory (depending on what account can be offered of the evolutionary function of organs of inner sense). Since there is considerable dispute as to whether even chimpanzees have the kind of sophisticated ‘theory of mind’ which would enable them to entertain thoughts about experiential states as such (Byrne and Whiten, 1988, 1998; Povinelli, 2000), it seems most implausible that many other species of mammal (let alone reptiles, birds and fish) would qualify as phenomenally conscious, on these accounts. Yet the intuition that such creatures enjoy phenomenally conscious experiences is a powerful and deep-seated one, for many people.
The grounds for this commonsense intuition can be challenged, however. (How, after all, are we supposed to know whether it is like something to be a bat?) And that intuition can perhaps be explained away as a mere by-product of imaginative identification with the animal. (Since our images of their experiences are phenomenally conscious, we may naturally assume that the experiences imaged are similarly conscious.) But there is no doubt that one locus of resistance to higher-order theories will lie here, for many people.
Another generic objection is that higher-order approaches cannot really explain the distinctive properties of phenomenal consciousness. Whereas the argument from animals is that higher-order representations aren't necessary for phenomenal consciousness, the argument here is that such representations aren't sufficient. It is claimed, for example, that we can easily conceive of creatures who enjoy the postulated kinds of higher-order representation, related in the right sort of way to their first-order perceptual states, but where those creatures are wholly lacking in phenomenal consciousness.
In response to this objection, higher-order theorists will join forces with first-order theorists and others in claiming that these objectors pitch the standards for explaining phenomenal consciousness too high. They will insist that a reductive explanation of something - and of phenomenal consciousness in particular - doesn’t have to be such that we cannot conceive of the explanandum (that which is being explained) in the absence of the explanans (that which does the explaining). Rather, we just need to have good reason to think that the explained properties are constituted by the explaining ones, in such a way that nothing else needed to be added to the world once the explaining properties were present, in order for the world to contain the target phenomenon. But this is disputed territory. And it is on this ground that the battle for phenomenal consciousness may ultimately be won or lost.
While orthodox medical research adheres to a linear, deterministic physical model, alternative therapists typically postulate that indeterminate nonphysical and nonlinear relationships are significant to outcome and patient satisfaction. The concept of nonlocal reality as nuocontinuum helps resolve the differences in therapeutic approach, and lets us frame a worldview that recognizes the great value of both reductive science and holistic integration. It helps distinguish the levels of description appropriate to the discussion of each, and helps in examining the relationships among consciousness, nonlocal reality, and healing.
In a recent informal discussion of the problems of evaluating alternative therapies, Dossey highlighted the stark philosophic division between orthodox and alternative health care models. While orthodox medical research adheres to a linear, deterministic physical model, alternative therapists typically postulate that indeterminate nonphysical and nonlinear relationships are significant to outcome and patient satisfaction. As Dossey summarizes that position, "Everything that counts cannot be counted."
The problems, of course, go beyond the research issues. The respective models bring different attitudes and approaches to the therapeutic encounter. Further, their different philosophic languages limit discussions among practitioners. Rapprochement becomes all the more unlikely when each camp considers the other "wrong." It would be helpful to visualize the conflict as deriving from different frames of reference. Our collective task then becomes the finding of a common frame of reference, a "cosmos in common" (to echo Heraclitus), sufficiently broad and deep to encompass both linear and nonlinear, local and nonlocal therapeutic points of view.
If we are to remain true to science, we must integrate the data that science provides us, and be willing to follow where the process leads. It is increasingly apparent that physics requires us to acknowledge metaconsiderations, that is, considerations that lie above and beyond physics. Those of us biomedical practitioners who base our work on physics cannot disparage as "merely metaphysics" a metaphysics to which physics itself points.
As a point of departure, I would like to "frame" in general outlines a worldview that recognizes the great value of both reductive science and holistic integration, and which helps distinguish the levels of description appropriate to the discussion of each. In doing so, I will suggest a new and unfreighted ecumenical term for discussing the relationships among consciousness, nonlocal reality, and healing.
The cosmos is the general descriptive term for all-that-is, which we have come to understand as an organic system of interrelated nested subsystems. Yet its most ancient representation in art is a circle. In our ordinary positivist view of things conditioned by science, the term denotes only the material nature of the universe, governed by the laws of physics. In the ordinary local cause-effect world, time-distance relationships apply, and the speed limit is that of light. Actions are mediated through a field, and forces are dissipated over distance.
However, Bell's Theorem in quantum physics establishes that "underneath" ordinary space-time phenomena there lies a deep nonlocal reality in which none of these limitations applies. To diagram cosmos one must find an appropriate way to divide the one circle. We might add an inner concentric circle, but the cosmos, as the term is currently used, would identify only the outer material "shell" of our experience of physical things. We have no agreed technical term for that which is "more" than matter, or beyond or outside it, or inside it. Psyche has scientific validity as a psychological term. It denotes an inner personal dimension representing that aspect of experience that is normally unconscious to us, but which nevertheless influences individual human behaviour. However, in ordinary usage, the term psyche (soul, spirit) has no meaning apart from the individual human personality. To speak of the soul or spirit of matter (one hardly dare do so publicly) does not compute. Yet, now physics says there is a nonlocal more to the matter-world of the cosmos, and that domain is somehow related to the existence of consciousness.
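To anchor this claim, Bell's result can be stated quantitatively. A standard modern formulation (the CHSH inequality, which is my addition and is not quoted in the text itself) compares correlations E between measurement settings a, a' and b, b' on two widely separated particles:

```latex
% CHSH combination of correlations for detector settings a, a' and b, b'
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local hidden-variable theory must obey the bound
\lvert S \rvert \;\le\; 2
% whereas quantum mechanics permits violations up to Tsirelson's bound
\lvert S \rvert \;\le\; 2\sqrt{2}
```

Experiments have in fact measured values above 2, and it is this experimental violation of the local bound that licenses talk of a nonlocal reality "underneath" ordinary space-time phenomena.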
But there needs to be still another inner concentric circle, or at least a centre-point. Cosmologists are beginning to speak more openly about a purposeful cosmos. For example, Hawking has asked, "Why does the universe go to all the bother of existing?" If science is to ask "Why" as Hawking does, it must seek the "meaning" of matter. But meaning ordinarily has no significance in science. To speak of meaning is to speak of significance or order beyond superficial appearances. To speak of meaning in relation to the cosmos is to speak of metaphysics, the realm of religion and philosophy.
Yet, such meaning is implicit in the anthropic principle of physics, and in the strange attractors by which order emerges from chaotic chemical and nonlinear mathematical systems. Though such meaning is an idea new to modern science, religion and philosophy have variously described it as the Logos, the Way, and the Word. It resides in the "lure" of an orienting purpose of which Whitehead wrote, and in the function of the radial energy of which Teilhard spoke.
Now, on scientific grounds alone, we must devise a "cosmorama" of at least three compartments, if it is to encompass the phenomena of the universe. Resolving and explaining these relationships may be quite complex; or it may be surprisingly simple. In any case, there are a number of questions to be answered, and a number of problems in physics and psychology that invite us to frame a unification theory.
One of the principal problems in quantum physics is the question of the observer effect. What is the role of consciousness in resolving the uncertainties of actions at the quantum level? Before an observation, the question of whether a quantum event has occurred can be resolved only by calculating a probability. The unobserved reality of the event is that it is a mix of the probabilities that it has happened and that it has not. That "wave function" of probabilities is said to "collapse" only at the point of observation, that is, only in the interaction of unconsciousness with consciousness.
Schrödinger illustrated the problem by describing a thought experiment involving a cat in a sealed box: If the quantum event happened, the cat would be poisoned; if not, when the box was opened, the cat would be found alive. Until then, we could know the result only as a calculation of probabilities. Under the conditions Schrödinger described, we may think of the cat's condition only mathematically: the cat is both dead and alive, with equal probability. Only by the interaction of event with observer is the wave function "collapsed."
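The description "both dead and alive, with equal probability" corresponds, in the standard quantum formalism, to an equal superposition of the two outcomes (the notation here is mine, not the author's):

```latex
\lvert \psi_{\text{cat}} \rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl( \lvert \text{alive} \rangle + \lvert \text{dead} \rangle \bigr),
\qquad
P(\text{alive}) = P(\text{dead}) = \Bigl\lvert \tfrac{1}{\sqrt{2}} \Bigr\rvert^{2} = \tfrac{1}{2}
```

Observation "collapses" this state to one term or the other, each outcome occurring with probability one half.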
If a tree falls in the forest when there is no one present to hear it, has there been a sound? That question can be resolved by adjusting the definition of sound. In the question of the quantum "event in the box" we are dealing with something much more fundamental. Can creation occur without an observer? Without consciousness? Or without at least the prospect of consciousness emerging from the act of creation? That may be the most basic question that begs resolving.
Another of our unification problems is the virtual particle phenomenon. Some particles appear unpredictably, exist for extremely short periods of time, then disappear. Why does a particle appear in the force field suddenly, without apparent cause? What distinguishes stable particles from the temporary ones? Something in the force field? Something related to the act of observation?
Another major concern of physics is the unification of the elemental physical forces. Study of the "several" forces has progressively merged them. Electricity and magnetism came to be understood as one force, not two. More recently, effects associated with the weak nuclear force were reconciled with electromagnetism, so that now we recognize one electroweak force. Further, there have been mathematical demonstrations that unify the electroweak and the strong nuclear force.
If it could be demonstrated that the "electronuclear" force and the force of gravity are one super force (as has been widely expected), energy effects at the largest and the smallest scales of the universe would be explained. That unification process has led to a theory of a multidimensional universe, in which there are at least seven "extra" dimensions that account for the forces and the conservation laws (symmetries) of physics. They are not extra dimensions of space-time, for which one could devise bizarre travel itineraries, but abstract mathematical dimensions that in some sense constitute the nonlocal (non-space time) reality within which cosmos resides.
However, the search for a unified theory has led to an apparent impasse, for theories of unification seem also to require a continuing proliferation of particles. A new messenger particle (or class of particles), called the Higgs boson, seems to be needed to explain how particles acquire mass, and to avoid having infinity terms (the result of a division by zero) crop up in the formulas that unify the forces. Leon Lederman, experimental physicist and Nobelist, calls it "The God Particle." He writes, "The Higgs field, the standard model, and our picture of how God made the universe depend on finding the Higgs boson."
Still, major questions remain. To some, particle physics has seemed to reach its limit, theoretically as well as experimentally. Oxford physicist Roger Penrose has written: If there is to be a final theory, it could only be a scheme of a very different nature. Rather than being a physical theory in the ordinary sense, it would have to be a principle, a mathematical principle whose implementation might itself involve nonmechanical subtlety.
Perhaps the time has come for us to accept that cosmos has "infinity terms" after all.
Psychology is conventionally defined as the study of behaviour, but for our purposes, it must be returned to the meaning implied in the roots of the word: the study of soul and spirit. Of course, the most obvious phenomenon of psychology is the emergence of consciousness. In the light of the anthropic principle of physics, we now must ask, as a distinctively psychological question, what purpose for the cosmos does consciousness serve?
Another question: Jung has presented the evidence for an archetypal collective unconscious that, on the basis of current understandings, must certainly be inherited as the base-content of human nature. Archetypal genetics has yet to be defined. Symbol processing certainly does have its "local" physical aspect, in the function of the brain and the whole-body physiology that supports it. Nonetheless, that there is a nonlocal reality undergirding psyche is readily evident.
The reality of the dream experience is nonlocal, unconfined by the rules of time, space, and normal cause and effect. Further, it is nonlocal in that the reality extends beyond the individual, consistently following patterns evident throughout the recorded history of dream and myth. The psyche functions as though the brain, or at least its mechanisms of consciousness, is "observer" for the dream "event in the box" of an unconscious nonlocal collective reality. The archetypal unconscious suggests that there is a psychological substrate from which consciousness and its content have emerged.
In the emergence of consciousness primally, and in the extension of consciousness in modern people through the dreaming process, the collective unconscious (self) seems to serve a nonlocal integrating function, yielding images that the conscious (ego) must differentiate from its "local" observations of the external space-time world. Thus is consciousness extended.
In that process, however, the ego must self-reflectively also "keep in mind" that our perception of the external physical world is not the reality of the physical world, but an interpretation of it; nor is the external phenomenal world the only reality. To keep our interpretations of the physical world "honest," we must subject observation to tests of consistency and reason, but the calculus of consciousness is the calculus of whole process, both differential and integral. Consciousness is not extended but diminished when it denies the reality of the unconscious.
Jung has also pointed to certain meaningful associations between events in psyche and events in the physical world that are not related causally. He called such an association a synchronicity, which he defined as "an acausal connecting principle." These are simultaneous or closely associated occurrences that are not connected physically, in any ordinary cause-effect way. However, they are connected meaningfully, that is, psychically. They may have a very powerful impact on a person's psychic state and on the subsequent unfolding of personality. Jung studied them with Wolfgang Pauli, a quantum physicist in whose life such phenomena were unusually frequent.
A synchronicity seems to suggest that a nonlocal psychological reality either communicates with or is identical to the nonlocal reality known in physics. Since it is inconceivable to have two nonlocal realities coexisting separately from one another, we can confidently assert that there is, indeed, only one nonlocal reality.
Another set of phenomena inviting consideration is that which includes group hysteria and mob action. A classic example is that of a high school band on a bus trip, on which all members get "food poisoning" simultaneously before a big game. After exhaustive epidemiological work, no evidence of infection or toxins is found, and the "cause" is attributed to stress and the power of suggestion. The mechanisms are entirely unconscious to the band members; it is as though their psyches have "communicated" in a way that makes them act together. Similarly, in mob action, though the members may be conscious of the anger that moves them, generally the event seems to be loaded with an unconscious dynamic within the group that prepares the way for the event itself.
Physicist Paul Davies has written that one of the basic problems is constructing an adequate definition of dimensionality. The ordinary dictionary definition describes a dimension in terms of magnitude or direction (height, depth, width), and we ordinarily think of the dimensions as perpendicular to each other. But that works only for the familiar spatial dimensions and the actions of ordinary objects. Imagine compressing all three-dimensional space toward a single point; as it comes close to a point, the concept of being perpendicular loses all meaning. Another problem is that it does not really make sense to think of time (which is a dimension, too) as perpendicular to anything.
A dimension is one of the domains of action permitted to or on an object. By domain I mean something like a field of influence or action. Verticality is not a thing that acts on an object, but is rather that which permits and influences a movement in space, and which influences our description of the movement. For example, verticality is one particular aspect of abstract reality that determines the behaviour of an object. But the abstract is real! Take verticality away from three-dimensional space, and an object is permitted to move only in a way that we can analyse as a mix of horizontal and forward-backward motions. Take the horizontal away, and the object may move only along a straight line (one-dimensional space). "String" theories, which approach a "Grand Unification" of all of the physical forces, posit dimensions beyond the four of space-time. There is no theoretical limit to the number of dimensions, for external to space-time there is no concept of "container" or limit.
Since all of the non-space-time dimensions, by definition, are not extended in space or time, we must conceive of them as represented by points. Since they act together on space-time, they must "intersect" or somehow communicate with the primal space-time point. For that reason (and because in the absence of space-time no point can be offset from another), we must imagine the dimensions as many points superimposed into one. Let's call it the SuperPoint. We may in fact imagine as many superimposed points (dimensions) as past and future experiments might require to explain the phenomena of creation.
The initial conditions of our space-time universe are defined in that one SuperPoint; the Big Bang represents the explosive expansion of four of those dimensions, space-time. The creation-energy (super force) responsible for that expansion is concentrated in and at the multidimensional SuperPoint. Yet we must also think of other changes at the SuperPoint, for as energy levels dissipate immediately after the Big Bang, the super force quickly "evolves" into the four physical forces conventionally known.
We have said that only the space-time dimensions are expanding, because the force dimensions ("contained" in the SuperPoint) are not spatial. By definition, we may not imagine non-space-time points as extended in space. However, all points in expanding space-time must still "communicate" with the force dimensions (and the symmetry dimensions, but we are neglecting them for the moment). All points in space-time must intersect the force dimensions.
It is as if the force dimensions too have been expanded to the size of space-time, for they are acting on each particle of energy/matter in the universe. One might imagine that one point has been stretched as a featureless elastic sheet, a continuum in which the point is everywhere the same.
However, quantum theory deals with these forces as discrete waves/particles. For example, the force of gravity is communicated by gravitons; the strong nuclear force by gluons; the electromagnetic force by photons. If we conceive the stretched points of the dimensions as "sheets," the sheets must have waves in them. These "stretched sheets" constitute the field in which energy interacts with particles to sustain (and indeed, to continue the creation of) the universe. As I have expressed it in a poem, it is the field "where the forces play pinball / with gravitons and gluons / and modulate / the all."
Let us imagine again that space-time (four dimensions) is compressed toward a point. It is futile to ask what is outside that small pellet of space-time, for the concept of "outsideness" has no meaning except within space-time. As the pellet becomes smaller still, it shrinks toward nothingness, for a point is an abstract concept of zero dimensions, not extended in space or time, and thus it cannot "contain" anything. At that point, nothing exists except the thinker who is trying to imagine nothingness.
If we model thought as only an epiphenomenon of matter, reached at a certain degree of complexity, then it has no fundamental reality of its own. In that case, our thought experiment to shrink the cosmos reaches a point at which thought is extinguished, and the experiment must stop, if it is to follow the "rules" that it is modelling. However, by accepting that thought might have a reality of its own, and by considering the problem from a whole-system perspective, we are able to continue the thought experiment to the point at which only the thought remains. The epiphenomenon idea is not an adequate model of reality, since we can indeed continue the experiment under the conditions outlined.
This "negative proof" is indirect, serving only to eliminate the epiphenomenon model. It does not prove that there is an independent and fundamental reality beyond space-time and matter; the experiments supporting Bell's Theorem do that. This line of thinking, however, does lead us to suggest that thought is a primary aspect of reality. It seems that the cosmos itself is saying with Descartes, "I think, therefore I am."
Because of this inescapable "relativistic" connection between cosmos and thought, I cannot imagine creation ex nihilo (from nothing), for the concept of nothing always collides with the existence of the one who is the thinker. Nothing has any meaning apart from something. The dimension of thinking is required to imagine a zero-dimensional space-time.
In the epiphenomenon model, nothing is defined as the absence of matter. If that is so, thought is nothing; however, if it were nothing, I could not be thinking that thought, so thought must be of something. There can be no nothingness, for even if all that exists is reduced to nothingness, a dimension of reality remains. Reality requires at least one dimension in addition to space-time, and that reality seems inseparable from the dimension of thought.
What is missing from our existing scheme of dimensions is a description of that dimension that we could not eliminate by playing the videotape of creation in reverse: that reality at the SuperPoint from which the dimension of thought cannot be separated. That leads to a rather extravagant and intuitive proposal, following Anaxagoras: Thought is the missing particle, the missing dimension.
Quantum physics already acknowledges the importance of consciousness as "observer." Consciousness is the substrate of thought. Thought is consciousness dimensionally extended, whether in time or some other dimension. Thought is process. Any unification of the laws of physics must necessarily take into account the thought/consciousness dimension, and thus must unify physics with psyche as well.
In his book The Self-Aware Universe, Amit Goswami uses the term consciousness to mean transcendental consciousness, which forms (or is) the nonlocal reality. Other physicists seem to define the term more cautiously, and one often wonders whether a given text about observer effect is referring to ordinary individual awareness, or to some more general property of psyche.
It is useful to preserve the important distinction between consciousness and unconsciousness. Psychologically, ordinary human consciousness is the realm of ego and the cognitive functions called mind. Neurologically it refers to a patient's observed state of awareness. The clinical unconscious is the realm of psyche, with both personal and collective aspects. Perhaps a better language will come along in time. Until then, let me suggest an interim language for discussing, and perhaps a framework for someday testing, the relationship between matter and psyche. My proposal is that there is a unit of psyche, which I designate the nuon, from the Greek word nous, for mind. Nuons represent the dimensions of thought that exist in (at, as) the SuperPoint defining the initial conditions of the Big Bang. As the domain of the force dimensions, those Nuons must be imagined to expand as a field or continuum (the nuocontinuum) as the space-time continuum expands, a "stretched sheet" with "waves" which are also Nuons. The Nuons of the SuperPoint are extended in space-time in a way conceptually analogous to the action of the forces.
Yet Nuons must also be construed as the domain of the symmetries, such as the principle of conservation of energy, which are nonlocal. That is, they are everywhere in effect, without being constrained by the speed-limit of light. The nuocontinuum thus represents a multidimensional bridge between forces, symmetries, and space-time. Nuons collectively contain all potentialities, but the collective (nuocontinuum) is the unit, itself the symmetry that unifies the forces and symmetries. The Nuon is the "infinity particle" which solves the formulas.
Does the nuocontinuum represent a fractal (fractional dimensions) such as those that give the mathematical order to the "chaos" images? Does it provide the prime tone of which the symmetries and the forces are harmonics? Whether construed mathematically or poetically, the nuocontinuum contains the information necessary to create a universe, but a universe that is organically creating itself.
Human awareness, which occurs at a level of extraordinary complexity in the organization of space-time particles, would involve, not a "creation" of consciousness as an epiphenomenon, but a sensing of a quality that is already there, as the reality dimension of the cosmos. The observer effect at the quantum level (and the health of Schrödinger's cat) is then to be understood as an interaction, not with a particle of concrete matter, but with the reality substrate from which matter arises.
If we construe the whole nuocontinuum (rather than the experimenter) to be the "observer" of the quantum event in the box, we avoid much of the confusion and exasperation that Schrödinger's thought experiment evokes. Hawking wrote, "When I hear of Schrödinger's cat, I reach for my gun." Even Einstein was repelled by quantum uncertainty. De Broglie especially held out for an interpretation of quantum physics which supported concreteness. We rebel against the idea of a universe based on uncertainty, and we seek to assure ourselves that what we experience is a concrete reality.
However, if the nuocontinuum is the observer that resolves the quantum uncertainty, our own individual sense of uncertainty is also resolved. The collapse of the particle wave function (the coming into being of the particle at a particular point in space-time) would be a function of the nuocontinuum acting as a whole, rather than as a local observer. The nuocontinuum is the observer who actualized creation, the cosmic "event in the box," prior to the development of human consciousness. It is that cosmic observer who unifies the quantum effects of the electronuclear forces and the cosmic effects of gravity.
The Nuocontinuum, then, designates an unlimited, infinite connecting principle that binds all that is. Because it accounts for the material characteristics of the cosmos, it is "Creator." Because it presents itself through the agency of human consciousness, it may be sensed as Person and named Holy Spirit or Great Mystery. It is the source of that compelling "passion" of which Teilhard spoke, "to become one with the world that envelops us." Thus, though well beyond the scope of this article, the concept has implications for depth psychology and for theology. It has potential to help humans globally recapture a sense of meaning to human life, and to understand the experiences of those whose terminologies differ. Unless we do so, or at least critical masses of us do, we remain at great risk for destroying ourselves.
But its implications for the healing arts are also profound, for it makes us look at familiar concepts in quite a different light. In its affirmation of meaningful order in the cosmos as a whole, the nuocontinuum concept gives further definition and import to homeostasis as a healing, balancing principle that has more than physiological significance. When we invoke the term "placebo effect" we (usually unwittingly) are invoking a principle of the connectedness between an intervention and an effect, which now can be named and conceptualized. "Spontaneous remissions" of disease would be seen as something less than miracles but clearly more than merely chemical. After all, if physics can reach a limit to its powers of description, so too must psychoneuroimmunology.
As practitioners become aware of the connectedness principle, we will become more aware that our own attitudes and approaches are significant to treatment outcomes and patient satisfaction. We will then realize that even though an experiment may be "double-blind" to some experimenters and to some persons being tested, there may be other influences outside the cause-effect "loop" and connections of which other persons may be conscious. Further, we will better understand that there are different levels of connectivity at work in every action, which require different levels of description to explain. And we might become more sensitive to patients' hopes and expectations that are so often stated in religious terms.
At this point in our harvest of knowledge, this synthesis is quite intuitive and speculative. However, even highly abstract drawings are often helpful in organizing thought. I hope that through some such synthesis as this, couched in whatever language, we will be given that courage to which Dossey alludes, to enter the "doorway through which we may encounter a radically new understanding of the physical world and our place in it." And, one hopes, assure the continued development of our abilities, together, to offer help to all in need of healing.
We collectively glorify our ability to think as the distinguishing characteristic of humanity; we personally and mistakenly glorify our thoughts as the distinguishing pattern of who we are. From the inner voice of thought-as-words to the wordless images within our minds, thoughts create and limit our personal world. Through thinking we abstract and define reality, reason about it, react to it, recall past events and plan for the future. Yet thinking remains both woefully underdeveloped in most of us, as well as grossly overvalued. We can best gain some perspective on thinking in terms of energies.
"We are hanging in language. We are suspended in language in such a way that we cannot say what is up and what is down," Niels Bohr lamented in the 1920s when confronted with the paradoxes, absurdities, and seeming impossibilities encountered in the then newly discovered quantum domain. The problem, he insisted, was not the quantum wonderland itself, but our language, our ways of thinking and talking about it. His colleague, Werner Heisenberg, went a step further and proclaimed that events in the quantum wonderland are not only unspeakable, they are unimaginable.
The same situation confronts us today when we try to talk about consciousness and how it relates to matter-energy. Go fishing for consciousness using the net of language and it always, inevitably, slips through the holes in our net. The limits of language, and of imagination, in talk about consciousness have been recently underlined, yet again, by the exchanges between philosopher Mark Woodhouse and physician Larry Dossey in the pages of Network.
Essentially, both men take opposing positions regarding the appropriateness of "energy talk" as a way of describing or explaining consciousness or mental phenomena. Woodhouse defends the use of energy talk (and proposes what he seems to think is a novel solution); Dossey denies the appropriateness of talking about consciousness in terms of energy. For Woodhouse, consciousness is energy ("each is the other"); for Dossey, consciousness is not energy. As a philosopher passionately committed to exploring the relationship between consciousness and matter, between mind and body, and, specifically, the question "Can we have a science of consciousness?" I think the dialogue between Woodhouse and Dossey opens up a crucially important issue for philosophy of mind and for a science of consciousness. I believe the "energy question" is central to any significant advance we may make into understanding consciousness and how it relates to the physical world.
Woodhouse expresses this relationship in a double-aspect formula: "Energy is the 'outside' of consciousness and consciousness is the 'inside' of energy throughout the universe." But in making this claim, I will argue, he falls into a fundamental philosophical error. However enticing the double-aspect formula may be as a sensibility for engaging "energy talk" about consciousness, it forgoes the most important point, and thereby fails to acknowledge what is philosophically at stake in the relationship between consciousness and energy.
A major challenge facing philosophers and scientists of consciousness (and anybody else who wishes to talk about it) is finding appropriate concepts, words and metaphors. So much of our language is derived from our two most dominant senses: vision and touch. Vision feeds language with spatial metaphors, while touch-or rather, kinesthetics-feeds language with muscular push-pull metaphors. The visuo-muscular senses dominate our perception and interaction with the world, and consequently metaphors derived from these senses dominate our ways of conceiving and talking about the world. It is no accident that spatial and mechanical descriptions and explanations predominate in physics-the paradigm science (and our culture's paradigm for all knowledge). Given our evolutionary heritage, with its selective bias toward vision and kinesthetics, we live predominantly in a spatial-push-pull world-the world of classical mechanics, a "billiard-ball" universe of moving, colliding, and recoiling massive bodies. Ours is a world of matter in motion, of things in space acted on by physical forces.
It should not be surprising, then, that when we come to talk about consciousness, our grooves of thinking channel us toward physics talk, expressed today as "energy talk." Forces are felt, experienced in the body, and we are tempted to think that the experience of force is identical to the energy exchanges between bodies described by physics. But this is to confuse the feeler's feeling (the subject) with what is felt (the object). More on this later.
As previously mentioned, the Woodhouse-Dossey debate highlights yet again the limits of language when we try to talk about consciousness. This problem is at least as old as Descartes' mind-body dualism (though, as we will see, it is not confined to Cartesian dualism; it is there, too, in forms of idealism known as the "Perennial Philosophy"). When Descartes made his famous distinction between mind and matter, he found himself "suspended in the language" of physics. He could find no better way to define mind than negatively, in the terminology of physics. He defined matter as that which occupies space: "res extensa," extended things. He defined the mental world as "res cogitans," thinking things, and thinking things differ from physical things in that they do not occupy space. The problem was: how could material, physical things interact with nonphysical things? What conceivably could be the nature of their point of contact, material or mental? Centuries later, Freud, too, resorted to physics-energy talk when he sought to specify the "mechanisms" and dynamics of the psyche, e.g. his concept of the libido. Today, the same tendency to use energy talk, as Dossey points out, is rife in much new age talk about consciousness, soul, and spirit, exemplified in Woodhouse's article and his book Paradigm Wars.
Because of our reliance on the senses of vision and kinesthetics, we have an evolutionary predisposition, it seems, to talk in the language of physics or mechanics-and by that I mean "matter talk" or "energy talk." Yet all such talk seems to miss something essential when we come to speak of phenomena in the domain of the mind-for example, emotions, desires, beliefs, pains, and other felt qualities of consciousness. The inappropriate chunkiness of mechanistic metaphors borrowed from classical physics seems obvious enough. The mind just isn't at all like matter or machines, as Descartes was keenly aware. But then came Einstein's relativity and the quantum revolution. First, Einstein's E = mc2 showed that matter is a form of energy; then, with the advent of quantum theory, the material world began to dissolve into unimaginable, paradoxical bundles of energy or action. Matter itself was now understood to be a ghostly swirl of energy, and began to take on qualities formerly associated with mind. A great physicist, Sir James Jeans, even declared that "the universe begins to look more like a great thought." Quantum events were so tiny, so undetermined, so un-mechanical in the classical sense, that they seemed just the sort of thing that could respond to the influence of the mind.
The quantum-consciousness connection was boosted further by the need (at least in one interpretation of quantum theory) to include the observer (and his/her consciousness) in any complete description of the collapse of the quantum wave function. According to this view, the quantum system must include the consciousness of the observer. Ghostly energy fields from relativity and the quantum-consciousness connection triggered the imaginations of pop-science writers and dabblers in new age pseudo-science: Quantum theory, many believe, has finally opened the way for science to explore and talk about the mind. But the excitement was-and is-premature. It involves a linguistic and conceptual sleight-of-hand: whereas the clunky mechanical language of classical physics was obviously, at best, metaphorical when applied to consciousness, it now seemed more reasonable to use the language of energy literally-particularly if cloaked in the "spooky" garb of quantum physics. But this shift from "metaphorical matter" to "literal energy" was unwarranted, unfounded, and deceptive.
Dissolving matter into energy, however, makes it no less physical. The mark of the physical, as Descartes had pointed out, is that it is extended in space. Despite the insuperable problems with his dualism, Descartes' key insight remains valid: What distinguishes mind from matter is precisely that it does not occupy space. And this distinction holds just as fast between mind and energy-even so-called subtle energy (hypothetical "subtle energy" bodies are described as having extension, and other spatial attributes such as waves, vibrations, and frequencies). Energy, even in the form of infinitesimal quanta or "subtle vibrations," still occupies space. And any theory of energy as a field clearly makes it spatial. Notions of "quantum consciousness" or "field consciousness"-and Woodhouse's "vibrations," "ripples," or "waves" of consciousness-are therefore no more than vacuous jargon, because they fail to address the very distinction that Descartes formulated nearly four hundred years ago.
But that's not even the most troublesome deficiency of energy talk. Suppose physicists were able to show that quanta of energy do not occupy space; suppose the behaviour of quanta was so bizarre that they could do all sorts of "non-physical" things-such as transcend space and time; suppose, that is, it could be shown that quanta are not "physical" in Descartes' sense . . . even supposing all of this, any proposed identity between energy and consciousness would still be invalid.
Energy talk fails to account for what is fundamentally most characteristic about consciousness, namely its subjectivity. No matter how fine-grained, or "subtle," energy might become, as an objective phenomenon it could never account for the fact of subjectivity-the "what-it-feels-like-from-within" of experience. Ontologically, subjectivity cannot just emerge from a wholly objective reality. Unless energy, at its ontologically most fundamental level, already came with some form of proto-consciousness, proto-experience, or proto-subjectivity, consciousness, experience, or subjectivity would never emerge or evolve in the universe.
Which brings us to Woodhouse's "energy monism" model, and the notion that "consciousness is the 'inside' of energy throughout the universe." Despite Dossey's criticism of this position, I think Woodhouse is here proposing a version of the only ontology that can account for a universe where both matter-energy and consciousness are real. He briefly summarizes why dualism, idealism, and materialism cannot adequately account for a universe consisting of both matter-energy and consciousness. (He adds "epiphenomenalism" to these three as though it were a distinct ontology. It is not. Epiphenomenalism is a form of property dualism, which in turn is a form of materialism.) He then proceeds to outline a "fifth" alternative: "energy monism." And although I believe his fundamental insight is correct, his discussion of this model in terms of double-aspectism falls victim to a common error in metaphysics: he confuses epistemology with ontology.
Woodhouse proposes that the weaknesses of the other ontologies-dualism, idealism, and materialism-can be avoided by adopting a "double-aspect theory that does not attempt to reduce either energy or consciousness to the other." And he goes on to build his alternative ontology on a double-aspect foundation. Now, I happen to be highly sympathetic with double-aspectism: It is a coherent and comprehensive (even "holistic") epistemology. As a way of knowing the world, double-aspectism opens up the possibility of a complementarity of subjective and objective perspectives.
But a perspective on the world yields epistemology-it reveals something about how we know what we know about the world. It does not reveal the nature of the world, which is the aim of ontology. Woodhouse makes an illegitimate leap from epistemology to ontology when he says, "This [energy monism] is a dualism of perspective, not of fundamental stuff," and concludes that "each is the other." Given his epistemological double-aspectism, the best Woodhouse can do is claim to be an ontological agnostic (as, in fact, Dossey does). He can talk about viewing the world from two complementary perspectives, but he cannot talk about the nature of the world in itself. Certainly, he cannot legitimately conclude from talk about aspects or perspectives that the ultimate nature of the world is "energy monism" or that "consciousness is energy." Epistemology talk cannot yield ontology talk-as Kant, and later Bohr, were well aware. Kant said we cannot know the thing-in-itself. The best we can hope for is to know some details about the instrument of knowing. Bohr said that the task of quantum physics is not to describe reality as it is in itself, but to describe what we can say about reality.
The issue of whether energy talk is appropriate for consciousness is to be resolved ontologically, not epistemologically. At issue is whether consciousness is or is not a form of energy-not whether it can be known from different perspectives. If it is a form of energy, then energy talk is legitimate. If not, energy talk is illegitimate. But the nature of consciousness is not "determined by perspective," as Woodhouse states: "insides and outsides are determined by perspectives." If "insides" (or "outsides") were merely a matter of perspective, then any ontology would do, as long as we allowed for epistemological dualism or complementarity (though, of course, the meaning of "inside" and "outside" would differ according to each ontology). What Woodhouse doesn't do (and what he needs to do to make his epistemology grow ontological legs) is establish an ontology compatible with his epistemology of "inside" and "outside." In short, he needs to establish an ontological distinction between consciousness and energy. But this is precisely what Woodhouse aims to avoid with his model of energy monism. Dossey is right, I think, to describe energy talk about consciousness as a legacy of Newtonian physics (i.e., of visuo-kinesthetic mechanics). This applies equally to "classical energy talk," "quantum-energy talk," "subtle-energy talk," and Woodhouse's "dual-aspect energy talk." In an effort to defend energy talk about consciousness, Woodhouse substitutes epistemology for ontology, and leaves the crucial issue unresolved.
Unless Woodhouse is willing to ground his double-aspect epistemology in an ontological complementarity that distinguishes mind from matter, but does not separate them, he runs the risk of unwittingly committing "reductionism all over again"-despite his best intentions. In fact, Woodhouse comes very close to proposing just the kind of complementary ontology his model needs: "Consciousness isn't just a different level or wave form of vibrating energy; it is the 'inside' of energy-the pole of interiority perfectly understandable to every person who has had a subjective experience of any kind" (emphasis added). This is ontology talk, not epistemology talk. Woodhouse's error is to claim that the distinction between "inside" (consciousness) and "outside" (energy) is merely a matter of perspective.
In order to defend his thesis of "energy monism," Woodhouse seems to want it both ways. On the one hand, he talks of consciousness and energy being ontologically identical ("each is the other"); on the other, he makes a distinction between consciousness and energy: "Energy is the 'outside' of consciousness and consciousness is the 'inside' of energy." He attempts to avoid the looming contradiction of consciousness and energy being both "identical yet distinct" by claiming that the identity is ontological while the distinction is epistemological. But the distinction cannot be merely epistemological-otherwise, as already pointed out, any ontology would do. And this is clearly not Woodhouse's position. Energy monism, as proposed by Woodhouse, is an ontological claim. Woodhouse admits as much when he calls energy monism "a fifth alternative" to the ontologies of dualism, idealism, and materialism (and epiphenomenalism [sic]) which he previously dismissed.
Furthermore, Woodhouse's "inside" and "outside" are not merely epistemological if he means them to be synonyms for "subjectivity" and "objectivity" respectively. Although subjectivity and objectivity are epistemological perspectives, they are not only that. Subjectivity and objectivity can have epistemological meaning only if they refer to, and are implications of, a primary ontological distinction-between what Sartre (1956) called the "for-itself" and the "in-itself," between that which feels and that which is felt. Despite his claims to the contrary, Woodhouse's distinction between "inside" and "outside" is ontological, not merely epistemological. And given an ontological distinction between consciousness and energy, it is illegitimate to conclude from his double-aspect epistemology the identity claim that "consciousness is energy." Woodhouse's consciousness-energy confusion, it seems to me, is a result of (1) a failure to distinguish between non-identity and separation, and (2) a desire to avoid the pitfalls of Cartesian dualism. The first is a mistake, the second is not-but he conflates the two. He seems to think that if he allows for a non-identity between consciousness and energy, this is tantamount to their being ontologically separate (as in Cartesian dualism). But (1) does not entail (2): ontological distinction does not entail separation. It is possible to distinguish two phenomena (such as the form and substance of a thing), yet recognize them as inseparable elements of a unity. Unity does not mean identity, and distinction does not mean separation. (I will return to this point shortly.) This muddle between epistemology and ontology is my major criticism of Woodhouse's position; he could resolve it if he had the courage or foresight to follow through on his epistemological convictions, and to recognize that his position is compatible with (and would be grounded by) an ontological complementarity of consciousness and energy.
The ontology implicit (though explicitly denied) in Woodhouse's double-aspect model-where consciousness ("inside") and energy ("outside") are actual throughout the universe-is none other than panpsychism, or what has been variously called panexperientialism (Griffin, 1997) and radical materialism (de Quincey, 1997). It is the fourth alternative to the major ontologies of dualism, idealism, and materialism, and has a very long lineage in the Western philosophical tradition-going all the way back to Aristotle and beyond to the Presocratics. Woodhouse does not acknowledge any of this lineage, as if his double-aspect model were a novel contribution to the mind-matter debate. Besides Aristotle's hylemorphism, he could have referred to Leibniz's monads, Whitehead's "actual occasions," and de Chardin's "tangential energy" and the "within" as precursors to the distinction he makes between "inside" and "outside." This oversight weakens the presentation of his case. Of course, to have introduced any or all of these mind-body theories would have made Woodhouse's ontological omission all the more noticeable.
One other weakness in Woodhouse's article is his reference to the Perennial Philosophy and the Great Chain of Being as supportive of energy talk that unites spiritual and physical realities. "The non-dual Source of some spiritual traditions . . . is said to express itself energetically (outwardly) on different levels in the Great Chain of Being (matter being the densest form of energy) . . ." Woodhouse is here referring to the many variations of idealist emanationism, where spirit is said to pour itself forth through a sequence of ontological levels and condense into matter. But just as Woodhouse's energy monism, I would say, ultimately (if unwittingly) entails physicalist reductionism, my criticism of emanationism is that it, too, ultimately "physicalizes" spirit-which no idealist worth his or her salt would want to claim. Energy monism runs the same risk of "physicalizing" spirit as emanationism. So I see no support for Woodhouse's position as an alternative to dualism or materialism coming from the Perennial Philosophy. Both run the risk of covert dualism or covert materialism.
Dossey's critique of Woodhouse's energy monism and energy talk-particularly his caution not to assume that the "nonlocal" phenomena of quantum physics are related to the "nonlocal" phenomena of consciousness and distant healing by anything other than a commonality of terminology-is sound. The caution is wise. However, his critique of Woodhouse's "inside" and "outside" fails to address Woodhouse's confusion of epistemology and ontology. Had Dossey seen that Woodhouse's intent was to confine the "inside/outside" distinction to epistemology, he might not have couched his critique in ontological terms. Dossey says, "By emphasizing inside and outside, interior and exterior, we merely create new boundaries and interfaces that require their own explanations." The "boundaries and interfaces" Dossey is talking about are ontological, not epistemological. And to this extent, Dossey's critique misses the fact that Woodhouse is explicitly engaged in epistemology talk. On the other hand, Dossey is correct to assume that Woodhouse's epistemological distinction between "inside" and "outside" necessarily implies an ontological distinction-between "inside" (consciousness) and "outside" (energy).
Dossey's criticism of Woodhouse's energy monism thus rests on an ontological objection: Even if we do not yet have any idea of how to talk ontologically about consciousness, we at least know that (despite Woodhouse's contrary claim) consciousness and energy are not ontologically identical. There is an ontological distinction between "inside/consciousness" and "outside/energy." Thus, Dossey concludes, energy talk (which is ontological talk) is inappropriate for consciousness. On this, I agree with Dossey, and disagree with Woodhouse. However, Dossey goes on to take issue with Woodhouse's "inside/outside" distinction as a solution to the mind-body relation. If taken literally, Dossey's criticism is valid: "Instead of grappling with the nature of the connection between energy and consciousness, we are now obliged to clarify the nature of the boundary between 'inside' and 'outside' . . ." But I suspect that Woodhouse uses the spatial concepts "inside/outside" metaphorically because, like the rest of us, he finds our language short on nonphysical metaphors (though, as we will see, nonspatial metaphors are available).
It may be, of course, that Woodhouse has not carefully thought through the implications of this spatial metaphor, and how it leaves him open to just the sort of critique that Dossey levels. Dossey, I presume, is as much concerned with Woodhouse's claim that "consciousness is energy," meaning it is the "inside" of energy, as he is about the difficulties in taking the spatial metaphor of "inside/outside" literally. On the first point, I share Dossey's concern. I am less concerned about the second. As long as we remember that "interiority" and "exteriority" are metaphors, I believe they can be very useful ways of pointing toward a crucial distinction between consciousness and energy.
The metaphor becomes a problem if we slip into thinking that it points to a literal distinction between two kinds of "stuff" (as Descartes did), or indeed to a distinction revealing two aspects of a single kind of "stuff." This latter slip seems to be precisely the mistake that Woodhouse makes with his energy monism. By claiming that consciousness is energy, Woodhouse in effect-despite his best intentions to the contrary-succeeds in equating (and this means "reducing") consciousness to physical "stuff." His mistake-and one that Dossey may be buying into-is to use "stuff-talk" for consciousness. It is a logical error to conclude from (1) there is only one kind of fundamental "stuff" (call it energy), and (2) this "stuff" has an interiority (call it consciousness), that (3) the interiority is also composed of that same "stuff," i.e., that consciousness is energy. It could be that "interiority/consciousness" is not "stuff" but something ontologically distinct-for example, feeling or process-something which is intrinsic to, and therefore inseparable from, the "stuff." It could be that the world is made up of stuff that feels, where there is an ontological distinction between the feeling (subjectivity, experience, consciousness) and what is felt (objectivity, matter-energy).
Dossey's rejection of the "inside/outside" metaphor seems to presume (à la Woodhouse) that the "inside" is the interior of some "stuff" and is itself that "stuff"-in this case, energy-stuff. But that is not the position of panpsychist and process philosophers from Leibniz down through Bergson, James, and Whitehead, to Hartshorne and Griffin. If we make the switch from a "stuff-oriented" to a process-oriented ontology, then the kind of distinction between consciousness and energy dimly implicit in Woodhouse's model avoids the kind of criticism that Dossey levels at the "inside/outside" metaphor. Process philosophers prefer to use "time-talk" over "space-talk." Instead of talking about consciousness in terms of "insides," they talk about "moments of experience" or "duration." Thus, if we view the relationship between consciousness and energy in terms of temporal processes rather than spatial stuff, we can arrive at an ontology similar to Whitehead's, in which the relationship between consciousness and energy is understood as temporal. It is the relationship between subjectivity and objectivity, where the subject is the present state of an experiential process, and the object is its prior state. Substitute "present" for "interior" and "past" or "prior" for "exterior" and we have a process ontology that avoids the "boundary" difficulties raised by Dossey. (There is no boundary between past and present-the one flows into the other; the present incorporates the past.) From the perspective of panpsychism or radical materialism, consciousness and energy, mind and matter, subject and object always go together. All matter-energy is intrinsically sentient and experiential. Sentience-consciousness and matter-energy are inseparable, but nevertheless distinct. On this view, consciousness is the process of matter-energy informing itself.
Although our language is biased toward physics-energy talk, full of mechanistic metaphors, this is clearly not the whole story. The vernacular of the marketplace, as well as the language of science itself, is also rich with non-mechanistic metaphors, metaphors that flow directly from experience itself. Ironically, not only do we apply these consciousness metaphors to the mind and mental events, but also to the world of matter in our attempts to understand its deeper complexities and dynamics. For example, systems theory and evolutionary biology-even at the reductionist level of molecular genetics-are replete with words such as "codes," "information," "meaning," "self-organizing," and the p-word: "purpose." So we are not limited to mechanistic metaphors when describing either the world of matter or the world of mind. But-and this is the important point-because of our bias toward visuo-muscular images, we tend to forget that metaphors of the mind are sui generis, and, because of our scientific and philosophical bias in favour of mechanism, we often attempt to reduce metaphors of the mind to metaphors of matter. My proposal for consciousness talk is this: Recognize the limitations of mechanistic metaphors, and the inappropriateness of literal energy talk, when discussing consciousness. Instead, acknowledge the richness and appropriateness of metaphors of meaning when talking about the mind. In short: Drop mechanistic metaphors (energy talk) and take up meaning metaphors (consciousness talk) when talking about consciousness.
One of the thorniest issues in "energy" and "consciousness" work is the tendency to confuse the two. Consciousness does not equal energy, yet the two are inseparable. Consciousness is the "witness" which experiences the flow of energy, but it is not the flow of energy. We might say consciousness is the felt interiority of energy/matter - but it is not energy.
If we say that consciousness is a form of energy, then we have two options: either it is a physical form of energy (even if a very subtle energy), or it is not a physical form of energy. If we say that consciousness is a form of energy that is physical, then we are reducing consciousness (and spirit) to physics. And few of us, unless we are materialists, want to do that. If we say that consciousness is a form of energy that is not physical, then we need to say in what way psychic energy differs from physical energy. If we cannot explain what we mean by "psychic energy" and how it differs from physical energy, then we should ask ourselves why we use the term "energy" at all. Our third alternative is to say that consciousness is not a form of energy (physical or nonphysical). This is not to imply that consciousness has nothing to do with energy. In fact, the position I emphasize in my graduate classes is that consciousness and energy always go together. They cannot ever be separated. But this is not to say they are not distinct. They are distinct-energy is energy, consciousness is consciousness-but they are inseparable (like two sides of a coin, or, better, like the shape and substance of a tennis ball. You can't separate the shape from the substance of the ball, but shape and substance are definitely distinct).
So, for example, if someone has a kundalini experience, they may feel a rush of energy up the chakra system . . . but to say that the energy flow is consciousness is to mistake the object (energy flow) for the subject, for what perceives (consciousness) the object. Note the two importantly distinct words in the phrase "feel the rush of energy . . . " On the one hand there is the "feeling" (or the "feeler"), on the other, there is what is being felt or experienced (the energy). Even our way of talking about it reveals that we detect a distinction between feeling (consciousness) and what we feel (energy). Yes, the two go together, but they are not the same. Unity, or unification, or holism, does not equal identity. To say that one aspect of reality (say, consciousness) cannot be separated from another aspect of reality (say, matter-energy) is not to say both aspects of reality (consciousness and matter-energy) are identical.
Consciousness is neither identical to energy (monism) nor a separate substance or energy in addition to physical matter or energy (dualism); it is the "interiority," the what-it-feels-like-from-within, the subjectivity that is intrinsic to the reality of all matter and energy (panpsychism or radical materialism). If you take a moment to pay attention to what's going on in your own body right now, you'll see-or feel-what I mean: The physical matter of your body, including the flow of whatever energies are pulsing through you, is the "stuff" of your organism. But there is also a part of you that is aware of, or feels, the pumping of your blood (and other energy streams). That aspect of you that feels the matter-energy in your body is your consciousness. We could express it this way: "Consciousness is the process of matter-energy informing itself." Consciousness is the ability that matter-energy has to feel, to know, and to direct itself. The universe could be (and probably is) full of energy flows, vortices, and vibrations, but without consciousness, all this activity would be completely unfelt and unknown. Only because there is consciousness can the flow of energy be felt, known, and purposefully directed.
Over the past three decades, philosophy of science has grown increasingly "local." Concerns have switched from general features of scientific practice to concepts, issues, and puzzles specific to particular disciplines. Philosophy of neuroscience is a natural result. This emerging area was also spurred by remarkable recent growth in the neurosciences. Cognitive and computational neuroscience continues to encroach upon issues traditionally addressed within the humanities, including the nature of consciousness, action, knowledge, and normativity. Empirical discoveries about brain structure and function suggest ways that "naturalistic" programs might develop in detail, beyond the abstract philosophical considerations in their favour.
The literature distinguishes "philosophy of neuroscience" and "neurophilosophy." The former concerns foundational issues within the neurosciences. The latter concerns application of neuroscientific concepts to traditional philosophical questions. Exploring various concepts of representation employed in neuroscientific theories is an example of the former. Examining implications of neurological syndromes for the concept of a unified self is an example of the latter. In this entry, we will assume this distinction and discuss examples of both.
Contrary to some opinion, actual neuroscientific discoveries have exerted little influence on the details of materialist philosophies of mind. The "neuroscientific milieu" of the past four decades has made it harder for philosophers to adopt dualism. But even the "type-type" or "central state" identity theories that rose to brief prominence in the late 1950s drew upon few actual details of the emerging neuroscience. Recall the favourite early example of a psychoneural identity claim: pain is identical to C-fibre firing. The "C fibres" turned out to be related to only a single aspect of pain transmission. Early identity theorists did not emphasize psychoneural identity hypotheses, admitting that their "neuro" terms were placeholders for concepts from a future neuroscience. Their arguments and motivations were philosophical, even if the ultimate justification of the program was held to be empirical.
The apology for this lacuna by early identity theorists was that neuroscience at that time was too nascent to provide any plausible identities. But potential identities were afoot. David Hubel and Torsten Wiesel's (1962) electrophysiological demonstrations of the receptive field properties of visual neurons had been reported with great fanfare. Using their techniques, neurophysiologists began discovering neurons throughout visual cortex responsive to increasingly abstract features of visual stimuli: from edges to motion direction to colours to properties of faces and hands. More notably, Donald Hebb had published The Organization of Behaviour (1949) a decade earlier. Therein he offered detailed explanations of psychological phenomena in terms of known neural mechanisms and anatomical circuits. His psychological explananda included features of perception, learning, memory, and even emotional disorders. He offered these explanations as potential identities. One philosopher who did take note of some available neuroscientific detail was Barbara Von Eckardt-Klein (1975). She discussed the identity theory with respect to sensations of touch and pressure, and incorporated then-current hypotheses about neural coding of sensation modality, intensity, duration, and location as theorized by Mountcastle, Libet, and Jasper. Yet she was a glaring exception. Largely, available neuroscience at the time was ignored by both philosophical friends and foes of early identity theories.
Philosophical indifference to neuroscientific detail became "principled" with the rise and prominence of functionalism in the 1970s. The functionalists' favourite argument was based on multiple realizability: a given mental state or event can be realized in a wide variety of physical types (Putnam, 1967 and Fodor, 1974). So a detailed understanding of one type of realizing physical system (e.g., brains) will not shed light on the fundamental nature of mind. A psychological state-type is autonomous from any single type of its possible realizing physical mechanisms. Instead of neuroscience, scientifically-minded philosophers influenced by functionalism sought evidence and inspiration from cognitive psychology and "program-writing" artificial intelligence. These disciplines abstract away from underlying physical mechanisms and emphasize the "information-bearing" properties and capacities of representations (Haugeland, 1985). At this same time neuroscience was delving directly into cognition, especially learning and memory. For example, Eric Kandel (1976) proposed presynaptic mechanisms governing transmitter release rates as a cell-biological explanation of simple forms of associative learning. With Robert Hawkins (1984) he demonstrated how cognitivist aspects of associative learning (e.g., blocking, second-order conditioning, overshadowing) could be explained cell-biologically by sequences and combinations of these basic forms implemented in higher neural anatomies. Working on the postsynaptic side, neuroscientists began unravelling the cellular mechanisms of long-term potentiation (LTP). Physiological psychologists quickly noted its explanatory potential for various forms of learning and memory. Yet few "materialist" philosophers paid any attention. Why should they? Most were convinced functionalists, who believed that the "engineering level" details might be important to the clinician, but were irrelevant to the theorist of mind.
A major turning point in philosophers' interest in neuroscience came with the publication of Patricia Churchland's Neurophilosophy (1986). The Churchlands (Pat and husband Paul) were already notorious for advocating eliminative materialism. In her (1986) book, Churchland distilled eliminativist arguments of the past decade, unified the pieces of the philosophy of science underlying them, and sandwiched the philosophy between a five-chapter introduction to neuroscience and a 70-page chapter on three then-current theories of brain function. She was unapologetic about her intent. She was introducing philosophy of science to neuroscientists and neuroscience to philosophers. Nothing could be more obvious, she insisted, than the relevance of empirical facts about how the brain works to concerns in the philosophy of mind. Her term for this interdisciplinary method was "co-evolution" (borrowed from biology). This method seeks resources and ideas from anywhere on the theory hierarchy above or below the question at issue. Standing on the shoulders of philosophers like Quine and Sellars, Churchland insisted that specifying some point where neuroscience ends and philosophy of science begins is hopeless because the boundaries are poorly defined. Neurophilosophers would carefully choose resources from both disciplines as they saw fit.
Three themes predominate in Churchland's philosophical discussion: developing an alternative to the logical empiricist theory of intertheoretic reduction; responding to property-dualistic arguments based on subjectivity and sensory qualia; and responding to anti-reductionist multiple realizability arguments. These projects have remained central to neurophilosophy over the past decade. John Bickle (1998) extends the principal insight of Clifford Hooker's (1981) post-empiricist theory of intertheoretic reduction. He quantifies key notions using a model-theoretic account of theory structure adapted from the structuralist program in philosophy of science. He also makes explicit the form of argument scientists employ to draw ontological conclusions (cross-theoretic identities, revisions, or eliminations) based on the nature of the intertheoretic reduction relations obtaining in specific cases. For example, physicists concluded that visible light, a theoretical posit of optics, is electromagnetic radiation within specified wavelengths, a theoretical posit of electromagnetism: a cross-theoretic ontological identity. In another case, however, chemists concluded that phlogiston did not exist: an elimination of a kind from our scientific ontology. Bickle explicates the nature of the reduction relation in a specific case using a semi-formal account of ‘intertheoretic approximation’ inspired by structuralist results. Paul Churchland (1996) has carried on the attack on property-dualistic arguments for the irreducibility of conscious experience and sensory qualia. He argues that acquiring some knowledge of existing sensory neuroscience increases one's ability to ‘imagine’ or ‘conceive of’ a comprehensive neurobiological explanation of consciousness. He defends this conclusion using a thought-experiment based on the history of optics and electromagnetism. Finally, the literature critical of the multiple realizability argument has begun to flourish.
Although the multiple realizability argument remains influential among nonreductive physicalists, it no longer commands the universal acceptance it once did. Replies to the multiple realizability argument based on neuroscientific details have appeared. For example, William Bechtel and Jennifer Mundale (1997, in press) argue that neuroscientists use psychological criteria in brain mapping studies. This fact undercuts the likelihood that psychological kinds are multiply realized.
Eliminative materialism (EM) is the conjunction of two claims. First, our common sense ‘belief-desire’ conception of mental events and processes, our ‘folk psychology,’ is a false and misleading account of the causes of human behaviour. Second, like other false conceptual frameworks from both folk theory and the history of science, it will be replaced by, rather than smoothly reduced or incorporated into, a future neuroscience. Folk psychology is the collection of common homilies about the causes of human behaviour. You ask me why Marica is not accompanying me this evening. I reply that her grant deadline is looming. You nod sympathetically. You understand my explanation because you share with me a generalization that relates beliefs about looming deadlines, desires about meeting professionally and financially significant ones, and ensuing free-time behaviour. It is the collection of these kinds of homilies that EM claims to be flawed beyond significant revision. Although this example involves only beliefs and desires, folk psychology contains an extensive repertoire of propositional attitudes in its explanatory nexus: hopes, intentions, fears, imaginings, and more. To the extent that scientific psychology (and neuroscience) retains folk concepts, EM applies to it as well.
EM is physicalist in the classical sense, postulating some future brain science as the ultimately correct account of (human) behaviour. It is eliminative in predicting the future removal of folk psychological kinds from our post-neuroscientific ontology. EM proponents often employ scientific analogies. Oxidative reactions as characterized within elemental chemistry bear no resemblance to phlogiston release. Even the "direction" of the two processes differs: oxygen is gained when an object burns (or rusts), while phlogiston was said to be lost. The result of this theoretical change was the elimination of phlogiston from our scientific ontology. There is no such thing. For the same reasons, according to EM, continuing development in neuroscience will reveal that there are no such things as beliefs and desires as characterized by common sense.
Here we focus only on the way that neuroscientific results have shaped the arguments for EM. Surprisingly, only one argument has been strongly influenced. (Most arguments for EM stress the failures of folk psychology as an explanatory theory of behaviour.) This argument is based on a development in cognitive and computational neuroscience that might provide a genuine alternative to the representations and computations implicit in folk psychological generalizations. Many eliminative materialists assume that folk psychology is committed to propositional representations and computations over their contents that mimic logical inferences. Even though discovering such an alternative has been an eliminativist goal for some time, neuroscience only began delivering on this goal over the past fifteen years. The key feature of this development is the interpretation of synaptic events and neural activity patterns in biological neural networks as points in, and trajectories through, vector spaces. This argument for EM hinges on the differences between these notions of cognitive representation and the propositional attitudes of folk psychology (Churchland, 1987). However, this argument will be opaque to those with no background in contemporary cognitive and computational neuroscience, so we need to present a few scientific details. With these details in place, we will return to this argument for EM.
At one level of analysis the basic computational element of a neural network (biological or artificial) is the neuron. This analysis treats neurons as simple computational devices, transforming inputs into outputs. Both neuronal inputs and outputs reflect biological variables. For the remainder of this discussion, we will assume that neuronal inputs are frequencies of action potentials (neuronal "spikes") in the axons whose terminal branches synapse onto the neuron in question. Neuronal output is the frequency of action potentials in the axon of the neuron in question. A neuron computes its total input (usually treated mathematically as the sum of the products of the signal strength along each input line times the synaptic weight on that line). It then computes a new activation state based on its total input and current activation state, and a new output state based on its new activation value. The neuron's output state is transmitted as a signal strength to whatever neurons its axon synapses on. The output state reflects systematically the neuron's new activation state.
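The neuron-as-computational-unit picture just described can be sketched in a few lines of Python. The function name, the particular input and weight values, and the logistic squashing function used to produce the new activation state are illustrative assumptions; the text itself specifies only the weighted summation of inputs.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Map input firing rates to an output rate via a weighted sum."""
    # Total input: sum of (signal strength x synaptic weight) on each line.
    total_input = sum(x * w for x, w in zip(inputs, weights)) + bias
    # New activation state from total input; a logistic squash is assumed here.
    activation = 1.0 / (1.0 + math.exp(-total_input))
    return activation

# Three input lines with invented firing rates and synaptic weights.
rate = neuron_output([0.9, 0.2, 0.7], [1.5, -0.8, 0.4])
```

The output is itself a rate-like value, suitable to serve as an input signal to the next neuron downstream.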
Analysed at this level, both biological and artificial neural networks are interpreted naturally as vector-to-vector transformers. The input vector consists of values reflecting activity patterns in axons synapsing on the network's neurons from outside (e.g., from sensory transducers or other neural networks). The output vector consists of values reflecting the activity patterns generated in the network's neurons that project beyond the net (e.g., to motor effectors or other neural networks). Given that neurons' activity depends partly upon their total input, and total input depends partly on synaptic weights (e.g., presynaptic neurotransmitter release rate, number and efficacy of postsynaptic receptors, availability of enzymes in the synaptic cleft), the capacity of biological networks to change their synaptic weights makes them plastic vector-to-vector transformers. In principle, a biological network with plastic synapses can come to implement any vector-to-vector transformation that its composition permits (number of input units, output units, processing layers, recurrence, cross-connections, etc.).
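The vector-to-vector interpretation, and the sense in which synaptic change makes the transformation plastic, can be illustrated with a minimal sketch; every weight and input value below is invented for illustration.

```python
def transform(input_vector, weight_matrix):
    """One layer of a network: each output unit sums its weighted inputs."""
    return [sum(x * w for x, w in zip(input_vector, row))
            for row in weight_matrix]

# Two output units, each receiving the same two input lines.
weights = [[0.5, -1.0],
           [2.0,  0.5]]

out1 = transform([1.0, 1.0], weights)   # the network's current transformation
weights[0][0] = 1.5                     # a single synaptic weight changes...
out2 = transform([1.0, 1.0], weights)   # ...and the same input now maps to a new output vector
```

The same input vector yields a different output vector after the weight change, which is just what "plastic vector-to-vector transformer" amounts to on this interpretation.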
The anatomical organization of the cerebellum provides a clear example of a network amenable to this computational interpretation. The cerebellum is the bulbous convoluted structure dorsal to the brainstem. A variety of studies (behavioural, neuropsychological, single-cell electrophysiological) implicate this structure in motor integration and fine motor coordination. Mossy fibres (axons) from neurons outside the cerebellum synapse on cerebellar granule cells, which in turn project to parallel fibres. Activity patterns across the collection of mossy fibres (frequency of action potentials per time unit in each fibre projecting into the cerebellum) provide values for the input vector. Parallel fibres make multiple synapses on the dendritic trees and cell bodies of cerebellar Purkinje neurons. Each Purkinje neuron "sums" its post-synaptic potentials (PSPs) and emits a train of action potentials down its axon based (partly) on its total input and previous activation state. Purkinje axons project outside the cerebellum. The network's output vector is thus the ordered set of values representing the pattern of activity generated in each Purkinje axon. Changes to the efficacy of individual synapses between the parallel fibres and the Purkinje neurons alter the resulting PSPs in Purkinje axons, generating different axonal spiking frequencies. Computationally, this amounts to a different output vector to the same input activity pattern (plasticity).
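A toy version of this pathway, viewed purely as two successive stages of vector transformation (mossy fibres to the granule/parallel-fibre layer, then parallel fibres to Purkinje cells). The vector sizes, every weight value, and the simple linear summation are illustrative assumptions, not anatomical data.

```python
def layer(vec, weights):
    """Linear stage: each downstream unit sums its weighted inputs."""
    return [sum(x * w for x, w in zip(vec, row)) for row in weights]

mossy = [1.0, 0.5, 0.0]                  # input vector: mossy-fibre activity rates

to_parallel = [[0.2, 0.8, 0.1],          # invented mossy->granule weights
               [0.5, 0.1, 0.9]]
to_purkinje = [[1.0, -0.5],              # invented parallel-fibre->Purkinje weights
               [0.3,  0.7]]

parallel = layer(mossy, to_parallel)     # activity pattern along the parallel fibres
purkinje = layer(parallel, to_purkinje)  # the network's output vector (Purkinje axons)
```

Altering any entry of `to_purkinje` mimics a change in synaptic efficacy between a parallel fibre and a Purkinje neuron: the same mossy-fibre input then yields a different output vector.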
This interpretation puts the useful mathematical resources of dynamical systems into the hands of computational neuroscientists. Vector spaces are one such resource. For example, learning can be characterized fruitfully in terms of changes in synaptic weights in the network and subsequent reduction of error in network output. (This approach goes back to Hebb, 1949, although not couched within the vector-space interpretation that follows.) A useful representation of this account is on a synaptic weight-error space, where one dimension represents the global error in the network's output to a given task, and all other dimensions represent the weight values of individual synapses in the network. Points in this multidimensional state space represent the global performance error correlated with each possible collection of synaptic weights in the network. As the weights change with each performance (in accordance with a biologically-implemented learning algorithm), the global error of network performance continually decreases. Learning is represented as synaptic weight changes correlated with a descent along the error dimension in the space (Churchland and Sejnowski, 1992). Representations (concepts) can be portrayed as partitions in multidimensional vector spaces. An example is a neuron activation vector space. A graph of such a space contains one dimension for the activation value of each neuron in the network (or some subset). A point in this space represents one possible pattern of activity in all neurons in the network. Activity patterns generated by input vectors that the network has learned to group together will cluster around a (hyper-) point or subvolume in the activity vector space. Any input pattern sufficiently similar to this group will produce an activity pattern lying in geometrical proximity to this point or subvolume.
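Descent along the error dimension of a synaptic weight-error space can be sketched with a single weight, so that the space has just two dimensions (one weight axis, one error axis). The target output, learning rate, and squared-error measure are illustrative assumptions.

```python
def error(weight, inp=1.0, target=0.8):
    """Global performance error for one (invented) input-target pair."""
    return (weight * inp - target) ** 2

w, rate = 0.0, 0.1
errors = [error(w)]
for _ in range(20):
    grad = 2 * (w - 0.8)   # d(error)/d(weight) for inp = 1.0
    w -= rate * grad       # move the weight point downhill
    errors.append(error(w))
```

Each pass through the loop is one "performance": the weight point moves, and the corresponding point in weight-error space descends along the error dimension.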
Paul Churchland (1989) has argued that this interpretation of network activity provides a quantitative, neurally-inspired basis for prototype theories of concepts developed recently in cognitive psychology.
Using this theoretical development, Churchland (1987) has offered a novel argument for EM. According to this approach, activity vectors are the central kind of representation and vector-to-vector transformations are the central kind of computation in the brain. This contrasts sharply with the propositional representations and logical/semantic computations postulated by folk psychology. Vectorial content is unfamiliar and alien to common sense. This cross-theoretic difference is at least as great as that between oxidative and phlogiston concepts, or kinetic-corpuscular and caloric fluid heat concepts. Phlogiston and caloric fluid are two "parade" examples of kinds eliminated from our scientific ontology due to the nature of the intertheoretic relation obtaining between the theories with which they are affiliated and the theories that replaced these. The structural and dynamic differences between the folk psychological and emerging cognitive neuroscientific kinds suggest that the theories affiliated with the latter will also correct significantly the theory affiliated with the former. This is the key premise of an eliminativist argument based on predicted intertheoretic relations. And these intertheoretic contrasts are no longer just an eliminativist's goal. Computational and cognitive neuroscience has begun to deliver an alternative kinematics for cognition, one that provides no structural analogue for the propositional attitudes.
Certainly the replacement of propositional contents by vectorial alternatives implies significant correction to folk psychology. But does it justify EM? Even if this central feature of folk-psychological posits finds no analogue in one hot theoretical development in recent cognitive and computational neuroscience, there might be other aspects of cognition that folk psychology gets right. Within neurophilosophy, concluding that a cross-theoretic identity claim is true (e.g., folk psychological state F is identical to neural state N) or that an eliminativist claim is true (there is no such thing as folk psychological state F) depends on the nature of the intertheoretic reduction obtaining between the theories affiliated with the posits in question. But the underlying account of intertheoretic reduction recognizes a spectrum of possible reductions, ranging from relatively "smooth" through "significantly revisionary" to "extremely bumpy." Might the reduction of folk psychology to a "vectorial" neurobiology occupy the middle ground between "smooth" and "bumpy" intertheoretic reductions, and hence suggest a "revisionary" conclusion? The reduction of classical equilibrium thermodynamics to statistical mechanics to microphysics provides a potential analogy. John Bickle argues on empirical grounds that such an outcome is likely. He specifies conditions on "revisionary" reductions from historical examples and suggests that these conditions obtain between folk psychology and cognitive neuroscience as the latter develops. In particular, folk psychology appears to have gotten right the grossly-specified functional profile of many cognitive states, especially those closely related to sensory input and behavioural output. It also appears to get right the "intentionality" of many cognitive states - the object that the state is of or about - even though cognitive neuroscience eschews its implicit linguistic explanation of this feature.
Revisionary physicalism predicts significant conceptual change to folk psychological concepts, but denies total elimination of the caloric fluid-phlogiston variety.
The philosophy of science is another area that vector space interpretations of neural network activity patterns have impacted. In the Introduction to his (1989) book, Paul Churchland asserts that it will soon be impossible to do serious work in the philosophy of science without drawing on empirical work in the brain and behavioural sciences. To justify this claim, he suggests neurocomputational reformulations of key concepts from this area. At its heart is a neurocomputational account of the structure of scientific theories. Problems with the orthodox "sets-of-sentences" view have been known for more than three decades. Churchland advocates replacing the orthodox view with one inspired by the "vectorial" interpretation of neural network activity. Representations implemented in neural networks (as discussed above) compose a system that corresponds to important distinctions in the external environment, that are not explicitly represented as such within the input corpus, and that allow the trained network to respond to inputs in a fashion that continually reduces error. These are exactly the functions of theories. Churchland is bold in his assertion: an individual's theory-of-the-world is a specific point in that individual's error-synaptic weight vector space. It is a configuration of synaptic weights that partitions the individual's activation vector space into subdivisions that reduce future error messages to both familiar and novel inputs.
This reformulation invites an objection, however. Churchland boasts that his theory of theories is preferable to existing alternatives to the orthodox "sets-of-sentences" account - for example, the semantic view (Suppe, 1974; van Fraassen, 1980) - because his is closer to the "buzzing brains" that use theories. But as Bickle notes, neurocomputational models based on the mathematical resources described above are a long way into the realm of abstracta. Even now, they remain little more than novel (and suggestive) applications of the mathematics of quasi-linear dynamical systems to simplified schemata of brain circuitries. Neurophilosophers owe some account of identifications across ontological categories before the philosophy of science community will accept the claim that theories are points in high-dimensional state spaces implemented in biological neural networks. (There is an important methodological assumption lurking in this objection.)
Churchland's neurocomputational reformulation of scientific and epistemological concepts builds on this account of theories. He sketches neurocomputational accounts of the theory-ladenness of perception, the nature of conceptual unification, the virtues of theoretical simplicity, the nature of Kuhnian paradigms, the kinematics of conceptual change, the character of abduction, the nature of explanation, and even moral knowledge and epistemological normativity. Conceptual redeployment, for example, is the activation of an already-existing prototype representation - a point or region of a partition of a high-dimensional vector space in a trained neural network - by a novel type of input pattern. Obviously, we can't here do justice to Churchland's various attempts at reformulation. We urge the intrigued reader to examine his suggestions in their original form. But a word about philosophical methodology is in order. Churchland is not attempting "conceptual analysis" in anything resembling its traditional philosophical sense and neither, typically, are neurophilosophers. (This is why a discussion of neurophilosophical reformulation fits with a discussion of EM.) There are philosophers who take the discipline's ideal to be a relatively simple set of necessary and sufficient conditions, expressed in non-technical natural language, governing the application of important concepts (like justice, knowledge, theory, or explanation). These analyses should square, to the extent possible, with pre-theoretical usage. Ideally, they should preserve synonymy. Other philosophers view this ideal as sterile, misguided, and perhaps deeply mistaken about the underlying structure of human knowledge. Neurophilosophers tend to reside in the latter camp. Those who dislike philosophical speculation about the promise and potential of nascent science in an effort to reformulate ("reform-ulate") traditional philosophical concepts have probably already discovered that neurophilosophy is not for them.
But the charge that neurocomputational reformulations of the sort Churchland attempts are "philosophically uninteresting" or "irrelevant" because they fail to provide "adequate analyses" of theory, explanation, and the like will be ignored by many contemporary philosophers, as well as their cognitive-scientific and neuroscientific friends. Before we leave the neurophilosophical applications of this theoretical development from recent cognitive/computational neuroscience, one more point of scientific detail is in order. The popularity of treating the neuron as the basic computational unit among neural modelers, as opposed to cognitive modelers, is declining rapidly. Compartmental modelling enables computational neuroscientists to mimic activity in and interactions between patches of neuronal membrane. This allows modelers to control and manipulate a variety of subcellular factors that determine action potentials per time unit (including the topology of membrane structure in individual neurons, variations in ion channels across membrane patches, and field properties of post-synaptic potentials depending on the location of the synapse on the dendrite or soma). Modelers can "custom-build" the neurons in their target circuitry without sacrificing the ability to study circuit properties of networks. For these reasons, few serious computational neuroscientists continue to work at a level that treats neurons as unstructured computational devices. But the above interpretative points still stand. With compartmental modelling, not only are simulated neural networks interpretable as vector-to-vector transformers. The neurons composing them are, too.
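A crude flavour of compartmental modelling can be given with two passive patches of membrane, each with its own leak conductance, coupled by an axial conductance and integrated with the forward-Euler method. All constants are invented for illustration and are not physiological values.

```python
def simulate(steps=5000, dt=0.01):
    """Two coupled passive membrane compartments; current injected into one."""
    v1, v2 = 0.0, 0.0        # voltages of the two compartments
    leak1, leak2 = 0.1, 0.3  # differing membrane (leak) conductances per patch
    g_axial = 0.2            # axial conductance coupling the compartments
    i_inject = 1.0           # current injected into compartment 1 only
    for _ in range(steps):
        dv1 = i_inject - leak1 * v1 - g_axial * (v1 - v2)
        dv2 = -leak2 * v2 - g_axial * (v2 - v1)
        v1 += dt * dv1
        v2 += dt * dv2
    return v1, v2

v1, v2 = simulate()  # current spreads to, and is attenuated in, the second patch
```

Even this two-patch toy shows why the approach is attractive: giving each patch its own parameters lets a modeler shape how signals attenuate as they spread through a neuron, which a single unstructured unit cannot capture.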
The philosophy of science and scientific epistemology are not the only areas where philosophers have lately urged the relevance of neuroscientific discoveries. Kathleen Akins argues that a "traditional" view of the senses underlies the variety of sophisticated "naturalistic" programs about intentionality. Current neuroscientific understanding of the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are "veridical" in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure of the relevant relations between the external properties the receptors are sensitive to is preserved in the structure of the relations between the resulting sensory states. And (3) the sensory system reconstructs faithfully, without fictive additions or embellishments, the external events. Using recent neurobiological discoveries about response properties of thermal receptors in the skin as an illustration, Akins shows that sensory systems are "narcissistic" rather than "veridical." All three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, for example, our "philosophy of perception" or of "perceptual intentionality" will no longer focus on the search for correlations between states of sensory systems and "veridically detected" external properties. This traditional philosophical (and scientific) project rests upon a mistaken "veridical" view of the senses.
Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a "simple paradigm case" of an intentional relation between representation and world. Once again, available scientific detail shows the naivety of some traditional philosophical projects.
Focussing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urges a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers' favourite cases for analysis and theorizing about conscious experience generally. Nevertheless, every position about pain experiences has been defended recently: eliminativist views, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experience is the place to start an analysis or theory of consciousness? Hardcastle urges two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories. But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposes a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argues that this dual system is consistent with recent neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concludes from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analysing the general type.
Developing and defending theories of content is a central topic in current philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. We'll here describe a few contributions neurophilosophers have made to this literature.
When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or "aboutness." The percept or memory is about one's being out of coffee, and it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the Functional Role approach and the Informational approach.
The core claim of functional role semantics holds that a representation has its content in virtue of relations it bears to other representations. Its paradigm application is to concepts of truth-functional logic, like the conjunctive ‘and’ or disjunctive ‘or.’ A physical event instantiates the ‘and’ function just in case it maps two true inputs onto a single true output. Thus an expression bears the relations to others that give it the semantic content of ‘and.’ Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1986). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics ascribes content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information. Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is, by definition, always veridical while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of ‘function.’ A brain state represents X by virtue of having the function of carrying information about X (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but they are the ones to which neurophilosophers have contributed.
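The claim that a physical event instantiates the ‘and’ function just in case it maps two true inputs onto a single true output can be made concrete with a simple threshold unit, in the spirit of the network models discussed earlier. The weights and threshold are illustrative assumptions.

```python
def and_unit(p, q):
    """A threshold unit whose input-output profile instantiates 'and'."""
    total = 1.0 * p + 1.0 * q        # weighted sum of the two truth-valued inputs
    return 1 if total >= 2.0 else 0  # fires only when both inputs are true

# The unit's complete input-output profile (its "functional role").
table = {(p, q): and_unit(p, q) for p in (0, 1) for q in (0, 1)}
```

On the functional role view, it is this whole profile, not anything intrinsic to the unit, that gives the event its ‘and’ content.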
Paul Churchland's allegiance to functional role semantics goes back to his earliest views about the semantics of terms in a language. In his (1979) book, he insists that the semantic identity (content) of a term derives from its place in the network of sentences of the entire language. The functional economies envisioned by early functional role semanticists were networks with nodes corresponding to the objects and properties denoted by expressions in a language. Thus one node, appropriately connected, might represent birds, another feathers, and another beaks. Activation of one of these would tend to spread to the others. As ‘connectionist’ network modelling developed, alternatives arose to this one-representation-per-node ‘localist’ approach. By the time Churchland provided a neuroscientific elaboration of functional role semantics for cognitive representations generally, he too had abandoned the ‘localist’ interpretation. Instead, he offered a ‘state-space semantics.’
We saw in the section just above how (vector) state spaces provide a natural interpretation for activity patterns in neural networks (biological and artificial). A state-space semantics for cognitive representations is a species of functional role semantics because the individuation of a particular state depends upon the relations obtaining between it and other states. A representation is a point in an appropriate state space, and points (or subvolumes) in a space are individuated by their relations to other points (locations, geometrical proximity). Churchland illustrates a state-space semantics for neural states by appealing to sensory systems. One popular theory in sensory neuroscience of how the brain codes for sensory qualities (like colour) is the opponent-process account. Churchland describes a three-dimensional activation-vector state space in which every colour perceivable by humans is represented as a point (or subvolume). Each dimension corresponds to activity rates in one of three classes of photoreceptors present in the human retina and their efferent pathways: the red-green opponent pathway, the yellow-blue opponent pathway, and the black-white (contrast) opponent pathway. Photons striking the retina are transduced by the receptors, producing an activity rate in each of the segregated pathways. A particular colour is thus characterized by a triplet of activation frequency rates. Each dimension in that three-dimensional space represents the average frequency of action potentials in the axons of one class of ganglion cells projecting out of the retina. Taken together, the colours perceivable by humans occupy a region of that space. For example, an orange stimulus produces a relatively low level of activity in both the red-green and yellow-blue opponent pathways (x-axis and y-axis, respectively), and middle-range activity in the black-white (contrast) opponent pathway (z-axis).
Pink stimuli, on the other hand, produce low activity in the red-green opponent pathway, middle-range activity in the yellow-blue opponent pathway, and high activity in the black-white (contrast) opponent pathway. The locations of all colours in the space generate a ‘colour solid.’ Location on the solid and geometrical proximity between regions reflect structural similarities between the perceived colours. Human gustatory representations are points in a four-dimensional state space, with each dimension coding for activity rates generated by gustatory stimuli in each type of taste receptor (sweet, salty, sour, bitter) and their segregated efferent pathways. When implemented in a neural network with structural, and hence computational, resources as vast as the human brain's, the state-space approach to psychosemantics generates a theory of content for a huge number of cognitive states.
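The geometry behind state-space semantics can be illustrated with a minimal sketch. The activation triples below are made-up illustrative values, not measured opponent-pathway data; the point is only that, on this view, a state's content is fixed by its location relative to other states, and perceived similarity by geometric proximity.

```python
import math

# Each colour is a point in a 3-D activation space whose axes are
# activity levels in the red-green, yellow-blue, and black-white
# (contrast) opponent pathways. Triples are illustrative, not measured.
colour_space = {
    "orange": (0.3, 0.3, 0.5),  # low R-G, low Y-B, mid contrast
    "pink":   (0.2, 0.5, 0.9),  # low R-G, mid Y-B, high contrast
    "red":    (0.9, 0.4, 0.5),
}

def similarity_distance(c1: str, c2: str) -> float:
    """Geometric proximity in the state space models perceived similarity."""
    return math.dist(colour_space[c1], colour_space[c2])

# With these toy values, orange sits nearer to red than pink does:
print(similarity_distance("orange", "red") < similarity_distance("pink", "red"))
```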
Jerry Fodor and Ernest LePore raise an important challenge to Churchland's psychosemantics. Location in a state space alone seems insufficient to fix a state's representational content. Churchland never explains why a point in a three-dimensional state space represents a colour, as opposed to any other quality, object, or event that varies along three dimensions. Churchland's account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore allege that Churchland never specifies how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the ‘external’ inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colours are that opponent-processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to ‘external’ stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?
The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus types for visual feature detectors include high-contrast edges, motion direction, and colours. A favourite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells' activity functioned to detect flies rested upon knowledge of the frog's diet. Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists have recently discovered a host of neurons that are maximally responsive to a variety of stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel's (1962) Nobel Prize-winning work establishing the receptive fields of neurons in striate cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) have challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection.
Kathleen Akins (1996) offers a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section how Akins argues that the physiology of thermoreceptors violates three necessary conditions on ‘veridical’ representation. From this she draws doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including one for thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological refinements. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog's feature-detection repertoire. Akins' critique casts doubt on whether the details of sensory transduction will scale up to an adequately unified psychosemantics. It also raises new questions for human intentionality. How do we get from activity patterns in "narcissistic" sensory receptors, keyed not to "objective" environmental features but only to the effects of stimuli on the patch of tissue innervated, to the human ontology replete with enduring objects with stable configurations of properties and relations, types and their tokens (as the "fly-thought" example above reveals), and the rest? And how did the development of a stable and rich ontology confer survival advantages on our human ancestors?
Consciousness has reemerged as a topic in philosophy of mind and the cognitive and brain sciences over the past three decades. Instead of ignoring it, many physicalists now seek to explain it (Dennett, 1991). Here we focus exclusively on the ways neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invites us to ponder ‘what it is like to be a bat’ and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific knowledge) supplies a complete answer. Nagel's intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993a) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about bat subjectivity that we still consider open hinge on unanswered questions about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.
More recently, philosopher David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an ‘explanatory gap’ between the brain process and the properties of conscious experience. This is because no brain-process theory can answer the "hard" question: why should that particular brain process give rise to conscious experience? We can always imagine ("conceive of") a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows, Chalmers argues, that we will probably never get a complete explanation of consciousness at the level of neural mechanisms. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience (and a literature is beginning to emerge; e.g., Gazzaniga, 1995), the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. These are not just bare assertions. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly "diffusely projecting" nuclei like the intralaminar nuclei) and the cortex. Churchland argues that this thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid eye movement) sleep, and for other "core" features of conscious experience.
In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . . ")
A second focus of sceptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher's favourite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically informed philosophy has addressed this question. A related area where neurophilosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers, or are rather identical with or dependent upon minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent-processing neurons does. Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
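The structure-preservation point about the hue circle can be made concrete with a small sketch. The hue angles and wavelengths below are idealized illustrative values; the point is that a circular similarity structure survives a mapping onto two opponent channels, but not a mapping onto the one-dimensional wavelength line, where red and violet end up maximally far apart.

```python
import math

# Idealized hue-circle positions (degrees) and peak wavelengths (nm).
hue_angle = {"red": 0, "orange": 30, "yellow": 60, "green": 120,
             "blue": 240, "violet": 300}
wavelength_nm = {"red": 700, "orange": 620, "yellow": 580, "green": 530,
                 "blue": 470, "violet": 420}

def opponent_point(h: str):
    """Map a hue to idealized (red-green, yellow-blue) opponent activations."""
    a = math.radians(hue_angle[h])
    return (math.cos(a), math.sin(a))

def opponent_dist(h1: str, h2: str) -> float:
    return math.dist(opponent_point(h1), opponent_point(h2))

def wavelength_dist(h1: str, h2: str) -> float:
    return abs(wavelength_nm[h1] - wavelength_nm[h2])

# Opponent coding preserves the hue circle: red is nearer violet than green.
print(opponent_dist("red", "violet") < opponent_dist("red", "green"))
# Identifying colours with wavelength does not: red is farthest from violet.
print(wavelength_dist("red", "violet") > wavelength_dist("red", "green"))
```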
We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compares and notes tensions between neurophysiological discoveries and common-sense intuitions about pain experience. He suspects that the incommensurability between the scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But, foreshadowing the Churchlands' reply to Chalmers, Dennett favours scientific investigation over conceivability-based philosophical arguments.
Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts philosophical implications of a variety of neurological deficits. One deficit is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions ‘phenomenal consciousness’ (‘P-consciousness’) and ‘access consciousness’ (‘A-consciousness’). The former is the ‘what it is like’-ness of experience. The latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational, whereas A-consciousness is. Dennett and Michael Tye are sceptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block's distinction.
Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain have begun to attract neurophilosophical attention, as has self-consciousness. One of the first issues to arise in the ‘philosophy of neuroscience’ (before there was a recognized area) was the localization of cognitive functions to specific neural regions. Although the ‘localization’ approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it reemerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by post-mortem brain autopsies. Broca's initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech-production deficits. Although the anatomical coordinates Broca postulated for the ‘speech production centre’ do not correlate exactly with the damage that produces such deficits, both this area of frontal cortex and speech-production deficits still bear his name (‘Broca's area’ and ‘Broca's aphasia’). Less than two decades later, Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca's area, and damage to it produces a very different set of aphasic symptoms. The cortical area that still bears his name (‘Wernicke's area’) is located around the first and second convolutions of the temporal cortex, and the aphasia that bears his name (‘Wernicke's aphasia’) involves deficits in language comprehension.
Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of behavioural deficits followed by post-mortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of aphasia research to this day.
Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence isn't available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins' early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. Such analyses break down a complex capacity C into its constituent capacities c1, c2, . . . , cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities c1, c2, . . . , cn). A functional-localization hypothesis has the form: brain structure S in an organism (type) O has constituent capacity ci, where ci is a function of some part of O.
For example: Broca's area (S) in humans (O) formulates the motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behaviour P (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis minus the operation of the component performing ci (Broca's area) must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional deficit hypothesis on the basis of pathological behaviour is thus an instance of argument to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.
Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments argue either against the localization of a particular type of functional capacity or against generalizing from the localization of a function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic. But they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence making explicit the ‘logic’ of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the ‘co-evolutionary research methodology,’ which remains a centerpiece of neurophilosophy to the present day.
Over the last two decades, evidence for the localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of the localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies (and in the meantime probably acquires additional brain damage) to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Although these measure different biological markers of functional activity, both now have a resolution down to around 1 mm. As these techniques increase the spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.
What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationship between the levels, the ‘glue’ that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between ‘cognitivist’ psychological theories, postulating information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more like a distant dream than an operative methodology.
It is here that some neuroscientists appeal to ‘computational’ methods. If we examine the way that computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of ‘local’ lower-level physical phenomena, but only by dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, ‘program-writing’ and ‘connectionist’ artificial intelligence, and philosophy of science.
However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of the voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance-versus-time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculi correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted the direction and amplitude of hand and arm movements based on the averaged activity of 224 cells in motor cortex. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could easily multiply examples of the fruitful interaction of computational and experimental methods in neuroscience a hundredfold. Many of these extend back before ‘computational neuroscience’ was a recognized research endeavour.
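A vector-averaging (population vector) computation in the spirit of the Sparks and Georgopoulos studies can be sketched briefly. The preferred directions and firing rates below are made-up illustrative values, not recorded data: the predicted movement direction is simply the firing-rate-weighted average of the cells' preferred directions.

```python
import math

# Each model cell has a preferred movement direction and a firing rate
# for the current trial. Values are illustrative, not recorded data.
cells = [
    {"preferred_deg": 0,   "rate": 10.0},
    {"preferred_deg": 45,  "rate": 40.0},
    {"preferred_deg": 90,  "rate": 35.0},
    {"preferred_deg": 180, "rate": 5.0},
]

def population_vector(cells) -> float:
    """Rate-weighted average of preferred directions, in degrees [0, 360)."""
    x = sum(c["rate"] * math.cos(math.radians(c["preferred_deg"])) for c in cells)
    y = sum(c["rate"] * math.sin(math.radians(c["preferred_deg"])) for c in cells)
    return math.degrees(math.atan2(y, x)) % 360

# The population predicts a movement direction between the most active
# cells' preferred directions (here, between 45 and 90 degrees):
print(round(population_vector(cells), 1))
```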
We've already seen one example of this interaction: the vector transformation account of neural representation and computation, under active development in cognitive neuroscience. Other approaches using ‘cognitivist’ resources are also being pursued. Many of these projects draw upon ‘cognitivist’ characterizations of the phenomena to be explained. Many exploit ‘cognitivist’ experimental techniques and methodologies. Some even attempt to derive ‘cognitivist’ explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn puts it, cognitive neuroscientists employ the ‘information processing’ view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavour calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines to interested nonspecialists: not just people willing to confer with those working at related levels, but researchers trained in the methods and factual details of a variety of levels. This is a daunting requirement, but it does offer some hope for philosophers wishing to contribute to future neuroscience. Thinkers trained in both the ‘synoptic vision’ afforded by philosophy and the factual and experimental basis of genuine graduate-level science would be ideally equipped for this task. Recognition of this potential niche has been slow among graduate programs in philosophy, but there is some hope that a few programs are taking steps to fill it.
In the final analysis, some philosophers will be unprepared to accept that the following principle can have a role to play in philosophical accounts of concepts and conceptual abilities: if a given cognitive capacity is psychologically real, then there must be an explanation of how it is possible for an individual, in the course of human development, to acquire that capacity. The most obvious basis for such a view is a Fregean distrust of "psychologism" that leads to a rigid division of labour between philosophy and psychology. The operative thought is that the task of a philosophical theory of concepts is to explain what a given concept is, or what a given conceptual ability consists in. This, it is frequently maintained, can be done in complete independence of explaining how such a concept or ability might be acquired. The underlying distinction is one between philosophical questions centring on concept possession and psychological questions centring on concept acquisition. However strictly one adheres to this distinction, though, it provides no support for rejecting the acquisition principle. The neo-Fregean distinction is directed against the view that facts about how concepts are acquired have a role to play in explaining and individuating concepts. But a supporter of the acquisition principle need not dispute this. All the supporter is committed to is the principle that no satisfactory account of what a concept is should make it impossible to explain how that concept can be acquired. This principle says nothing about the further question of whether the psychological explanation has a role to play in a constitutive explanation of the concept, and hence is not in conflict with the neo-Fregean distinction.
A full account of the structure of consciousness will need to address the higher, conceptual forms of consciousness to which little attention has so far been paid, and to explain how they might emerge from more primitive forms. One guiding thought is that an explanation of everything distinctive about consciousness will emerge from an account of what it is for a subject to be capable of thinking about himself. But there are no facts about linguistic mastery alone that will determine or explain what might be termed the cognitive dynamics of individual thought processes. The way forward for a theory of consciousness, it seems, is first to chart the features that individuate the various distinct conceptual forms of consciousness, in a way that provides a taxonomy of those forms, and then to show how each manifests its characteristic functions at the level of content. What should now be clear is that these higher forms of consciousness emerge from a rich foundation of non-conceptual representations. Exposing that foundation should clarify the conviction that these forms of conscious thought hold the key, not just to an eventual account of how their mastery is achieved, but to a proper understanding of the complexity of self-consciousness and of the overall phenomenon of consciousness.
After adaptive changes in the brains and bodies of hominids made it possible for modern humans to construct a symbolic universe using complex language systems, something dramatic and wholly unprecedented occurred: we began to perceive the world through the lenses of symbolic categories, to construct similarities and differences in terms of categorical oppositions, and to organize our lives according to themes and narratives. Living in this new symbolic universe, modern humans felt a compulsion to codify and re-codify their experiences, to translate everything into representation, and to seek out the deeper hidden logic that eliminates inconsistencies and ambiguities.
The mega-narrative or frame tale that served to legitimate and rationalize the categorical oppositions and terms of relation between the myriad constructs in the symbolic universe of modern humans was religion. The use of religious thought for these purposes is quite apparent in the artifacts found with the fossil remains of people living in France and Spain forty thousand years ago. This artifactual evidence suggests something fundamental: that these people possessed a developed language system and lived within an intricate and complex social order.
Both religious and scientific thought are analytical: each seeks to frame or construct reality in terms of origins, primary oppositions, and underlying causes. This partially explains why fundamental assumptions in the Western metaphysical tradition were eventually incorporated into a view of reality that would later be called scientific. The history of scientific thought reveals that the dialogue between assumptions about the character of spiritual reality in ordinary language and the character of physical reality in mathematical language was intimate and ongoing from the early Greek philosophers to the first scientific revolution in the seventeenth century. Nevertheless, this dialogue did not conclude, as many have argued, with the emergence of positivism in the eighteenth and nineteenth centuries. It was perpetuated in a disguised form in the hidden ontology of classical epistemology, the central issue in the Bohr-Einstein debate.
The assumption, often taken for granted as fact, that a one-to-one correspondence exists between every element of physical reality and physical theory may serve to bridge the gap between mind and world for those who use physical theories. But it also suggests that the Cartesian division is as real for physical reality as it is for ordinary language, and this explains in no small part why the radical separation between mind and world sanctioned by classical physics and formalized by Descartes remains, as philosophical postmodernism attests, one of the most pervasive features of Western intellectual life.
The history of science reveals that scientific knowledge and method did not spring fully formed from the minds of the ancient Greeks, any more than language and culture emerged fully formed in the minds of Homo sapiens sapiens. Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation. However, it was only after Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributable to Thales of Miletus. Thales was apparently led to this conclusion by the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view 'essences' underlying and unifying physical reality as if they were 'substances'.
The last remaining feature of what would become the paradigm for the first scientific revolution in the seventeenth century is attributed to Pythagoras. Like Parmenides, Pythagoras held that the perceived world is illusory and that there is an exact correspondence between ideas and aspects of external reality. Pythagoras, however, had a different conception of the character of the idea that showed this correspondence. The truth about the fundamental character of the unified and unifying substance, which could be uncovered through reason and contemplation, is, he claimed, mathematical in form.
Pythagoras established and was the central figure in a school of philosophy, religion, and mathematics; he was apparently viewed by his followers as semi-divine. For his followers, the regular solids (symmetrical three-dimensional forms whose faces are all the same regular polygon) and whole numbers became revered essences or sacred ideas. In contrast with ordinary language, the language of mathematical and geometric forms seemed closed, precise, and pure. Provided one understood the axioms and notations, the meaning conveyed was invariant from one mind to another. The Pythagoreans felt that this language empowered the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.
Progress was made in mathematics, and to a lesser extent in physics, from the time of classical Greek philosophy to the seventeenth century in Europe. In Baghdad, for example, from about A.D. 750 to A.D. 1000, substantial advancement was made in medicine and chemistry, and the relics of Greek science were translated into Arabic, digested, and preserved. Eventually these relics reentered Europe via the Arab kingdoms of Spain and Sicily, and the work of figures like Aristotle and Ptolemy reached the budding universities of France, Italy, and England during the Middle Ages.
For much of this period the Church provided the institutions, like the teaching orders, needed for the rehabilitation of philosophy. Nevertheless, the social, political, and intellectual climate in Europe was not ripe for a revolution in scientific thought until the seventeenth century. Even into the nineteenth century, the work of the new class of intellectuals we call scientists was more avocation than vocation, and the word 'scientist' did not appear in English until around 1840.
Copernicus would have been described by his contemporaries as an avid student of economics and classical literature and, most notably, a highly honoured and well-placed church dignitary. Although we have named a revolution after him, this conservative man did not set out to create one. The placement of the Sun at the centre of the universe, which seemed right and necessary to Copernicus, was not a result of making careful astronomical observations. In fact, he made very few observations while developing his theory, and then only to ascertain whether his prior conclusions seemed correct. The Copernican system was also no more useful in making astronomical calculations than the accepted model, and was in some ways much more difficult to implement. What, then, was his motivation for creating the model, and his reasons for presuming that the model was correct?
Copernicus felt that the placement of the Sun at the centre of the universe made sense because he viewed the Sun as the symbol of the presence of a supremely intelligent and intelligible God in a man-centred world. He was apparently led to this conclusion in part because the Pythagoreans identified the central fire of the cosmos with the fireball of the Sun. The only support Copernicus could offer for the greater efficacy of his model was that it represented a simpler and more mathematically harmonious model of the sort that the Creator would obviously prefer.
The belief that the mind of God as Divine Architect permeates the workings of nature was the guiding principle of the scientific thought of Johannes Kepler. Consequently, most modern physicists would probably feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of the word. Physical laws, wrote Kepler, 'lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least insofar as we can understand something of it in this mortal life'.
Believing, like Newton after him, in the literal truth of the word of the Bible, Kepler concluded that the word of God is also transcribed in the immediacy of observable nature. Kepler's discovery that the motions of the planets around the Sun were elliptical, as opposed to perfect circles, may have made the universe seem a less perfect creation of God in ordinary language. For Kepler, however, the new model placed the Sun, which he also viewed as the emblem of a divine agency, more at the centre of a mathematically harmonious universe than the Copernican system allowed. Communing with the perfect mind of God requires, as Kepler put it, 'knowledge of numbers and quantity'.
Since Galileo did not use, or even refer to, the planetary laws of Kepler when those laws would have made his defence of the heliocentric universe more credible, his attachment to the godlike circle was probably a deeply rooted aesthetic and religious ideal. Nonetheless, it was Galileo, more than Newton, who was responsible for formulating the scientific idealism that quantum mechanics now forces us to abandon. In the "Dialogue Concerning the Two Great Systems of the World," Galileo said the following about the followers of Pythagoras: "I know perfectly well that the Pythagoreans had the highest esteem for the science of number and that Plato himself admired the human intellect and believed that it participates in divinity solely because it understands the nature of numbers. I myself am inclined to make the same judgement."
This article of faith, that mathematical and geometrical ideas mirror the essence of physical reality, is illustrated by the fact that the first mathematical law of Galileo's new science, a constant describing the acceleration of bodies in free fall, could not be confirmed by experiment. The experiments in which Galileo rolled balls of different sizes and weights down an inclined plane did not, as he frankly admitted, yield precise results. And since the vacuum pump had not yet been invented, he could not observe objects in free fall directly; his law therefore lacked rigorous experimental proof in the seventeenth century. Galileo believed in the absolute validity of this law in the absence of experimental proof because he also believed that movement could be subjected absolutely to the law of number. What Galileo asserted, as the French historian of science Alexandre Koyré put it, was 'that the real is in its essence geometrical and, consequently, subject to rigorous determination and measurement'.
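The law in question can be stated compactly: the distance fallen grows with the square of elapsed time, so successive equal time intervals cover distances in the ratio 1:3:5:7, Galileo's "odd-number rule". The sketch below is a modern illustration, not Galileo's own procedure; in particular the value g = 9.8 m/s² is a modern figure he did not possess.

```python
# Sketch of the law of free fall: d = (1/2) * g * t**2.
# The constant g is the modern approximation, assumed for illustration.

G = 9.8  # acceleration due to gravity, m/s^2


def distance_fallen(t):
    """Distance (in metres) fallen from rest after t seconds."""
    return 0.5 * G * t ** 2


# Distances covered in successive one-second intervals stand in the
# ratio 1:3:5:7 -- the "odd-number rule".
marks = [distance_fallen(t) for t in range(5)]
intervals = [round(b - a, 1) for a, b in zip(marks, marks[1:])]
print(intervals)  # [4.9, 14.7, 24.5, 34.3], i.e. 4.9 * (1, 3, 5, 7)
```

The quadratic dependence on time, rather than any measured constant, is what Galileo took to be 'subject to rigorous determination and measurement'.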
By the latter part of the nineteenth century, attempts to develop a logically consistent basis for number and arithmetic had threatened to undermine the classical view of correspondence even before the advent of quantum physics. They also occasioned a debate about the epistemological foundations of mathematical physics that resulted in an attempt by Edmund Husserl to obviate the correspondence problem by grounding this physics in human subjective reality. Since there is a direct line of descent from Husserl to existentialism to structuralism to constructionism, the linkage between philosophical postmodernism and the debate over the foundations of scientific epistemology is more direct than we had previously imagined.
A complete history of the debate over the epistemological foundations of mathematical physics should probably begin with the discovery of irrational numbers by the followers of Pythagoras, the paradoxes of Zeno, and the work of Gottfried Leibniz. But since we are more concerned with the epistemological crisis of the later nineteenth century, we begin with the set theory developed by the German mathematician and logician Georg Cantor. From 1878 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline. A set, as he defined it, is a collection of definite and distinguishable objects in thought or perception conceived as a whole.
Cantor attempted to prove that the process of counting and the definition of integers could be placed on a solid mathematical foundation. His method was repeatedly to place the elements in one set into 'one-to-one' correspondence with those in another. In the case of the integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and, therefore, that the set of all integers is equal in size to the set of all even integers.
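The correspondence itself can be sketched directly: the pairing n ↔ 2n matches every positive integer with exactly one even integer and vice versa, which is the sense in which the two infinite sets are 'equal in size'. The function name below is illustrative, not Cantor's notation.

```python
# Sketch of Cantor's one-to-one correspondence between the positive
# integers and the even positive integers: pair each n with 2n.

def pairing(n_terms):
    """Return the first n_terms pairs (n, 2n) of the correspondence."""
    return [(n, 2 * n) for n in range(1, n_terms + 1)]


pairs = pairing(5)
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```

Every finite prefix of the pairing leaves no element on either side unmatched, and the rule n ↔ 2n extends this without limit; that an infinite set can be paired off with a proper subset of itself is exactly what distinguishes infinite sets from finite ones.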
Amazingly, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. After this, attempts to save the classical view of the logical foundations and internal consistency of mathematical systems failed, and a major crack appeared in the seemingly solid foundations of number and mathematics. Meanwhile, many mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.
In 1886, Nietzsche was delighted to learn that the classical view of mathematics as a logically consistent and self-contained system might be undermined. His immediate and unwarranted conclusion was that all of logic and mathematics was nothing more than a fiction perpetuated by those who exercised their will to power. With his characteristic sense of certainty, Nietzsche derisively proclaimed, 'Without accepting the fictions of logic, without measuring reality against the purely invented world of the unconditional and self-identical, without a constant falsification of the world by means of numbers, man could not live'.
The implications of this discovery for our conceptions of the 'way things are' extend beyond the domain of the physical sciences, and the best efforts of many thoughtful people will be required to understand them.
Perhaps the most startling and potentially revolutionary of these implications in human terms is a new view of the relationship between mind and world that is utterly different from that sanctioned by classical physics. In the mechanistic world-view of classical physics, mind or consciousness seemed to exist in a realm separate and closed off from nature. After Descartes formalized this distinction in his famous dualism, artists and intellectuals in the Western world were increasingly obliged to confront a terrible prospect: that the realm of the mental is a self-contained and self-referential island universe with no real or necessary connection with the universe itself.
The first scientific revolution of the seventeenth century freed Western civilization from the paralysing and demeaning forces of superstition, laid the foundations for rational understanding and control of the processes of nature, and ushered in an era of technological innovation and progress. Yet while it dissolved the distinction between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien from the world of everyday life.
Descartes, the father of modern philosophy, quickly realized that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced, however, that the immaterial essences that gave form and structure to this universe were coded in geometric and mathematical ideas, and this led him to invent analytic geometry.
A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's "Principia Mathematica" in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, in the absence of any concern about its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became perhaps the most central feature of Western intellectual life.
This is the tragedy of the modern mind, which 'solved the riddle of the universe' only to replace it with another riddle: the riddle of itself. That tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. We discover the 'certain principles of physical reality', said Descartes, 'not by the prejudices of the senses, but by rational analysis, which possesses so great evidence that we cannot doubt of their truth'. Since the real, or that which literally exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
It was this logical sequence that led Descartes to posit the existence of two categorically different domains of existence: the res extensa and the res cogitans, or the 'extended substance' and the 'thinking substance'. Descartes defined the extended substance as the realm of physical reality, within which primary mathematical and geometrical forms reside, and the thinking substance as the realm of human subjective reality. Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how could he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? And if there is no real or necessary correspondence between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which we live, breathe, love, and eventually die actually exists? Descartes's resolution of this dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.
The present time is clearly a time of a major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. That previous shift was profoundly problematic for the human spirit. It led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movement of our own bodies is an illusion. Yet passing through the acceptance of such a paradigm was probably necessary for the Western mind.
The present, however, has no duration; it is merely the demarcation line between past and future. And yet we do have an awareness of periods of time: we have an awareness of something taking a long time, and something else taking only a short time. How is such awareness possible? If that which exists, namely the present, has no duration, how can we be aware of 'a long time'? How can we be aware of something that does not exist? Augustine's response to the question is an insight into the nature of time. As we experience 'a long time', he writes, 'it is not future time that is long, but a long future is a long expectation of the future; and past time is not long, but a long past is a long remembrance of the past'. St. Augustine concludes: 'It is in my own mind, then, that I measure time; I must not allow my mind to insist that time is something objective'.
Meanwhile, the most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separate dimensions. The concept of Being as continuous, immutable, and existing prior to and separate from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.
Since science clearly cannot, in principle, describe the whole, and since the divorce between mind and world formalized by Descartes is an illusion, we believe there is a new basis for dialogue between the two cultures. If this dialogue is open and honest, it could not only put a timely end to cultural criticisms and resuscitate the Enlightenment ideal of unifying human knowledge in the service of the common good; it could also promote a new era of cooperation and shared commitment in the effort to understand and effectively eliminate some very real threats to human survival.
Nevertheless, Cartesian doubt is the method of investigating how much knowledge has its basis in reason or experience, as used by Descartes in the first two Meditations. It attempted to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which can let us down. The point of certainty is eventually found in the celebrated "Cogito ergo sum": I think, therefore I am. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries, in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is Cartesian dualism, or the separation of mind and matter into two different but interacting substances. Descartes rigorously argued that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a "clear and distinct perception" of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, "to have recourse to the veracity of the supreme Being, to prove the veracity of our senses, is surely making a very unexpected circuit."
Descartes's notorious denial that non-human animals are conscious is a stark illustration of this priority of reason. In his conception of matter, Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax as surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature.
Although Descartes's epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all combine to make him the central point of reference for modern philosophy.
According to Descartes, the elements of actual existence are of two kinds: material and mental. These types of existence are different and incommensurate. The table that I see in front of me is material, while my intention to go on typing is mental; the two have nothing in common. This dualism creates enormous difficulties. For instance, how does my intention to lift my arm (a mental event) cause the actual lifting of the arm (a material event)? A self-consistent paradigm must therefore be based on the hypothesis that there is only one basic kind of actual existence, and if there is only one kind, it must be of the nature of experience. The fact that experience exists cannot be denied; not only are we certain that we experience, but everything we believe we know about the universe and about matter is deduced from our experiences.
The methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience; it neglects half the evidence. Working within Descartes's dualistic framework of matter and mind as separate and incommensurate, science limits itself to the study of objectivised phenomena, neglecting the subject and the mental events that constitute his or her experience.
Suppose for the moment that we give realism up: what will we replace it with? When we try to distinguish between concrete facts and abstractions, we find that realism is itself an abstraction. 'The fallacy of misplaced concreteness', as Whitehead called it, is the mistake of taking an abstraction for a concrete fact, and this fallacy, as he pointed out, derailed much of Western philosophy.
The Cartesian doubt is the method of investigating how much knowledge and its basis in reason or experience as used by Descartes in the first two Medications. It attempted to put knowledge upon secure foundation by first inviting us to suspend judgements on any proportion whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverance of memory, the senses, and eve n reason, all of which can let us down. In spite of a various counter-attack for social and public starting-points, the metaphysics associated with this priority are the Cartesian dualism, or separation of mind and matter into bi-divisional points of dissimulation but an integration of interacting substances. Descartes rigorously and rightly optimizes an ocular sight that it takes divine dispensation to certify any relationship between the two realms. Thus divided, and to prove the reliability of the senses invokes a “clear and distinct perception” of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: A Hume drily puts it, “to have recourse to the veracity of the supreme Being, to prove the veracity of our senses, is surely making a very unexpected circuit.”
Descartes’s notorious denial that non-human animals are conscious is a stark illustration of this dualism. In his conception of matter Descartes also gives preference to rational cogitation over anything learned from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical idea but eventually an entirely geometrical one, with extension and motion as its only physical nature.
Although the structure of Descartes’s epistemology, theory of mind and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
According to Descartes the elements of actual existence are of two kinds: material and mental. These types of existence are different and incommensurate. The table that I see in front of me is material, while my intention to go on typing is mental; the two have nothing in common. This dualism creates enormous difficulties. For instance, how does my intention to lift my arm (a mental event) cause the actual lifting of the arm (a material event)? A self-consistent paradigm must therefore be based on the hypothesis that there is only one basic kind of actual existence, and if there is only one kind, it must be of the nature of experience. The fact that experience exists cannot be denied. Not only are we certain that we experience; everything we believe we know about the universe and about matter is deduced from our experiences.
The methodology of science makes it blind to a fundamental aspect of reality, namely the primacy of experience. It neglects half of the evidence. Working within Descartes’s dualistic framework of matter and mind as separate and incommensurate, science limits itself to the study of objectivised phenomena, neglecting the subject and the mental events that are his or her experience.
Both the adoption of the Cartesian paradigm and the neglect of mental events are reason enough to suspect 'blindness', but there is no need to rely on suspicions. This blindness is clearly evident: scientific discoveries, impressive as they are, are fundamentally superficial. Science can express regularities observed in nature, but it cannot explain the reasons for their occurrence. Consider, for example, Newton's law of gravity. It shows that such apparently disparate phenomena as the falling of an apple and the revolution of the earth around the sun are aspects of the same regularity: gravity. According to this law, the gravitational attraction between two objects decreases in proportion to the square of the distance between them. Why is that so? Newton could not provide an answer. Simpler still, why does space have three dimensions? Why is time one-dimensional? None of these laws of nature gives the slightest evidence of necessity. They are [merely] the modes of procedure which within the scale of observation do in fact prevail.
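The contrast between stating a regularity and explaining it can be made concrete. The short sketch below (my illustration, not part of the original argument) computes Newton's law; the code reproduces the regularity exactly, yet nothing in it says why the exponent should be 2 rather than anything else:

```python
# Illustration of Newton's law of gravitation, F = G * m1 * m2 / r**2.
# The law states the regularity; it does not explain why the force
# falls off as the *square* of the distance.
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Magnitude of the attraction between two point masses (newtons)."""
    return G * m1 * m2 / r**2

# Doubling the distance reduces the force to one quarter:
near = gravitational_force(1.0, 1.0, 1.0)
far = gravitational_force(1.0, 1.0, 2.0)
print(near / far)  # 4.0
```

The ratio 4.0 is exactly what the law predicts, but, as the text notes, the law itself offers no hint of necessity; it is merely the mode of procedure that in fact prevails.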
It follows that in order to find 'the elucidation of things observed' we cannot rely on science; we need to look elsewhere. If, instead of relying on science, we rely on our immediate observation of nature, we find, first, that this [i.e., Descartes's] sharp division between mentality and nature has no ground in our fundamental observation: we find ourselves living within nature. Secondly, we should conceive mental operations as among the factors which make up the constitution of nature. Thirdly, we should reject the notion of idle wheels in the process of nature: every factor which emerges makes a difference, and that difference can only be expressed in terms of the individual character of that very factor.
Any attempt to analyze our experience in general, and our observations of nature in particular, finds 'mutual immanence' as a central theme. This mutual immanence is obvious in the case of human experience: I am part of the universe, and, since I experience the universe, the experienced universe is part of me. For example, 'I am in the room, and the room is an item in my present experience. But my present experience is what I am now.' Thus 'the world is included within the occasion in one sense, and the occasion is included in the world in another sense', and the idea that each actual occasion appropriates its universe follows naturally from such considerations.
The description of an actual entity as a distinct unit is, therefore, only one part of the story. The other, complementary part is this: the very nature of each and every actual entity is one of interdependence with all the other actual entities in the universe. Each actual entity is a process of appropriating all the other actual entities and creating a new entity out of them all, namely, itself.
The point is this: when we accept the realist position, we feel that we cannot deny the 'fact' that objects exist 'from their own side', independently of consciousness. On the other hand, we cannot deny that we do have experiences, i.e., we cannot deny the existence of mind. Which is the more fundamental principle, mind or matter? Is one of them real and the other derivative? How do the two interact? A slew of unanswerable questions arises, unanswerable because the conceptual framework in which they arose is all wrong, all based on the fallacy of misplaced concreteness.
If the concrete fact is not the independent existence of objects, what is it? It is the experience that is concrete. Consider, for instance, a concrete fact: 'I see a building over there.' Where is this fact taking place, and what is the relation of the fact to the presumed location of the building?
Where is the fact taking place? It is my experience, and it is taking place right here, where I am. But the building is not right here; indeed, the building may not even exist. How, then, does the place where the building seems to be enter the experience? It enters it as a place of reference. My experience, which is here, where I am, has reference to the place where I see the building: I see the building in the mode of its having location where it seems to be. Does it follow that something is happening at that place of reference? Not at all. Whether something is happening there or not is a separate issue. All we are trying to do now is become clearer, first, about what is concrete and what is abstract and, second, about the location of the concrete and the place or places it refers to. Then:
For you at 'A' there will be green, but not simply green at 'A' where you are. The green at 'A' will be green with the mode of having location at the image of the leaf behind the mirror. Then turn around and look at the leaf. You are now perceiving the green in the same way you did before, except that now the green has the mode of being located at the actual leaf.
How does the fallacy of misplaced concreteness apply to the enigma of the inequalities? In the derivation of the inequalities one assumes the independent existence of two particles, each having its own properties, which include all three spin components. The assumption of realism, which is an abstraction even when applied to people, buildings, and cars, is certainly of dubious validity when applied to subatomic entities.
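The tension can be made quantitative. In the CHSH form of the inequality, any local-realist account of two spin-1/2 particles bounds the correlation sum |S| by 2, whereas quantum mechanics, with the singlet correlation E(a, b) = -cos(a - b), predicts 2√2 at suitably chosen angles. The sketch below is my own illustration; the symbols E, a, b and the angles are standard textbook conventions, not taken from this text:

```python
import math

# CHSH sketch: a local-realist model requires |S| <= 2, where
# S = E(a, b) - E(a, b') + E(a', b) + E(a', b').
# Quantum mechanics gives E(a, b) = -cos(a - b) for the spin singlet.

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

a, a_prime = 0.0, math.pi / 2              # first observer's two settings
b, b_prime = math.pi / 4, 3 * math.pi / 4  # second observer's two settings

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2) > 2: the realist bound is violated
```

The violation does not tell us which assumption fails, but it shows that the conjunction of assumptions behind the inequality, realism among them, cannot all hold for subatomic entities.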
Can the realization that realism is an abstraction shed light on the inequalities and the correlations? The language of realism seems appropriate to situations involving measurements in the domains of classical physics and special relativity. In analyzing such measurements, realism is an appropriate abstraction, and the principle that 'nothing moves faster than light' holds. Perhaps 'something' seems to propagate faster than light because what is going on is not described in the proper language, a language in which the abstraction of realism no longer applies. If this is so, then the difficulties of understanding the significance of the correlations and the inequalities are due to the application of the abstraction of realism outside its domain of validity. This is precisely the message one can deduce from Niels Bohr's framework of complementarity.
Nonetheless, Descartes claimed that we can derive a scientific understanding of ideas with the aid of precise deduction, and that we can lay the contours of physical reality out in three-dimensional co-ordinates. Following the publication of Isaac Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became a central principle of scientific knowledge.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile or eliminate Descartes’s stark division between mind and matter became the most central feature of Western intellectual life.
Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes’s compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that ‘Liberty, Equality, Fraternity’ are the guiding principles of this consciousness. Rousseau also fabricated the idea of the ‘general will’ of the people to achieve these goals and declared that those who do not conform to this will are social deviants.
The Enlightenment idea of ‘deism’, which imaged the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at its origin, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the divide between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that we can know the truths of spiritual reality only through divine revelation. This engendered a conflict between reason and revelation that persists to this day, and it laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relation between mind and matter and the manner in which they should ultimately define the special character of each.
The nineteenth-century Romantics in Germany, England and the United States revived Rousseau’s attempt to posit a ground for human consciousness by reifying nature in a different form. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that God, man, and nature are inseparable aspects of a spiritual Oneness) and argued for the reconciliation of mind and matter with an appeal to sentiment, mystical awareness, and quasi-scientific attempts to wed mind and matter. Nature, on this view, loves to hide: she shrouds man in her mist, presses him to her heart, and punishes those who fail to see the light. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the creative spirit that unites mind and matter is progressively moving toward ‘self-realization’ and ‘undivided wholeness’.
The British version of Romanticism, articulated by figures like William Wordsworth and Samuel Taylor Coleridge, placed more emphasis on the primacy of the imagination and the importance of rebellion and heroic vision as the grounds for freedom. As Wordsworth put it, communion with the ‘incommunicable powers’ of the ‘immortal sea’ empowers the mind to release itself from all the material constraints of the laws of nature. The founders of American transcendentalism, Ralph Waldo Emerson and Henry David Thoreau, articulated a version of Romanticism commensurate with the ideals of American democracy.
The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of the Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche (1844-1900). After declaring that God and ‘divine will’ did not exist, Nietzsche reified the ‘existence’ of consciousness in the domain of subjectivity as the ground for individual ‘will’ and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth’. The dilemma, as he saw it, was that the ‘will to truth’, validated and accredited by the doing of science, disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions or manifestations of the individual’s ‘will’.
In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no really necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.
Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, favours a reductionistic examination of natural phenomena at the expense of mind, and it seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
Nietzsche’s emotionally charged defence of intellectual freedom and radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, an attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.
The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism and deconstruction: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. The attribution of a direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of this cultural conflict and the ways in which it might be resolved.
The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, “relativistic” notions.
Two miraculous theories were unveiled by Albert Einstein: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and postulated absolute space.
If the universe is a seamlessly interactive system that evolves to a higher level of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a ‘progressive principle of order’ in complementary relations with its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.
But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated with appeals to scientific knowledge.
Issues surrounding certainty are especially connected with those concerning ‘scepticism’. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearance and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the new Academy was a system of argument opposed to dogmatism, and particularly to the philosophical system-building of the Stoics.
As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoics’ conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptics conclude in epochē, or the suspension of belief, and then go on to celebrate a way of life whose object was ataraxia, or the tranquillity resulting from suspension of belief.
The mitigated scepticism which accepts everyday or commonsense belief, not as the delivery of reason but as due more to custom and habit, while remaining dissatisfied with the power of reason to give us much more, is closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Descartes himself was not a sceptic, despite the fact that the phrase ‘Cartesian scepticism’ is sometimes used; in the ‘method of doubt’ he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in categories of ‘clear and distinct’ ideas, not far removed from the phantasiá kataleptikê of the Stoics.
Many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. This rests in part on the principle that every effect is a consequence of an antecedent cause or causes; for causality to be true it is not necessary for an effect to be predictable, as the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. In order to avoid scepticism, however, others have held that knowledge does not require certainty. It has often been thought that anything known must satisfy certain criteria of truth. Except for alleged cases of self-evident truths, whatever is claimed as known by ‘deduction’ or ‘induction’ must meet criteria specifying when acceptance of it is warranted to some degree.
Besides, there is another view: the absolute, global view that we do not have any knowledge whatsoever. It is doubtful, however, that any philosopher seriously entertains such a total and unqualified scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to ‘the evident’, the non-evident being any belief that requires evidence in order to be warranted.
René Descartes (1596-1650), in his sceptical guise, never doubted the content of his own ideas; what he doubted was whether they ‘corresponded’ to anything beyond ideas.
All the same, Pyrrhonian and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, as opposed to the truth or belief conditions, that provides the grist for the sceptic’s mill. The Pyrrhonist will suggest that no non-evident, empirical proposition is sufficiently warranted, whereas the Cartesian sceptic will agree that no empirical proposition about anything other than one’s own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views concerns the stringency of the requirements for a belief being sufficiently warranted to count as knowledge.
A Cartesian requires certainty, but a Pyrrhonist merely requires that the belief in question be more warranted than its negation.
Cartesian scepticism is motivated by the way in which Descartes argues for scepticism: the claim is that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions, because there is no way to justifiably deny that our senses are being stimulated by some cause radically different from the objects which we normally think affect our senses. Thus, if the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.
Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better reason for believing any non-evident proposition than for denying it, whereas a Cartesian need only show that such beliefs fall short of certainty.
Among the many contributions to the theory of knowledge, it is difficult to identify a set of shared doctrines, but it is possible to discern two broad styles of pragmatism. Both styles agree that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.
Pragmatism of a reformist kind repudiates the requirement of absolute certainty for knowledge and insists on the connection of knowledge with activity; it preserves the legitimacy of traditional questions about the truth-conduciveness of our cognitive practices, and sustains a conception of truth objective enough to give those questions a unified point.
Pragmatism of a revolutionary kind, by contrast, relinquishes the objectivity of truth, acknowledging no legitimate epistemological questions over and above those that arise naturally within our current cognitive practice.
It seems clear that certainty is a property that can be ascribed to either a person or a belief. We can say that a person, ‘S’, is certain, or we can say that a proposition, ‘p’, is certain. The two uses can be connected by saying that ‘S’ has the right to be certain just in case ‘p’ is sufficiently warranted.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanisms without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile or eliminate Descartes’s merging division between mind and matter became the most central feature of Western intellectual life.
In Nietzsche’s view, the separation between mind and matter is more absolute and total than previously been imagined. Based on the assumptions that there are no really necessary correspondences between linguistic constructions of reality in human subjectivity and external reality, we can deuced that we are all locked in ‘a prison house of language’. The prison as he concluded it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.
Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialists’ ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said. Is not exclusive to natural phenomenons and favours reductionistic examination of phenomena at the expense of mind? It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow and basis for the free exercise of individual will.
Nietzsche’s emotionally charged defence of intellectual freedom and radial empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, attempted by Edmund Husserl 1859-1938, a German mathematician and a principal founder of phenomenology, wherefor to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.
The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism, the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. The obvious attribution of a direct linkage between the nineteenth-century crisis about the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of this cultural ambience and the ways in which the conflict might be resolved.
The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic’ notions.
Albert Einstein unveiled two theories: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and postulated absolute space.
If the universe is a seamlessly interactive system that evolves to a higher level of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a ‘progressive principle of order’ in complementary relation to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.
But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.
Issues surrounding certainty are especially connected with those concerning ‘scepticism’. Although Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter (e.g., ethics) or in any area whatsoever. Classical scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth (e.g., there is a gulf between appearances and reality), and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systematized in the tropes of Aenesidemus. The scepticism of Pyrrho and the new Academy was thus a system of argument opposing dogmatism, and particularly the philosophical system-building of the Stoics.
As presented particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoics’ conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptic concludes in epochē, or the suspension of belief, and then goes on to celebrate a way of life whose object is ataraxia, or the tranquillity resulting from suspension of belief.
Mitigated scepticism, by contrast, accepts everyday or commonsense belief, not as the delivery of reason, but as due more to custom and habit; it nonetheless remains sceptical about the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the ancient sceptics from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic; however, in the ‘method of doubt’ he uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes trusts in categories of ‘clear and distinct’ ideas, not far removed from the phantasiá kataleptikê of the Stoics.
Many sceptics have traditionally held that knowledge requires certainty, and they claim that certain knowledge is not possible. This rests in part on the principle that every effect is a consequence of an antecedent cause or causes; for causality to hold it is not necessary for an effect to be predictable, as the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. In order to avoid scepticism, others have held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy certain criteria as well as being true: whether a belief is reached by ‘deduction’ or by ‘induction’, there will be criteria specifying when accepting it is warranted to some degree.
Besides, there is another view: the absolute global view that we do not have any knowledge whatsoever. However, it is doubtful that any philosopher seriously entertains absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to any non-evident proposition, had no such hesitancy about assenting to ‘the evident’ (the non-evident being any belief that requires evidence in order to be warranted).
René Descartes (1596-1650), in his sceptical guise, never doubted the content of his own ideas. The challenging question was whether they ‘corresponded’ to anything beyond ideas.
All the same, both Pyrrhonist and Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic’s mill. A Cartesian sceptic will argue that no empirical proposition about anything other than one’s own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. The essential difference between the two views concerns the stringency of the requirements for a belief’s being sufficiently warranted to count as knowledge.
A Cartesian requires certainty, but a Pyrrhonist merely requires that a proposition be more warranted than its negation.
The Cartesian argument for scepticism holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly, is that there is a legitimate doubt about all such propositions, because there is no way justifiably to deny that our senses are being stimulated by some cause radically different from the objects we normally think affect our senses. Thus, if the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.
Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, the arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better reason for believing a proposition than for believing its negation, whereas a Cartesian need only show that knowledge requires certainty.
Among the many contributions pragmatism has made to the theory of knowledge, it is difficult to identify a set of shared doctrines, but it is possible to discern two broad styles of pragmatism. Both styles agree that the Cartesian approach is fundamentally flawed, but they respond to that flaw very differently.
Pragmatism of the more radical sort, by contrast, relinquishes the objectivity of truth and acknowledges no legitimate epistemological questions over and above those that arise naturally within our current cognitive practice.
It seems clear that certainty is a property that can be ascribed either to a person or to a belief. We can say that a person ‘S’ is certain, or we can say that a proposition ‘p’ is certain. The two uses can be connected by saying that ‘S’ has the right to be certain just in case ‘p’ is sufficiently warranted.
In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. More or less, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or ever possible, either for any proposition at all, or for any proposition from some suspect family: ethics, theology, memory, empirical judgement, and so forth. A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion, and of the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built. Others reject the metaphor, looking instead for mutual support and coherence, without foundations. In moral theory, the analogous absolutism is the view that there are inviolable moral standards or absolute duties or prescriptions.
In spite of the notorious difficulty of reading Kantian ethics, the distinction is clear enough: a hypothetical imperative embeds a command which is in place only given some antecedent desire or project: ‘If you want to look wise, stay quiet’. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction does not apply. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, ‘tell the truth (regardless of whether you want to or not)’. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: ‘If you crave drink, don’t become a bartender’ may be regarded as an absolute injunction applying to anyone, although it is only activated in the case of those with the stated desire.
In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: ‘Act only on that maxim through which you can at the same time will that it should become a universal law’; (2) the formula of the law of nature: ‘Act as if the maxim of your action were to become through your will a universal law of nature’; (3) the formula of the end-in-itself: ‘Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end’; (4) the formula of autonomy, or considering ‘the will of every rational being as a will which makes universal law’; (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
A categorical proposition, by contrast, is one that is not conditional: it simply affirms or denies its predicate. Modern opinion is wary of the distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?); ‘If X is given a range of tasks, she performs them better than many people’ (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
A central representation in physical theory is the concept of a field. A field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. Are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require us to admit ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires an understanding of how forces of attraction and repulsion can be ‘grounded’ in the properties of the medium.
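The test-particle definition of a force field can be put in a standard formula; the electric case is a convenient illustration (the limit is taken so that the test charge itself does not disturb the field being measured):

```latex
% Electric field at a point r: force per unit test charge,
% in the limit of a vanishingly small test charge q
\mathbf{E}(\mathbf{r}) \;=\; \lim_{q \to 0} \frac{\mathbf{F}(\mathbf{r})}{q}
```

On the dispositional reading, this equation records only what a charge would experience if placed at r; on the categorical reading, E(r) reports a real state of the medium at that point.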
The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism. Although his equal hostility to ‘action at a distance’ muddies the water, the notion is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804). Both Boscovich and Kant developed the idea persuasively and in the end influenced the scientist Faraday, with whose work the physical notion became established. In his paper ‘On the Physical Character of the Lines of Magnetic Force’ (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether the motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.
Once again we meet the pragmatic theory of truth, the view especially associated with the American psychologist and philosopher William James (1842-1910), that the truth of a statement can be defined in terms of the ‘utility’ of accepting it. Put so baldly, the view is open to an obvious objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate, and the likely success of the projects of its possessor. The evolution of a system of representation, either perceptual or linguistic, seems bound to connect success with evolutionary adaptation, or with utility in the widest sense. The Wittgensteinian doctrine that meaning is use bears on the nature of belief and its relations with human attitude and emotion, and on the connection between belief in a truth on the one hand and action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us as cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant’s doctrine of the primacy of practical reason, and it continued to play an influential role in the theory of meaning and truth.
James, although with characteristic generosity he exaggerated his debt to Charles S. Peirce (1839-1914), charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual’s personal consciousness.
From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His ‘Will to Believe’ doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief’s benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.
Such an approach, nonetheless, sets James’s theory of meaning apart from verificationism, which is dismissive of metaphysics. Unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and moral responses; his standard of value is not a way of dismissing metaphysical claims as meaningless. It should also be noted that, in his more circumspect moments, James did not hold that even his broad set of consequences was exhaustive of a term’s meaning. ‘Theism’, for example, he took to have antecedent, definitional meaning, in addition to its important pragmatic meaning.
James’s theory of truth reflects his teleological conception of cognition, by considering a true belief to be one which is compatible with our existing system of beliefs and leads us to satisfactory interaction with the world.
Peirce’s famous pragmatist principle, however, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid. If we believe this, we expect that if we were to dip litmus paper into it, the paper would turn red: we expect an action of ours to have certain experimental results. The pragmatic principle holds that a list of the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: clarification using the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
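The shape of such a clarification can be sketched schematically; the notation here is illustrative rather than Peirce’s own, with C a concept and Test/Result standing in for the experimental conditionals associated with it:

```latex
% Pragmatic clarification of a concept C as the list of
% experimental conditionals we associate with applying it
C(x) \;\Longleftrightarrow\; \bigwedge_{i}\big(\mathrm{Test}_i(x) \rightarrow \mathrm{Result}_i(x)\big)
% e.g. Acid(liquid) -> (dip litmus paper -> paper turns red)
```

The biconditional form should not be read as a claim that the conditionals are analytic; as the next paragraph notes, Peirce denied exactly that.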
Most important, however, is the application of the pragmatic principle in Peirce’s account of reality: when we take something to be real, we think it is ‘fated to be agreed upon by all who investigate’ the matter to which it relates. In other words, if I believe that it is really the case that ‘p’, then I expect that if anyone were to inquire into whether ‘p’, they would arrive at the belief that ‘p’. It is not part of the theory that the experimental consequences of our actions should be specified in a narrowly empiricist vocabulary; Peirce insisted that perceptual theories are abounding in latency. Nor is it his view that the collected conditionals that clarify a concept are all analytic. In addition, in later writings he argued that the pragmatic principle could only be made plausible to someone who accepted its metaphysical realism: it requires that ‘would-bes’ are objective and, of course, real.
If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition to it, for they seem legion. Some opponents deny that the entities posited by the relevant discourse exist: the standard example is ‘idealism’, the view that reality is somehow mind-correlative or mind-co-ordinated, so that the real objects comprising the ‘external world’ are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine of ‘idealism’ centres on the conceptual point that reality as we encounter it is meaningful and reflects the workings of mindful purposes, and it construes this as meaning that the inquiring mind itself makes a formative contribution, not merely to our understanding of the nature of the ‘real’, but to the resulting character we attribute to it.
The term ‘real’ is most straightforwardly used when qualifying another description: a real ‘x’ may be contrasted with a fake ‘x’, a failed ‘x’, a near ‘x’, and so forth. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to treating it as a thing by some doctrine or theory. The central error in thinking of reality as the totality of existence is to think of the ‘unreal’ as a separate domain of things, perhaps unfairly deprived of the benefits of existence.
Non-existence, accordingly, is understood relative to whatever has actual, distinct, and demonstrable existence; talk of ‘nothing’ as a thing is the product of a logical confusion, that of treating the term ‘nothing’ as itself a referring expression instead of a ‘quantifier’. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as ‘Nothing is all around us’ talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate ‘is all around us’ has application. The feeling which led some philosophers and theologians, notably Heidegger, to talk of the experience of nothing is not properly the experience of nothing, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. One difference between ‘existentialism’ and ‘analytic philosophy’ on this point is that, whereas the former is afraid of nothing, the latter thinks that there is nothing to be afraid of.
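The quantifier analysis sketched above can be made explicit in first-order notation; the predicate name is of course illustrative:

```latex
% Confused reading: 'nothing' as a name n for a special thing
%   AllAround(n)
% Correct reading: a negated existential quantification
\neg\exists x\,\mathrm{AllAround}(x)
\quad\equiv\quad
\forall x\,\neg\mathrm{AllAround}(x)
```

On the correct reading the sentence denies that the predicate has any instance, rather than asserting that some peculiar object satisfies it.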
A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.
The standard opposition is between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925), and borrowed from the ‘intuitionistic’ critique of classical mathematics, is that the unrestricted use of the ‘principle of bivalence’ is the trademark of ‘realism’. However, this suggestion has to overcome counter-examples in both directions: although Aquinas was a moral ‘realist’, he held that moral reality was not sufficiently structured to make true or false every moral claim, while Kant, who believed that he could use the law of bivalence happily in mathematics, did so precisely because mathematics was only our own construction.
Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist and are independent of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the influential opposition to realism has come from philosophers such as Goodman, who are impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
The modern treatment of existence in the theory of ‘quantification’ is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property expressed has instances. Existence is therefore treated as a second-order property, or a property of properties. It is fitting to say that in this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem, nevertheless, is created by sentences like ‘This exists’, where some particular thing is indicated, for such a sentence seems to express a contingent truth (this might not have existed), yet no other predicate is involved. ‘This exists’ is therefore unlike ‘Tame tigers exist’, where a property is said to have an instance, for the word ‘this’ does not locate a property, but only an individual.
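Frege’s dictum can be rendered schematically; ‘Tame tigers exist’ then comes out as a claim about a concept, not about any particular tiger:

```latex
% 'Tame tigers exist': the concept has at least one instance
\exists x\,\big(\mathrm{Tiger}(x) \land \mathrm{Tame}(x)\big)
% Frege: affirming existence = denying that the number
% of instances of the concept F is nought
\exists x\,F(x) \;\leftrightarrow\; \neg\big(\#\{x : F(x)\} = 0\big)
```

The quantifier attaches to the predicate, which is why ‘This exists’, whose subject is an individual rather than a predicate, resists the same analysis.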
Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.
Philosophical reflection on bare existence, set apart from the properties of what exists, yields little that can be said; it is not apparent that there can be such a subject by itself. Nevertheless, the concept had a central place in philosophy from Parmenides to Heidegger. The essential question, ‘Why is there something and not nothing?’, prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.
In the tradition since Plato, this ground becomes a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or with God, but whose relation with the everyday world remains deeply obscure. The celebrated argument for the existence of God was first propounded by Anselm in his Proslogion. The argument defines God as ‘something than which nothing greater can be conceived’. God then exists in the understanding, since we understand this concept. However, if God existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we can conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but exists in reality.
The cosmological argument is an influential argument (or family of arguments) for the existence of God. Its premisses are that all natural things are dependent for their existence on something else; the totality of dependent things must then itself depend upon a non-dependent, or necessarily existent, being, which is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.
Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So the ‘God’ that ends the regress must exist necessarily: It must not be an entity of which the same kinds of question can be raised. The other problem with the argument is that it gives no ground for attributing concern and care to the deity, nor for connecting the necessarily existent being it derives with human values and aspirations.
The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence: Its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.
In the twentieth century, modal versions of the ontological argument were propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as maximally great if it exists and is perfect in every possible world. It then seems reasonable to concede that it is at least possible that a maximally great being exists, which means that there is a possible world in which such a being exists. But if it exists in one world, it exists in all (for the fact that such a being exists in a world entails that it exists and is perfect in every world); so it exists necessarily. The natural response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is more dangerous than it looks, since in the modal logic involved, from ‘possibly necessarily p’ we can derive ‘necessarily p’. A symmetrical proof starting from the premiss that it is possible that such a being does not exist would derive that it is impossible that it exists.
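The modal skeleton of this argument can be set out explicitly. The following is a minimal sketch only, assuming the modal system S5 and writing p for ‘a maximally great being exists’ (the step labels are supplied here, not drawn from any one of the authors named):

```latex
\begin{align*}
&(1)\quad \Diamond p
  && \text{premiss: possibly, a maximally great being exists}\\
&(2)\quad \Box\,(p \rightarrow \Box p)
  && \text{by definition: in every world, maximal greatness entails perfection in all worlds}\\
&(3)\quad \Diamond \Box p
  && \text{from (1) and (2), by elementary modal reasoning}\\
&(4)\quad \Diamond \Box p \rightarrow \Box p
  && \text{the characteristic S5 principle}\\
&(5)\quad \Box p
  && \text{from (3) and (4)}
\end{align*}
```

As the paragraph observes, a symmetrical derivation from the premiss that possibly no such being exists yields that necessarily no such being exists; everything therefore turns on which possibility premiss is conceded at step (1).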
The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that, as a result of the omission, the same result occurs. Thus, suppose that I wish you were dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine of acts and omissions, not a murderer. Critics reply that omissions can be as deliberate and immoral as commissions: If I am responsible for your food and fail to feed you, my omission is surely a killing. ‘Doing nothing’ can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law often finds it convenient to distinguish discontinuing an intervention, which may be permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears the general moral weight placed upon it.
The doctrine of double effect is a principle attempting to define when an action that has both a good and a bad result is morally permissible. In one formulation, such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing the nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one or two: On this analogy, the soul is the form of the body. Life after death is possible only because a form itself does not perish (dying is a loss of form).
It is therefore, in some sense, available to animate a new body: It is not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas’s account, a person enjoys no privileged position of self-understanding: We understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given. Difficulty at this kind of point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth; it is now widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
The special way that we each have of knowing our own thoughts, intentions, and sensations has been questioned by the many philosophical behaviourist and functionalist tendencies that have found it important to deny that there is such a special way, arguing that the way I know of my own mind is much the way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that the behaviour of reporting the results of introspection is a particular and legitimate kind of behaviour that deserves notice in any account of human psychology. The philosophy of history is the philosophical reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegel, however, it came to mean universal or world history. The Enlightenment confidence that superstition was being replaced by science, reason, and understanding gave history a progressive moral thread, and under the influence of the German philosopher and spearhead of Romanticism Johann Gottfried Herder (1744-1803), and of Immanuel Kant, this idea was taken further, so that the philosophy of history became the detecting of a grand system: the unfolding of the evolution of human nature as witnessed in successive stages (the progress of rationality, or of Spirit). This essentially speculative philosophy of history is given an extra Kantian twist in the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engines of historical change. The idea is readily intelligible once the world of nature and the world of thought become identified.
The work of Herder, Kant, Fichte, and Schelling is synthesized by Hegel: History has a plot, and this is the moral development of man, equated with the growth of freedom within the state; this in turn is the development of thought, or a logical development in which various necessary moments in the life of the concept are successively achieved and improved upon. Hegel’s method is at its most successful when the object is the history of ideas, where the evolution of thinking may march in step with logical oppositions and their resolution as encountered by various systems of thought.
With the revolutionary communist Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, retaining Hegel’s progressive structure but relocating the achievement of the goal of history to a future in which the political conditions for freedom come to exist, so that economic and political forces rather than ‘reason’ are in the engine room. Although such speculations upon history continued to be written, by the late 19th century large-scale speculation of this kind had given way to concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic, and historian Wilhelm Dilthey, it was important to show that the human sciences, such as history, are objective and legitimate, but nonetheless in some way different from the enquiries of the scientist. Since the subject-matter is the past thoughts and actions of human beings, what is needed is the ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian’s own. The British writer, philosopher, and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of this Verstehen approach, held that the explanation of an action is gained not by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions lay behind it, but by re-living the situation and the deliberations of past agents as if they were the historian’s own.
The question of the form of historical explanation, and the claim that general laws have either no place or only a minor place in the human sciences, is also prominent in these thoughts about the distinctiveness of history: We regain past actions by re-living the situation, and thereby understanding what their agents experienced and thought.
The theory-theory is the view that everyday attributions of intentions, beliefs, and meanings to other persons proceed through the tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language.
On the rival view, our understanding of others is not gained by the tacit use of a ‘theory’ enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation ‘in their moccasins’, or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they are our own. The suggestion is a modern development of the Verstehen tradition associated with Dilthey, Weber, and Collingwood.
In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: A human being’s corporeal nature therefore requires that knowledge start with sense perception. The same limitations do not apply further up the hierarchy of being, to the celestial intelligences or angels.
In the domain of theology, Aquinas deploys the distinction, emphasized by Eriugena, between what can be known of God by natural reason and what only through revelation, and on the side of reason he lays out five arguments for the existence of God: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, or in other words something that has necessary existence; (4) the gradations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end, towards which all things are directed, and the existence of this end demands a being that ordained it. The first four are variants of the cosmological argument; the fifth is a physico-theological argument.
He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God’s essence is identified with his existence, as pure actuality. God is simple, containing no potentiality. Accordingly, we cannot obtain knowledge of what God is (his quiddity), but must remain content with descriptions that apply to him partly by way of analogy: God, in revealing himself, speaks of himself but not of his essence. (A comparable interpretative maxim, doing much the same work as the principle of charity, suggests that we regulate our procedures of interpretation by maximizing the extent to which we see a subject as humanly reasonable, rather than the extent to which we see the subject as right about things.)
A pressing problem of ethics was posed by the English philosopher Philippa Foot in her ‘The Problem of Abortion and the Doctrine of the Double Effect’ (1967). A runaway trolley comes to a fork in the track. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and you, as a bystander, can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of others in which utilitarian reasoning seems to lead to one course of action, but a person’s integrity or principles may oppose it.
A related distinction separates events from actions: Things that merely happen to us, by accident, chance, or fortune, forbid us to talk of rationality and intention, which are the categories we may apply only if we conceive of doings as actions, deeds, achievements, exploits, or feats. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the ‘will’ and ‘free will’. Other problems in the theory of action include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing ‘by’ doing another thing. Even the placing and dating of actions can be problematic: Where someone shoots someone on one day and in one place, and the victim dies on another day and in another place, where and when did the murderous act take place?
It is not clear, moreover, that only events are causally related. Kant’s example of a cannonball at rest, motionless upon a cushion, yet causing the cushion to be the shape that it is, suggests that states may be causally related as well; objects and facts may also be causally related. The central problem is to understand the element of necessitation or determination of the future. Events, Hume thought, are in themselves ‘loose and separate’: How then are we to conceive of the tie between them? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, rather than any acquaintance with the connections determining those patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the ‘must’ of causal necessitation. Particular puzzles about causation arise quite apart from the general problem of forming any conception of what it is: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?
The problem of free will is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event ‘C’, there will be some antecedent state of nature ‘N’, and a law of nature ‘L’, such that given L, N will be followed by C. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state ‘N’ and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: How then can I truly be said to be their author, or be responsible for them?
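The definition just given can be put schematically. This is a sketch only, using the paragraph’s own letters C, N, and L, and reading the arrow as ‘is lawfully followed by’:

```latex
% Determinism: every event C has some antecedent state N and law L
% such that, given L, N is followed by C
\forall C \,\exists N \,\exists L :\quad (L \wedge N) \rightarrow C

% The regress: N is itself an event or state, so it too is determined
\forall N \,\exists N' \,\exists L' :\quad (L' \wedge N') \rightarrow N
```

Iterating the second schema carries the chain of determining states back before the agent’s birth, which is what generates the threat to responsibility described above.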
Reactions to this problem are commonly classified as: (1) Hard determinism, which accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism, or compatibilism, whereby reactions in this family assert that everything you need from a notion of freedom is quite compatible with determinism. In particular, if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option). (3) Libertarianism, the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or postulating two independent but consistent ways of looking at an agent, the scientific and the humanistic, so that it is only through confusing them that the problem seems urgent. None of these avenues has gained general acceptance. It is, moreover, an error to confuse determinism with fatalism.
The dilemma of determinism supposes that if an action is the end of a causal chain stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.
The dilemma then adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent events brought it about, and in that case nobody is responsible for its ever occurring. So, whether or not determinism is true, responsibility is shown to be illusory.
Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.
A volition is a mental act of willing or trying whose presence is sometimes supposed to make the difference between intentional and unintentional action, as well as between action and mere behaviour. Nevertheless, the theories that there are such acts are problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or unintentional nature of the volition itself now needs explanation. That is not to say that volitions are never taken for granted: In Kant, to will is to determine oneself to act in accordance with the law of autonomy or freedom, that is, in accordance with universal moral law and regardless of selfish advantage.
The categorical imperative is contrasted in Kantian ethics with the hypothetical imperative, a command embedded in an antecedent desire or project: ‘If you wish to look wise, stay quiet’. The injunction to stay quiet applies only to those with the antecedent desire or inclination: If one has no desire to look wise, the command has no force. A categorical imperative cannot be so avoided; it is a requirement that binds anybody, regardless of their inclinations. It could be expressed as, for example, ‘Tell the truth (regardless of whether you want to or not)’. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: ‘If you crave drink, don’t become a bartender’ may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
A central object in the study of Kant’s ethics is to understand the expressions of the inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant’s own applications of the notion are not always convincing. One cause of confusion is relating Kant’s ethical values to theories such as expressivism: If moral demands are expressive, they nonetheless cannot be the expression of a mere sentiment, but must derive from something ‘unconditioned’ or ‘necessary’, such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands. Because the need to issue commands is as basic as the need to communicate information, animal signalling systems may often be interpreted either way, and understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse, is a live question; the ethical theory of ‘prescriptivism’ in fact equates the two functions. A further question is whether there is an imperative logic. ‘Hump that bale’ seems to follow from ‘Tote that barge and hump that bale’, just as ‘It’s windy’ follows from ‘It’s windy and it’s raining’: But it is harder to say how to include other forms; does ‘Shut the door or shut the window’ follow from ‘Shut the window’, for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying one command without satisfying another, thereby turning it into a variation of ordinary deductive logic.
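The satisfaction criterion mentioned in the last sentence can be stated compactly. A sketch, writing !p for the imperative ‘bring it about that p’ (the notation is supplied here and is not standard to any one system):

```latex
% An imperative !q follows from !p just in case every way of
% satisfying !p is also a way of satisfying !q
!p \;\vdash\; !q \quad\text{iff}\quad p \vdash q
```

On this criterion ‘Hump that bale’ does follow from ‘Tote that barge and hump that bale’, since p ∧ q entails q; but equally ‘Shut the door or shut the window’ follows from ‘Shut the window’, since p entails p ∨ q, an inference (often called Ross’s paradox) that many find counterintuitive. This is one reason the reduction of imperative logic to ordinary deductive logic remains contested.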
Although for many people morality and ethics amount to the same thing, there is a usage that restricts morality to systems such as that of Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of ‘moral’ considerations from other practical considerations. The scholarly issues are complicated, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.
Cartesian doubt is the method of investigating how much knowledge has a basis in reason or experience, as used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process ends by locating the point of certainty in my awareness of my own self. Descartes thus gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is Cartesian dualism, or the separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a ‘clear and distinct perception’ of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: As Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit’.
Descartes’s notorious denial that non-human animals are conscious is a stark illustration of where this priority can lead. In his conception of matter Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature.
Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected many times, its relentless exposure of the hardest issues, its exemplary clarity, and even its initial plausibility all contrive to make him the central point of reference for modern philosophy.
The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changing circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of animal behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense it may be instinctive in human beings to be social; and given what we now know about the evolution of human language abilities, it seems clear that our real or actualized self is not imprisoned in our minds.
The self is implicitly a part of the larger whole of biological life; it derives its existence from embedded relations to this whole, and constructs its reality on evolved mechanisms that exist in all human brains. This suggests that any sense of the ‘otherness’ of self and world is an illusion that disguises their actual relations, and that the self, in the temporality of its being, is a biological reality. A proper definition of this whole must, of course, include the evolution of the larger indivisible whole: the cosmos and the unbroken evolution of all life, from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulation emerges, interactions responsible for properties owing to the whole that sustain the existence of the parts.
Developments in the history of mathematics, and the exchanges between the mega-narratives and frame tales of religion and science, were critical in the minds of those who contributed to the first scientific revolution of the seventeenth century. That revolution allowed scientists a far better understanding of physical reality, but the classical paradigm in physics also resulted in the stark Cartesian division between mind and world, which became one of the most characteristic features of Western thought. What follows is not another ill-mannered diatribe against our misunderstandings, but an attempt to draw out the connections between self-realization and undivided wholeness, alternative characterizations of physical reality, and the epistemological foundations of physical theory.
The subjectivity of our mind affects our perceptions of the world, a world that natural science holds to be objective. We might instead regard both mind and matter as individualized forms that belong to the same underlying reality.
Our everyday experience confirms the apparent fact that the world is dual-valued, divided into subject and objects. We, as conscious, experiencing persons, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which is opposed to us as subjects. Physical objects are only part of the object-world: there are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experiences per se are purely sensational and do not make a distinction between object and subject; only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given categories of language.
Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject in the act of self-reflection. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind; our experience is already conceptualized by the time it comes into our consciousness. Conceptualized experience is negative insofar as it destroys the original pure experience; in a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects, and objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject, for the subject is causally and apodeictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies it; it is only the subject who can do so. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of a dualism in which object and subject are really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.
Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance, God. Cartesian dualism, however, contradicts itself: the very fact that Descartes posits the "I", that is, the subject, as the only certainty defies materialism, and with it the concept of any "res extensa". The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is thus superior to the object: the object is only derived, while the subject is original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a "res extensa", and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is itself posited by God. Quite apart from the problem of interaction between these two different substances, then, Cartesian dualism is not adequate for explaining and understanding the subject-object relation.
Denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism does not resolve the problem either. What the positivists did was merely to recast the subject-object relation in linguistic form: it was no longer a metaphysical problem, but only a linguistic one, since our language has formed this object-subject dualism. Such thinking is superficial, because it fails to see that in the very act of its analysis it inevitably operates within the mind-set of subject and object. By relativizing object and subject in terms of language and analytical philosophy, it avoids the elusive and problematic aporia of subject and object, which has been a fundamental question of philosophy ever since its beginnings. Shunning these metaphysical questions is no solution. Excluding something by reducing it to a more material and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of mankind.
Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives; every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real, but this assumption does not prove the reality of our experience; it shows only that with this method science is most successful in explaining our empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object, and that to attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behavior, and his task is now to find his way back and strive toward this highest fulfilment. But are we not, on the conclusion reached above, forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like scientists, simply have their own frame of reference and methodology for explaining the supra-sensible facts most successfully?
If we assume mind to be the originator of the subject-object dualism, then we cannot confer more reality on the physical than on the mental aspect, just as we cannot deny the one in terms of the other. The crude language of the earliest users of symbols must have consisted largely of gestures and nonsymbolic vocalizations, and their spoken language probably only later became a relatively independent, closed cooperative system. Only after hominids had begun to use symbolic communication did symbolic forms progressively take over the rudimentary functions previously served by nonsymbolic gestures and vocalizations. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The idea of a perceivable, objective spatial world thus brings with it the idea of the subject as being in that world, with the course of the subject's perceptions due both to his changing position within the world and to the more or less stable way the world is. The generally shared idea that there is an objective world, and the idea that the subject is somewhere, with where that is given by what he can perceive, are manifestations of the same conception.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. And it is now clear that language processing is not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules eventually wired together on some neural circuit board.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in those terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. And Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, this communication resulted directly in an increasingly complex and densely structured social behavior. Social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behavior became selectively advantageous within the context of the social behavior of hominids.
Because this communication was based on symbolic vocalization, it required the evolution of neural mechanisms and processes that did not evolve in any other species. This marked the emergence of a mental realm that would increasingly appear separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained in terms of, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete accounting of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. Likewise, a scientific description of the physical substrate of a thought or feeling, no matter how complete, cannot account for the experience of the thought or feeling as an emergent aspect of global brain function.
If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. And while neither mode of understanding can displace the other, both are required to achieve a complete understanding of the situation.
Similar relationships between parts and wholes appear throughout biological reality: each movement toward a more complex order is associated with the emergence of new wholes that are greater than the sum of their parts, and the entire biosphere is a whole that displays self-regulating behavior greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system can be viewed as another stage in the evolution of more complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. But it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be "real" only when it is an "observed" phenomenon, we are led to some interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront an "event horizon" of knowledge beyond which science can say nothing about the actual character of this reality. If non-locality is a property of the entire universe, then we must also conclude that an undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we deal with in science per se, however, are manifestations of this reality, which are invoked or "actualized" in acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experiment, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the "indivisible" whole. Physical theory allows us to understand why the correlations occur. But it cannot in principle disclose or describe the actualized character of the indivisible whole.
The scientific implications of this extraordinary relationship between parts (quanta) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this relationship between parts and wholes in physics and biology is factored into our understanding, mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear fairly self-evident in logical and philosophical terms. It is also not necessary to attribute any extra-scientific properties to the whole in order to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. It is here, however, that we must be careful to distinguish between what can be "proven" in scientific terms and what can be reasonably "inferred" in philosophical terms on the basis of the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally had expertise on only one side of the two-culture divide. Perhaps more important, many of the potential threats to the human future, such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation, can be effectively addressed only by integrating scientific knowledge with knowledge from the social sciences and humanities. This background is included for a simple reason: the implications of the amazing new fact of nature's non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with it should feel free to skim, but the hope is that readers who engage with it will find common ground for understanding, and will rejoin the argument in the effort to close the circle and reach some conclusions about the relationship between the parts and the whole.
Moral motivation has been a major topic of philosophical inquiry since Aristotle, and especially since the 17th and 18th centuries, when the "science of man" began to probe human motivation and emotion. For writers such as the French moralistes, or Hutcheson, Hume, Smith, and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy, or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.
In some moral systems, notably that of Immanuel Kant, real moral worth attaches only to action undertaken because it is right. If you do what is right from some other motive, such as fear or prudence, no moral merit accrues to you. Yet this in turn seems to discount other admirable motivations, such as acting from sheer benevolence or "sympathy". The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. An opposing view rejects ethics that rely on highly general and abstract principles, particularly those associated with the Kantian categorical imperative. It may go so far as to say that no consideration, taken on its own, points in favour of any particular way of life; an estimate of what one ought to do can only proceed by identifying the salient features of a situation that weigh on one side or another.
Moral dilemmas have been a matter of intense philosophical concern. Situations in which each possible course of action breaches some otherwise binding moral principle are serious dilemmas, and the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what the agent did was as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it was not the subject's fault that he or she faced the dilemma; thus the rationality of these emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas exist even where no principles are pitted against each other, such as where a mother must decide which of two children to sacrifice. If we accept that dilemmas are real and important, this fact can be used to argue against theories, such as utilitarianism, that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that generates them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.
Nevertheless, some theories of ethics see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason. Situation ethics and virtue ethics, by contrast, regard them as at best rules of thumb, which frequently disguise the great complexity of practical reasoning that Kantian notions of the moral law conceal.
The natural law view of the relationship between law and morality is especially associated with St Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine eventually provided the main philosophical underpinning of the Catholic church. More broadly, the term covers any attempt to cement the moral and the legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writings, and arguably derives, through a Platonic view of ethics, from the Stoics. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen to hold in and of themselves by natural reason, and that (in religious versions of the theory) express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between law and God's will. Grotius, for instance, sides with the view that the content of natural law is independent of any will, including that of God.
The German natural law theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view. His great work was the De Jure Naturae et Gentium, 1672, translated into English as Of the Law of Nature and Nations, 1710. Pufendorf was influenced by Descartes, Hobbes, and the scientific revolution of the seventeenth century; his ambition was to introduce a newly scientific, "mathematical" treatment of ethics and law, free from the tainted Aristotelian underpinning of scholasticism. Like his contemporary Locke, however, his conception of natural law includes rational and religious principles, making it only a partial forerunner of the more resolutely empiricist and political treatments of the Enlightenment.
The dilemma Pufendorf confronted goes back to Plato's dialogue Euthyphro: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option, the choice of the gods creates goodness and value. Even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option, we have to understand a source of value lying behind or beyond the will even of the gods, and by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God's nature, and is therefore distinct from his will, but not distinct from him.
The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call good those things that we care about? It also generalizes to affect our understanding of the authority of other things: are the truths of mathematics, for example, necessary because we deem them to be so, or do we deem them to be so because they are necessary?
The natural law tradition may assume either a stronger form, in which it is claimed that various facts entail values, or a weaker form, in which it is claimed that reason by itself is capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed to be binding on all human beings, regardless of their desires.
The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed "synderesis" (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St Jerome, whose scintilla conscientiae (gleam of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas, for whom it is an infallible, natural, simple, and immediate grasp of first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.
This view of law and morality is especially associated with Aquinas and the subsequent scholastic tradition. A related conservative theme holds that enthusiasm for reform for its own sake, or for "rational" schemes thought up by managers and theorists, is entirely misplaced; major exponents of this theme include the British absolute idealist Francis Herbert Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. Notable in the idealism of Bradley is the doctrine that change is contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step toward this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz, and was the subject of the famous debate between him and Newton's absolutist pupil, Clarke.
Generally, "nature" is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense, or of dogs to be friendly) and to the natural world as a whole. The sense in which it applies to species quickly links up with ethical and aesthetic ideals: a thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children, and other species, not quite making it.
Nature in general can, however, function as a foil to any ideal as much as a source of ideals: in this sense fallen nature is contrasted with a supposed celestial realization of the "forms". The theory of forms is probably the most characteristic, and most contested, of the doctrines of Plato. In its background lies the Pythagorean conception of form as the key to physical nature, but also the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. Cratylus is famous for capping the doctrine of Heraclitus of Ephesus, the guiding idea of whose philosophy was that of the logos: something capable of being heard or hearkened to by people, which unifies opposites, and which is somehow associated with fire, preeminent among the four elements that Heraclitus distinguishes (fire; air, or breath, the stuff of which souls are composed; earth; and water). Heraclitus is principally remembered for the doctrine of the "flux" of all things, and the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g. the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning; it was his follower Cratylus who drew the conclusion that the flux cannot be captured in words. According to Aristotle, Cratylus eventually held that, since that which is everywhere and in every respect changing cannot truly be spoken of, the right course is just to stay silent and wag one's finger. Plato's theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.
The Galilean world view might have been expected to drain nature of its ethical content, yet the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity, and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but also comes to be associated with the progress of human history, a definition broad enough to fit many things, including ordinary human self-consciousness. Nature, as a contrast term, may exclude (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar; (2) the supernatural, or the world of gods and invisible agencies; (3) the world of rationality and intelligence, conceived as distinct from the biological and physical order; (4) the product of human intervention; and (5), related to that, the world of convention and artifice.
Different conceptions of nature continue to carry ethical overtones: for example, the conception of "nature red in tooth and claw" often provides a justification for aggressive personal and political relations, and the idea that it is women's nature to be one thing or another is taken as a justification for differential social expectations. Here the term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the "masculine" self-image, itself a social variable and potentially a distorting picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In this latter area, particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, or to the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. To more radical feminists, however, such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such powers and rights over others.
Biological determinism is the view that our biological inheritance not only influences but determines our development as persons with a variety of traits. At its extreme, the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.
The philosophy of social science is more heavily intertwined with actual social science than in the case of other subjects such as physics or mathematics, since its question is centrally whether there can be such a thing as sociology. The idea of a ‘science of man’, devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people’s own ideas of what should happen, and, like fashions, those ideas change in unpredictable ways as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.
The sociobiological approach to human behavior is based on the premise that all social behavior has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for assessing the various genetic stories that might provide such explanations.
Among the features proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people’s characteristics, e.g., at the limit of silliness, by postulating a ‘gene for poverty’. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to environment: for instance, it may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative ‘just so’ stories, which may or may not identify real selective mechanisms.
In the 19th century, attempts were subsequently made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which advocated an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology, and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T.H. Huxley said that Spencer’s definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the ‘hurdy-gurdy’ monotony of him, his whole system wooden, as if knocked together out of cracked hemlock.
Evolutionary ethics presupposes that later elements in an evolutionary path are better than earlier ones; applying this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been rethought in the light of biological discoveries concerning altruism and kin-selection, and of evolutionary psychology: the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoires, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.
An essential part of the ethics of the British absolute idealist F.H. Bradley (1846-1924) was his attack on individualism, largely on the grounds that self-sufficiency is an illusion: selfhood is achieved through community, through one’s station and its duties, and through contribution to social and other ideals. For Bradley, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley’s general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher G.W.F. Hegel (1770-1831).
Bradley’s holism recalls a preference, voiced much earlier by the German philosopher, mathematician, and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. Leibniz was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. For Friedrich Schelling (1775-1854), nature became a creative spirit whose aspiration is ever fuller and more complete self-realization. Romanticism drew on the same intellectual and emotional resources as German idealism, which culminated in the philosophy of Hegel and of absolute idealism.
Most ethics deals with problems of human desires and needs: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species, or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their very independence of human lives that their value consists. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but one of due humility and reverence, a moral disability. The problem is one of expressing this value, and of mobilizing it against utilitarian agendas for developing natural areas and exterminating species, more or less at will.
Many concerns and disputes cluster around the ideas associated with the term ‘substance’. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is; this will ensure that the substance of a thing is that which remains through change in its properties, and in Aristotle this essence becomes more than just the matter, but a unity of matter and form; (2) that which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties: a substance is then the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relation, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tends to disappear in empiricist thought, the sensible qualities of things, and the notion of that in which they inhere, giving way to an empirical notion of their regular concurrence. However, this is in turn problematic, since it only makes sense to talk of the concurrence of instances of qualities, not of qualities themselves, so the problem of what it is for a quality to be instanced remains.
Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.
The sublime is a concept deeply embedded in 18th-century aesthetics, deriving ultimately from the first-century rhetorical treatise On the Sublime by Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, ‘When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates its frame: and having overcome the opposition which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates, and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.’
In Kant’s aesthetic theory the sublime ‘raises the soul above the height of vulgar complacency’. We experience the vast spectacles of nature as ‘absolutely great’ and of irresistible might and power. This perception is fearful, but by conquering this fear, and by regarding as small ‘those things of which we are wont to be solicitous’, we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.
The doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers G.E. Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of ‘essentialism’, stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we would strictly not be imagining me and the hat, but only some different individuals.
The doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716) that if a person had any different attributes from the ones he has, he would not have been the same person. Leibniz thought that to ask what would have happened if Peter had not denied Christ is to ask what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name ‘Peter’ might be understood ‘what is involved in those attributes [of Peter] from which the denial does not follow’. Relations of size or amount, of arrangement in space, or of occurrence in time are commonly allowed to be external: relations which individuals could have or lack, depending upon contingent circumstances. The term ‘relations of ideas’ is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: ‘All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, relations of ideas and matters of fact’ (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known a priori must be internal to the mind, and hence transparent to us.
In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called ‘Hume’s fork’, is a version of the distinction between the a priori and the empirical, but it reflects the belief of the 17th and early 18th centuries that the a priori is established by chains of intuitive comparison of ideas. It is extremely important that in the period between Descartes and J.S. Mill a demonstration is not a formal logical deduction, but a chain of ‘intuitive’ comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results. A mathematical proof, by contrast, is a formal argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.
The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 5th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers, but an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of 1 is the irrational number √2.
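The classical argument for the irrationality of √2, reconstructed here in modern notation, is a proof by contradiction:

```latex
Suppose $\sqrt{2} = p/q$ for whole numbers $p, q$ with no common factor.
Then $p^2 = 2q^2$, so $p^2$ is even, and hence $p$ is even; write $p = 2r$.
Substituting gives $4r^2 = 2q^2$, i.e.\ $q^2 = 2r^2$, so $q$ is even too.
But then $p$ and $q$ share the factor $2$, contradicting the assumption
that they had no common factor. Hence $\sqrt{2}$ is not a ratio of
whole numbers.
```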
The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.
In the twentieth century, proofs have been written that are so complex that no one person understands every argument used in them. In 1976, a computer was used to complete the proof of the four-colour theorem. This theorem states that four colours are sufficient to colour any map in such a way that regions with a common boundary line have different colours. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.
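The content of the theorem can be illustrated (though of course not proved) with a small search: a brute-force four-colourer for a toy “map” given as an adjacency structure. The map and function names here are purely illustrative.

```python
from itertools import product

def four_color(adjacency):
    """Brute-force search for a 4-coloring of a small map graph.
    adjacency maps each region to the set of regions it borders."""
    regions = list(adjacency)
    for assignment in product(range(4), repeat=len(regions)):
        coloring = dict(zip(regions, assignment))
        # Accept the assignment only if every pair of neighbors differs.
        if all(coloring[r] != coloring[n]
               for r in regions for n in adjacency[r]):
            return coloring
    return None  # never reached for a genuine planar map, per the theorem

# A toy "map" of four mutually bordering regions: every pair must
# receive a different colour, so all four colours are needed.
toy_map = {'A': {'B', 'C', 'D'}, 'B': {'A', 'C', 'D'},
           'C': {'A', 'B', 'D'}, 'D': {'A', 'B', 'C'}}
coloring = four_color(toy_map)
```

Brute force is exponential in the number of regions, so this sketch is only feasible for tiny maps; the 1976 proof itself reduced the problem to checking a large but finite set of configurations.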
Proof theory studies the relations of deducibility among sentences in a logical calculus. Deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel’s second incompleteness theorem.
The use of a model to test for the consistency of an axiomatized system is older than modern logic. Descartes’ algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar representations were used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: proof theory studies relations of deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to sentences that are false under that same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula B is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). The central questions for a calculus are then whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only ‘tautologies’. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: every formula that is true under every interpretation is a theorem of the calculus.
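For the propositional case, validity and semantic consequence can be checked mechanically by enumerating all assignments of truth-values. A minimal sketch, with formulae represented as boolean functions (the helper names are illustrative, not standard):

```python
from itertools import product

def is_valid(formula, n):
    # A formula over n propositional variables is valid (a tautology)
    # if it is true under every assignment of truth-values.
    return all(formula(*vals) for vals in product([False, True], repeat=n))

def entails(premises, conclusion, n):
    # {A1 ... An} |= B: B is true in every assignment that makes
    # all the premises true.
    return all(conclusion(*vals)
               for vals in product([False, True], repeat=n)
               if all(p(*vals) for p in premises))

# 'p or not p' (excluded middle) is a tautology.
law_of_excluded_middle = is_valid(lambda p: p or not p, 1)

# Modus ponens as a semantic consequence: {p, p -> q} |= q,
# where 'p -> q' is definable as '(not p) or q'.
implies = lambda p, q: (not p) or q
modus_ponens = entails([lambda p, q: p, implies], lambda p, q: q, 2)
```

Truth-table enumeration decides validity for propositional logic; no such decision procedure exists for the full first-order predicate calculus, which is why Gödel’s completeness result concerns provability rather than mechanical checking.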
Euclidean geometry is the greatest example of the pure ‘axiomatic method’, and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth postulate of Euclid’s system (the parallel postulate) could be denied without inconsistency, leading to non-Euclidean geometries such as Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, providing the pattern and concepts later used by Albert Einstein in developing the general theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid’s Elements, attributed to the mathematician Eudoxus, contains a precise development of proportion, anticipating the theory of the real numbers, work which remained unappreciated until rediscovered in the 19th century.
An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.
The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behavior. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.
In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analyzed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.
Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles', where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicated factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
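The zero-sum idea invoked above can be made concrete with a small sketch: a search for a pure-strategy saddle point in a payoff matrix, the simplest notion of a solution for a two-person zero-sum game. The matrices below are hypothetical examples, not drawn from any of the applications mentioned.

```python
def saddle_point(payoff):
    """Find a pure-strategy saddle point of a zero-sum matrix game.
    payoff[i][j] is the row player's gain (and the column player's
    loss) when row strategy i meets column strategy j.
    Returns (row, col, value), or None if no pure equilibrium exists."""
    columns = list(zip(*payoff))
    for i, row in enumerate(payoff):
        for j, v in enumerate(row):
            # A saddle point is simultaneously the minimum of its row
            # and the maximum of its column: neither player can gain
            # by deviating unilaterally.
            if v == min(row) and v == max(columns[j]):
                return i, j, v
    return None

# A hypothetical 3x3 game with a saddle point of value 2.
game = [[4, 2, 5],
        [3, 1, 1],
        [8, 0, 6]]

# Matching pennies: a zero-sum game with no pure-strategy solution;
# its solution requires mixed (randomized) strategies.
pennies = [[1, -1],
           [-1, 1]]
```

Von Neumann’s minimax theorem guarantees that every finite two-person zero-sum game has a value once mixed strategies are allowed, even when, as in matching pennies, no pure saddle point exists.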
In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting for that term a term denoting only a subset of the things the original denotes. For example, in ‘all dogs bark’ the term ‘dogs’ is distributed, since the proposition entails ‘all terriers bark’, which is obtained from it by such a substitution. In ‘not all dogs bark’, the same term is not distributed, since the proposition may be true while ‘not all terriers bark’ is false.
A model is a representation of one system by another, usually one more familiar, whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful ‘heuristic’ role in science, there has been intense debate over whether a good model is required for scientific explanation, or whether an organized structure of laws from which the phenomena can be deduced suffices. The debate was inaugurated by the French physicist Pierre Duhem (1861-1916) in The Aim and Structure of Physical Theory (translated 1954). Duhem’s conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but that does not represent the deep underlying nature of reality. His famous thesis holds that no single hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system, although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.
Primary and secondary qualities: the division is associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns itself with. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material: a minimal list comprises size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are powers to excite particular sensory modifications in observers. Locke himself thought of these powers as identified with the texture of objects which, according to the corpuscularian science of the time, was the basis of an object’s causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are.
In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96). Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.
Modal realism, the doctrine advocated by the American philosopher David Lewis (1941-2002), holds that different possible worlds are to be thought of as existing exactly as this one does. Thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.
The modality of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called ‘modal’ include the tense indicators, ‘it will be the case that p’ and ‘it was the case that p’, and there are affinities between these and the ‘deontic’ indicators, ‘it is obligatory that p’ and ‘it is permissible that p’, and the notions of necessity and possibility.
The aim of a logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of the answer is that if we do not we contradict ourselves or, strictly speaking, we stand ready to contradict ourselves. Someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something. However, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs. There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find reasoning such that anyone failing to conform to it will have improbable beliefs. Traditional logic dominated the subject until the 19th century, and in the 20th century it has become increasingly recognized how fine the work was that was done within that tradition, but syllogistic reasoning is now generally regarded as a limited special case of the forms of reasoning that can be represented within the propositional and predicate calculi. These form the heart of modern logic, as their central notions of quantifiers, variables, and functions were the creation of the German mathematician Gottlob Frege (1848-1925), who is recognized as the father of modern logic, although his treatment of a logical system as an abstract mathematical structure, or algebra, had been heralded by the English mathematician and logician George Boole (1815-64), whose pamphlet The Mathematical Analysis of Logic (1847) pioneered the algebra of classes. The work was developed in An Investigation of the Laws of Thought (1854). Boole also published many works in pure mathematics, and on the theory of probability.
His name is remembered in the title of Boolean algebra, and the algebraic operations he investigated are known as Boolean operations.
The syllogistic, or categorical, syllogism is the inference of one proposition from two premises. An example is: ‘all horses have tails; all things with tails are four-legged; so all horses are four-legged’. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So the first premise of the example is the minor premise, the second the major premise, and ‘having a tail’ is the middle term. This classification enables syllogisms to be catalogued according to the form of the premises and the conclusion (the mood). The other classification is by figure, or the way in which the middle term is placed in the premises.
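The validity of the mood exemplified above (Barbara) can be checked mechanically by reading each categorical premise ‘all A are B’ as a set inclusion; a minimal sketch, with sets invented for illustration:

```python
def barbara(a, b, c):
    """Syllogism in the mood Barbara: all A are B; all B are C; so all A are C."""
    if a <= b and b <= c:   # both premises hold in this model
        return a <= c       # the conclusion must then hold as well
    return None             # a premise fails; this model tests nothing

# Invented extensions for the example's three terms.
horses = {"dobbin", "trigger"}
tailed = {"dobbin", "trigger", "rover"}
four_legged = {"dobbin", "trigger", "rover"}

print(barbara(horses, tailed, four_legged))  # True
```

No choice of finite sets can falsify the conclusion once both inclusions hold, which is just what the validity of the mood amounts to.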
Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a relatively small number of valid forms of argument. There have subsequently been rearguard actions attempting to extend its power, but in general it has been eclipsed by the modern theory of quantification: the predicate calculus, which is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes ‘=’ as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that χ = y iff (∀F)(Fχ↔Fy), which gives greater expressive power for less complexity.
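The higher-order definition of identity can be illustrated over a finite stock of predicates (a sketch only: the predicates are invented, and genuine second-order quantification ranges over all predicates, not a chosen few):

```python
# Identity defined as indiscernibility: x = y iff every predicate F
# agrees on x and y. Here "every predicate" is a small invented list.

predicates = [
    lambda n: n % 2 == 0,   # F1: is even
    lambda n: n > 3,        # F2: is greater than three
    lambda n: n < 10,       # F3: is less than ten
]

def indiscernible(x, y):
    return all(f(x) == f(y) for f in predicates)

print(indiscernible(4, 4))  # True: every F agrees
print(indiscernible(4, 5))  # False: F1 distinguishes them
print(indiscernible(4, 6))  # True, though 4 != 6
```

The last line shows why the finite list is only a sketch: with too few predicates, distinct objects come out ‘identical’. The full definition quantifies over all predicates, including, for each object x, the predicate ‘is identical with x’, which is what secures the right-to-left direction.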
Modal logic was of great importance historically, particularly in the light of doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher Clarence Irving Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His two independent proofs showing that from a contradiction anything follows helped motivate relevance logic, which uses a notion of entailment stronger than that of strict implication.
Modal logic studies the various doctrines concerning necessity and possibility, obtained by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written ‘N’ and ‘M’), meaning necessarily and possibly, respectively. Theses like p ➞ ◊p and □p ➞ p will be wanted. Controversial theses include □p ➞ □□p (if a proposition is necessary, it is necessarily necessary), characteristic of the system known as S4, and ◊p ➞ □◊p (if a proposition is possible, it is necessarily possible), characteristic of the system known as S5. The classical semantic theory for modal logic, due to the American logician and philosopher Saul Kripke (1940-) and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth in all accessible worlds, and possibility to truth in some accessible world. Various different systems of modal logic result from adjusting the accessibility relation between worlds.
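A minimal sketch of this possible-worlds semantics, with invented world names, accessibility relation, and valuation:

```python
# A tiny Kripke model: worlds, an accessibility relation, and a valuation
# assigning each atomic proposition the set of worlds where it is true.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w2", "w3"}}
val = {"p": {"w1", "w2"}, "q": {"w2"}}

def true_at(w, atom):
    return w in val[atom]

def box(w, atom):       # necessarily: true at every world accessible from w
    return all(true_at(v, atom) for v in access[w])

def diamond(w, atom):   # possibly: true at some world accessible from w
    return any(true_at(v, atom) for v in access[w])

print(box("w1", "p"))      # True: p holds at both accessible worlds
print(box("w1", "q"))      # False: q fails at w1 itself
print(diamond("w1", "q"))  # True: q holds at the accessible world w2
```

Adjusting `access` is exactly how the different systems arise: requiring the relation to be reflexive validates □p ➞ p, reflexive and transitive gives S4 (□p ➞ □□p), and an equivalence relation gives S5 (◊p ➞ □◊p).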
In Naming and Necessity, Saul Kripke gives the classical modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.
Semantics is one of the three branches into which ‘semiotic’, the general study of signs, is usually divided: the study of the meaning of words, and of the relation of signs to the things to which they are applicable. In formal studies, a semantics is provided for a formal language when an interpretation or ‘model’ is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to begin by attempting to provide a truth definition for the language, which will involve giving a full structure of the bearing that terms of different kinds have on the truth conditions of sentences containing them.
The basic case of reference is the relation between a name and the person or physical object it names; it is a moot point whether this relation, that between a description and whatever it describes, and that between me and the word ‘I’, are examples of the same relation or very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke’s Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term then becomes a derivative notion: it is whatever it is that determines the term’s contribution to the truth conditions of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth conditions to sentences. Other approaches search for something more substantive: causal, psychological, or social constituents that articulate the links between words and things.
However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, Berry, Richard, etc., from the purely logical paradoxes in which no such notions are involved, such as Russell’s paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is easy to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence ‘All English sentences should have a verb’ includes itself happily in the domain of sentences it is talking about), so the difficulty lies in forming a condition that picks out only the pathological cases of self-reference. Paradoxes of the second kind then need a different treatment. While the distinction is convenient, in allowing set theory to proceed by circumventing the latter paradoxes by technical means, even when there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that, while there is no agreed solution to the semantic paradoxes, our understanding of Russell’s paradox may be imperfect as well.
Truth and falsity are the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.) but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A further complication is presupposition: a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge, the English philosopher and historian R. G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of ‘absolute presuppositions’, which are not themselves properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means either that a truth-value is found ‘intermediate’ between truth and falsity, or that classical logic is preserved, but it becomes impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language.
One suggestion, commanding at least some consensus, is that where definite descriptions are involved we should count the overall sentence as false when the existence claim fails, and explain the data that the English philosopher P. F. Strawson (1919-) relied upon as effects of ‘implicature’.
Views about the meaning of terms will often depend on classifying the implications of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may as a matter of convention carry an implicature; thus one of the relations between ‘he is poor and honest’ and ‘he is poor but honest’ is that they have the same content (are true in just the same conditions) but the second has implicatures (that the combination is surprising or significant) that the first lacks.
In classical logic a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called ‘many-valued logics’.
A definition of the predicate ‘. . . is true’ for a language that satisfies convention T, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83), proceeds by his method of ‘recursive’ definition, enabling us to say for each sentence what it is that its truth consists in, while giving no single verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a ‘metalanguage’; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth-predicate. While this enables the approach to avoid the contradictions of the semantic paradoxes, it conflicts with the idea that a language should be able to say everything that there is to be said, and other approaches have become increasingly important.
The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white’ is that snow is white, the truth condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
Inferential semantics takes the role of a sentence in inference to be a more important key to its meaning than its ‘external’ relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics, procedural semantics, or conceptual role semantics, the view bears some relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.
The semantic theory of truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or about truth as shared across different languages. The view is similar to the disquotational theory.
The redundancy theory, also known as the ‘deflationary view of truth’, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell’s paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey is also remembered for the device of the Ramsey sentence: by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, and replacing the term by a variable, instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be trivially interpretable, and the content of the theory may reasonably be felt to have been lost.
Both Frege and Ramsey agreed that the essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on two points: (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, redundancy); (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from a true proposition. For example, the second claim may translate as ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.
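That this generalization needs no notion of truth can be confirmed by brute force: the formula (p & (p ➞ q)) ➞ q comes out true under all four assignments of truth-values to p and q, a check a few lines of code can run:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b false."""
    return (not a) or b

# Check (p and (p -> q)) -> q under every assignment of truth-values.
tautology = all(
    implies(p and implies(p, q), q)
    for p, q in product([True, False], repeat=2)
)
print(tautology)  # True
```

The enumeration mentions only the truth-values of ‘p’ and ‘q’ themselves; no truth predicate applying to sentences appears anywhere, which is the redundancy theorist's point.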
There are technical problems in interpreting all uses of the notion of truth in such ways; nevertheless, they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’, or ‘truth is a norm governing discourse’. Postmodern writings frequently advocate that we must abandon such norms, along with the allegedly damaging ‘nonsubjective’ concept of truth. Perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.
The disquotational theory, in its simplest formulation, is the claim that expressions of the form ‘‘S’ is true’ mean the same as expressions of the form ‘S’. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ‘‘Dogs bark’ is true’ or whether they say ‘dogs bark’. In the former representation the sentence ‘Dogs bark’ is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that ‘Dogs bark’ is true without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the ‘redundancy theory of truth’.
Validity is the relationship between the premises and the conclusion of an argument when the conclusion genuinely follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for a stronger notion is the field of relevance logic.
From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.
But this point of view by no means embraces the whole of the actual process, for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a ‘theory’. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the ‘truth’ of the theory lies.
Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution, rather than in providing a convincing mechanism for genetic change. And Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as ‘neo-Darwinism’ became the orthodox theory of evolution in the life sciences.
In the nineteenth century, attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). Its central premiss is that later elements in an evolutionary path are better or more worthy than earlier ones. The application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasizes the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
Evolutionary psychology attempts to explain mental traits on evolutionary principles, in which a variety of higher mental functions may be adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who ‘free-ride’ on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand in hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E. O. Wilson. The label is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin’s view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in complementary relation to competition. Such complementary relationships give rise to emergent self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
According to E. O. Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative’ to have a sense of higher purpose. Yet it is also clear that the ‘gods’ in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-view of science and that of religion. ‘Science for its part’, said Wilson, ‘will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiment. The eventual result of the competition between the two world-views, I believe, will be the secularization of the human epic and of religion itself.’
Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphorical and mythic provisions as a ‘comprehensible’ guide for living, in that man’s imagination and intellect play a vital role in his or her survival and evolution.
Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of ‘logical positivist’ approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the ‘explanans’ (that which does the explaining) and the explanandum (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler’s (1571-1630) laws of planetary motion are deducible from Newton’s laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include querying whether laws are necessary to explanation (we explain many everyday events without overtly citing laws); querying whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and querying whether a purely logical relationship is adequate to capturing the requirements we make of explanations. These may include, for instance, that we have a ‘feel’ for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
The argument to the best explanation is the view that once we can select the best of any competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
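The coin case can be made quantitative (a sketch: the prior probability of 0.99 for fairness below is an invented illustration of ‘antecedent improbability’): the bias hypothesis fits the data slightly better, yet a modest prior in favour of fairness outweighs the likelihood advantage.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n tosses with heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 1000, 530
like_fair = binom_pmf(k, n, 0.5)    # likelihood of the data under a fair coin
like_bias = binom_pmf(k, n, 0.53)   # likelihood under a bias of 0.53

prior_fair = 0.99                   # illustrative prior favouring fairness
post_fair = (prior_fair * like_fair) / (
    prior_fair * like_fair + (1 - prior_fair) * like_bias
)

print(like_bias > like_fair)  # True: bias *explains* the data better
print(post_fair > 0.5)        # True: fairness is still the more probable hypothesis
```

The point of the qualification is visible in the numbers: 0.53 is the maximum-likelihood value, so the bias hypothesis always wins on fit alone, but the posterior reverses the verdict once the prior is taken into account.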
The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotic into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy of the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, the problem of finding the basis of the division between syntax and semantics, and the problem of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect philosophies of both pragmatics and semantics.
On this conception, to understand a sentence is to know its truth-conditions. The conception has remained central in that those who offer opposing theories characteristically define their positions by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
The meaning of a complex expression is a function of the meanings of its constituents; this is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
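This compositional picture can be illustrated with a toy model. Everything below (the domain, the reference assignment, the predicate extensions) is invented purely for illustration; it is a minimal sketch of the truth-conditional idea, not any standard semantic machinery:

```python
# Toy truth-conditional semantics: the truth-value of a complex
# sentence is computed from the semantic values of its parts.

# Reference of singular terms (invented toy assignment).
reference = {"London": "london", "Paris": "paris"}

# Extension of predicates: the set of objects each is true of.
extension = {"is beautiful": {"paris"}, "is large": {"london", "paris"}}

def atomic(term, predicate):
    """An atomic sentence is true iff the term's referent
    falls in the predicate's extension."""
    return reference[term] in extension[predicate]

def conj(p, q):
    """'and' contributes a function from the truth-values of the
    sentences it operates on to the truth-value of the whole."""
    return p and q

# 'Paris is beautiful and London is large'
print(conj(atomic("Paris", "is beautiful"), atomic("London", "is large")))  # True
```

The point of the sketch is structural: each kind of expression (term, predicate, operator) contributes in its own characteristic way to the truth-conditions of the sentences containing it.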
The theorist of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom ‘London’ refers to the city in which there was a huge fire in 1666 is a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for the simple axiom ‘London’ refers to London in our truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth-condition, the replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on the theorist of meaning as truth-conditions to state this in a way which does not presuppose any prior, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.
Since the claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth-conditions must provide the substantive account. This charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its conceptual claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that p if and only if p. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher F. P. Ramsey (1903-30), the English philosopher A. J. Ayer, the later Wittgenstein, Quine, Strawson, Horwich and (confusingly and inconsistently, if this article is correct) Frege himself. But is the minimal theory correct?
The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence; but in fact, it seems that each instance of the equivalence principle can itself be explained. The facts from which the instance ‘London is beautiful’ is true if and only if London is beautiful can be explained are that ‘London’ refers to London and that ‘is beautiful’ is true of beautiful things. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’.
A counterfactual conditional, sometimes known as a subjunctive conditional, is a conditional of the form ‘if p were to happen, q would’, or ‘if p were to have happened, q would have happened’, where the supposition of ‘p’ is contrary to the known fact ‘not-p’. Such assertions are nevertheless useful: ‘if you had broken the bone, the X-ray would have looked different’, or ‘if the reactor were to fail, this mechanism would click in’ are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals (‘if the metal were to be heated, it would expand’), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever ‘p’ is false, so there would be no division between true and false counterfactuals.
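The last point can be checked mechanically. Read materially, ‘if p then q’ is equivalent to ‘not-p or q’, so every conditional with a false antecedent comes out true; a minimal sketch:

```python
from itertools import product

def material_implication(p, q):
    """'if p then q' read materially: false only when p is true and q is false."""
    return (not p) or q

# Full truth table for the material conditional.
for p, q in product([True, False], repeat=2):
    print(p, q, material_implication(p, q))

# Every conditional with a false antecedent is true; so material
# implication cannot separate true counterfactuals from false ones,
# since a counterfactual's antecedent is false by hypothesis.
assert all(material_implication(False, q) for q in (True, False))
```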
Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: ‘if you run out of water, you will be in trouble’ seems equivalent to ‘if you were to run out of water, you would be in trouble’. In other contexts there is a big difference: ‘if Oswald did not kill Kennedy, someone else did’ is clearly true, whereas ‘if Oswald had not killed Kennedy, someone else would have’ is most probably false.
The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether ‘q’ is true in the ‘most similar’ possible worlds to ours in which ‘p’ is true. The similarity ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactual or not may be of limited use.
A conditional is any proposition of the form ‘if p then q’. The condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, which merely tells us that either not-p, or q. Stronger conditionals include elements of modality, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility is semantic, yielding different kinds of conditionals with different meanings, or pragmatic, in which case there should be one basic meaning, with surface differences arising from other implicatures.
There are many forms of reliabilism, just as there are many forms of ‘foundationalism’ and ‘coherentism’. How is reliabilism related to these other two theories of justification? It is usually regarded as a rival, and this is apt in so far as foundationalism and coherentism traditionally focused on purely evidential relations rather than psychological processes. But reliabilism might also be offered as a deeper-level theory, subsuming some precepts of either foundationalism or coherentism. Foundationalism says that there are ‘basic’ beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that the basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. In this way reliabilism could complement foundationalism and coherentism rather than compete with them.
These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman’s claim about local reliability and knowledge, it will not be simple. The interesting thesis that counts as a causal theory of justification (in the sense of ‘causal theory’ intended here) is that a belief is justified in case it was produced by a type of process that is ‘globally’ reliable, that is, whose propensity to produce true beliefs (definable, to an acceptable approximation, as the proportion of the beliefs it produces, or would produce were it used as much as opportunity allows, that are true) is sufficiently high. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in the work of F. P. Ramsey (1903-30). In the theory of probability, Ramsey was the first to show how a ‘personalist’ theory could be developed, based on a precise behavioural notion of preference and expectation. In the foundations of mathematics, much of Ramsey’s work was directed at saving classical mathematics from ‘intuitionism’, or what he called the ‘Bolshevik menace of Brouwer and Weyl’. In the philosophy of language, Ramsey combined pragmatism with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts; each has a different, specific function in our intellectual economy. Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and their continuing friendship led to Wittgenstein’s return to Cambridge and to philosophy in 1929.
The Ramsey sentence of a theory is the sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g. ‘quark’, replacing the term by a variable, and existentially quantifying the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for a whole group of theoretical terms, the resulting sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Virtually all theories of knowledge, or of epistemic justification, share an externalist component in requiring, as a condition of knowledge, that the belief be true. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of nomic, counterfactual or similar ‘external’ relations between belief and truth. Closely allied is the nomic sufficiency account of knowledge, due primarily to Dretske (1971, 1981), A. I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that X’s belief that ‘p’ qualifies as knowledge just in case X believes ‘p’ because of reasons that would not obtain unless ‘p’ were true, or because of a process or method that would not yield belief in ‘p’ if ‘p’ were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not have come to believe this in the way it did, unless there were a telephone before it; thus, there is a counterfactual, reliable guarantor of the belief’s being true.
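The counterfactual condition at the core of this approach can be set out schematically. This is only a summary of the condition just stated (the box-arrow is the counterfactual conditional), not a full Nozick-style analysis with all its refinements:

```latex
% X knows that p just in case:
\begin{align*}
&\text{(i)}\quad p \text{ is true;}\\
&\text{(ii)}\quad X \text{ believes that } p;\\
&\text{(iii)}\quad \neg p \;\Box\!\!\rightarrow\; \neg(X \text{ believes that } p)
\end{align*}
% (iii) reads: if p were not true, the process or method X in fact
% used would not yield belief in p.
```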
On the relevant-alternatives version of the counterfactual approach, ‘X’ knows that ‘p’ only if there is no ‘relevant alternative’ situation in which ‘p’ is false but ‘X’ would still believe that ‘p’. One’s justification or evidence must be sufficient to eliminate all the alternatives to ‘p’, where an alternative to a proposition ‘p’ is a proposition incompatible with ‘p’. That is, one’s justification or evidence for ‘p’ must be sufficient for one to know that every alternative to ‘p’ is false. Sceptical arguments have exploited this element of our thinking about knowledge. These arguments call our attention to alternatives that our evidence cannot eliminate. The sceptic asks how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this nature that we cannot eliminate, and others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that the requirement that every alternative be eliminated is seldom, if ever, satisfied.
The distinction between the ‘in itself’ and the ‘for itself’ originated in the Kantian logical and epistemological distinction between a thing as it is in itself and that thing as an appearance, or as it is for us. For Kant, the thing in itself is the thing as it is intrinsically, that is, the character of the thing apart from any relations in which it happens to stand. The thing for us, or as an appearance, is the thing in so far as it stands in relation to our cognitive faculties and other objects. ‘Now a thing in itself cannot be known through mere relations; and we may therefore conclude that since outer sense gives us nothing but mere relations, this sense can contain in its representation only the relation of an object to the subject, and not the inner properties of the object in itself.’ Kant applies the same distinction to the subject’s cognition of itself. Since the subject can know itself only in so far as it can intuit itself, and it can intuit itself only in terms of temporal relations, and thus only as it is related to its own self, it represents itself ‘as it appears to itself, not as it is’. Thus, the distinction between what the subject is in itself and what it is for itself arises in Kant in so far as the distinction between what an object is in itself and what it is for a knower is applied to the subject’s own knowledge of itself.
Hegel (1770-1831) begins the transformation of the epistemological distinction between what the subject is in itself and what it is for itself into an ontological distinction. Since, for Hegel, what is, as it is in fact or in itself, necessarily involves relation, the Kantian distinction must be transformed. Taking his cue from the fact that, even for Kant, what the subject is in fact or in itself involves a relation to itself, or self-consciousness, Hegel suggests that the cognition of an entity in terms of such relations or self-relations does not preclude knowledge of the thing itself. Rather, what an entity is intrinsically, or in itself, is best understood in terms of the potentiality of that thing to enter into specific explicit relations with itself. And just as for consciousness to be explicitly itself is for it to be for itself by being in relation to itself, i.e., to be explicitly self-conscious, so the for-itself of any entity is that entity in so far as it is actually related to itself. The distinction between the entity in itself and the entity for itself is thus taken to apply to every entity, and not only to the subject. For example, the seed of a plant is that plant in itself or implicitly, while the mature plant, which involves actual relations among the plant’s various organs, is the plant ‘for itself’. In Hegel, then, the in-itself/for-itself distinction becomes universalized, in that it is applied to all entities, and not merely to conscious entities. In addition, the distinction takes on an ontological dimension. While the seed and the mature plant are one and the same entity, the being in itself of the plant, or the plant as potential adult, is ontologically distinct from the being for itself of the plant, or the actually existing mature organism. At the same time, the distinction retains an epistemological dimension in Hegel, although its import is quite different from that of the Kantian distinction.
To know a thing it is necessary to know both the actual, explicit self-relations which mark the thing (the being for itself of the thing) and the inherent simple principle of these relations (the being in itself of the thing). Real knowledge, for Hegel, thus consists in a knowledge of the thing as it is in and for itself.
Sartre’s distinction between being in itself and being for itself, which is an entirely ontological distinction with minimal epistemological import, is descended from the Hegelian distinction. Sartre distinguishes between what it is for consciousness to be, i.e., being for itself, and the being of the transcendent object which is intended by consciousness, i.e., being in itself. What it is for consciousness to be, being for itself, is marked by self-relation: Sartre posits a ‘pre-reflective cogito’, such that every consciousness of x necessarily involves a ‘non-positional’ consciousness of the consciousness of x. While in Kant the subject is both in itself, i.e., as it is apart from its relations, and for itself in so far as it is related to itself by appearing to itself, and in Hegel every entity can be considered as it is both in itself and for itself, in Sartre to be self-related, or for itself, is the distinctive ontological mark of consciousness, while to lack relations, or to be in itself, is the distinctive ontological mark of non-conscious entities. A related privileging appears in epistemology, where what is immediately presented to the mind, the ‘given’, has been treated as the basis of our ideas and the principal object of inquiry, the ‘be-all’ and ‘end-all’ of justifiable knowledge.
Despite an outward appearance of simplicity, confirmation theories raise deep problems: it has proved difficult to frame a rationally derivable theory of confirmation that both clarifies inference from evidence and supports it. One response is foundationalism, the view in epistemology that knowledge must be regarded as a structure raised upon secure, certain foundations. These are found in some combination of experience and reason, with different schools (‘empiricism’, ‘rationalism’) emphasizing the role of one over the other. The rival metaphor is that of a boat or fuselage, which has no foundation but owes its strength to the stability given by its interlocking parts.
Coherentism rejects the idea of a privileged class of basic beliefs grasped by apprehension alone. The ideas of ‘coherence’ and ‘holism’ instead take justification to be a matter of how beliefs fit together, and they suggest a response to ‘scepticism’ quite different from the foundationalist’s. Nonetheless, serious difficulties remain: coherence is hard to define precisely, and its contribution to knowledge, for example in insight or imaginative functions, itself stands in need of something serving as a reason or justification.
The problem of defining knowledge as true belief plus some favourable relation between the believer and the truth is one that theorists have taken with earnest deliberation. Believing is a firm conviction in the reality of something, and truth is squarely a matter of fact rather than fancy; the question is what must be added to true belief to yield knowledge. The debate began with Plato’s view in the Theaetetus that knowledge is true belief plus a logos.
Rationalism attributes knowledge to reason rather than to sense experience. The rationalist credits the mind with powers to sense, perceive, think, will, and especially to reason its way to truths, giving knowledge a guidable understanding through great intellectual powers. This conception began with the Eleatics and played a central role in Platonism. Its discernments travel far beneath the labyrinth of common sense, conveying ideas indirectly and offering theories for consideration. A suggestive contemporaneous development was the seventeenth-century belief that the paradigms of knowledge were the non-sensory intellectual intuition that God would have of the inner workings of all things, and the human being’s closest approach to it, acquaintance with mathematics.
The Continental rationalists, notably René Descartes, Gottfried Wilhelm Leibniz and Benedictus de Spinoza, are frequently contrasted with the British empiricists Locke, Berkeley and Hume, but such oppositions usually over-simplify a more complex picture; for example, it is worth noticing the extent to which Descartes approves of empirical enquiry, and the extent to which Locke shares the rationalist vision of real knowledge as a kind of intellectual intuition.
After Kant, the subsequent history of philosophy has tended to diminish the distinction between experience and thought, even to the point of denying the possibility of ‘a priori knowledge’, and rationalism depending on this category has also declined. However, the idea that the mind comes with pre-formed categories that determine the structure of our language and ways of thought has survived in the work of linguists influenced by Chomsky. The term ‘rationalism’ is also used more broadly for any anti-clerical, anti-authoritarian humanism; in this sense an empiricist such as David Hume (1711-76) counts as a rationalist.
A completely formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is the German philosopher, mathematician and polymath Gottfried Wilhelm Leibniz (1646-1716), who believed that a logically transparent language of science could resolve all disputes. In the twentieth century, a thoroughly formalized confirmation theory was a main goal of the ‘logical positivists’, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to the German logical positivist Rudolf Carnap (1891-1970), culminating in his "Logical Foundations of Probability" (1950). Carnap’s idea was that the required measure is a logical relation between evidence and hypothesis, fixed by the possibilities each leaves open.
On this approach, evidence confirms a theory to the extent that the theory would lead one to expect that evidence, and confirmation is a matter of degree, measured against an expressed or implied standard. Each piece of evidence contributes something less than the whole to which it belongs, and the degrees of confirmation are meant to combine into a cumulative total. The essential task of the theory is to state the conditions under which evidence significantly confirms a hypothesis.
Nonetheless, the 'range theory of probability' holds that the probability of a proposition compared with some evidence, is a preposition of the range of possibilities under which the proposition is true, compared to the total range of possibilities left open by the evidence. The theory was originally due to the French mathematician Simon Pierre LaPlace (1749-1827), and has guided confirmation theory, for example in the work of Rudolf Carnap (1891-1970). Whereby, the difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement. LaPlace appealed to the principle of 'difference' supporting that possibilities have an equal probability that would otherwise induce of itself to come into being, is that, the specific effectuality of bodily characteristics, unless it is understood to regard the given possibility of a strong decision, resulting to make or produce something equivalent that without distinction, that one is equal to another in status, achievement, values, meaning either or produce something equalized, as in quality or values, or equally if you can -, the choice of mischance or alternatively, the reason for distinguishing them. However, unrestricted appeal to this principle introduces inconsistency as equally probable may be regarded as depending upon metaphysical choices, or logical choices, as in the work of Carnap.
In any event, finding an objective source of authority for such a choice has proved awkward and contentious, and this points to a deeper difficulty in formalizing the ‘theory of confirmation’.
The theory demands that we can measure the ‘range’ of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone. Serious obstacles hamper this project. While evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proved to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiments. Confirmation was also susceptible to acute paradoxes.
The classical problem of 'induction' is often phrased as that of finding some reason to expect that nature is uniform: in "Fact, Fiction, and Forecast" (1954) Goodman showed that we need, in addition, some reason for preferring some uniformities to others, for without such a selection the uniformity of nature would be vacuous. Thus, suppose that all examined emeralds have been green. Uniformity would lead us to expect that future emeralds will be green as well. But now define the predicate 'grue': x is grue if and only if x is examined before time T and is green, or x is not examined before T and is blue. Let T refer to some time around the present. Then if newly examined emeralds are like previously examined ones in respect of being grue, they will be blue. We prefer greenness as a basis of prediction to grueness, but why? Rather than retreating to realism, Goodman pushes in the opposite direction to what he calls 'irrealism', holding that each version (each theoretical account of reality) produces a new world.
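Goodman's point can be mimicked in a short sketch: 'green' and 'grue' fit every examined emerald equally well, yet they diverge on unexamined ones. The data and the cutoff year are assumptions of mine for illustration only:

```python
T = 2020  # the cutoff time, "some time around the present"

def grue(color, examined_year):
    # x is grue iff x is examined before T and is green,
    # or x is not examined before T and is blue.
    if examined_year is not None and examined_year < T:
        return color == "green"
    return color == "blue"

# All emeralds examined so far (before T) have been green...
examined = [("green", 2001), ("green", 2010), ("green", 2019)]
assert all(color == "green" for color, _ in examined)
# ...and exactly the same emeralds are all grue: both hypotheses
# are equally confirmed by the data.
assert all(grue(color, year) for color, year in examined)

# But for an emerald not examined before T, 'all emeralds are grue'
# projects blueness, while 'all emeralds are green' projects greenness.
print(grue("blue", None))   # True
print(grue("green", None))  # False
```

Nothing in the examined data, nor in the logical form of the two hypotheses, decides between them; that is why the paradox obstructs purely syntactical theories of confirmation.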
The point is usually deployed to argue that ontological relativists get themselves into confusions: they want to assert the existence of a world while simultaneously denying that that world has any intrinsic properties. The ontological relativist wants to deny the meaningfulness of postulating intrinsic properties of the world. The realist can agree, but maintain a distinction between concepts, which are constructs, and the world of which they hold, which is not: concepts apply to a reality that is largely not a human construct, and reality is revealed through our use of concepts, not created by that use. The basic response of the relativist, however, is to ask whether we can use the concepts of mind and world with the pre-critical insouciance required to defend the realist position. The worry of the relativist is that we cannot. The most basic concepts used to set up our ontological investigations have complex histories and interrelationships with other concepts, and appealing to reality short-circuits the complexity of this web of relationships in order to fix the concepts. What remains clear is that the possibility of these 'bent' predicates puts a decisive obstacle in the face of purely logical and syntactical approaches to problems of 'confirmation'.
February 10, 2010