The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any forces is represented by a straight line in Minkowski space-time. In general relativity, using Riemannian space-time, the motion is represented by a line that is no longer straight (in the Euclidean sense) but is the line giving the shortest distance. Such a line is called a ‘geodesic’, and space-time is therefore said to be curved. The extent of this curvature is given by the metric tensor for space-time, the components of which are solutions to Einstein’s field equations. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
The predictions of general relativity differ from those of Newton’s theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift, which is a small redshift in the lines of a stellar spectrum caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). This shift can be explained in terms of either the special or the general theory of relativity. In the simplest terms, a quantum of energy hν has mass hν/c². On moving between two points with gravitational potential difference Φ, the work done is Φhν/c², so the change of frequency δν is Φν/c².
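As a rough numerical illustration of the last relation, the fractional Einstein shift δν/ν = Φ/c² can be estimated for light leaving the surface of a star. The short Python sketch below uses assumed standard values for G, the solar mass and radius, and c (none of them taken from the text) and yields a shift of about two parts in a million.

    # Sketch: gravitational (Einstein) shift delta_nu/nu = Phi/c^2 for sunlight (assumed values)
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30     # solar mass, kg
    R_sun = 6.96e8       # solar radius, m
    c = 2.998e8          # speed of light, m/s

    phi = G * M_sun / R_sun            # gravitational potential difference, J/kg
    fractional_shift = phi / c**2      # delta_nu / nu
    print(f"fractional redshift ~ {fractional_shift:.1e}")   # about 2e-6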
Bohr thus developed the theory of the atom, in particular of the simplest atom, that of hydrogen, consisting of a nucleus and one electron. It was assumed that there could be a ground state in which an isolated atom would remain permanently, and short-lived states of higher energy to which the atom could be excited by collisions or absorption of radiation. It was supposed that radiation was emitted or absorbed in quanta of energy equal to integral multiples of hν, where h is the Planck constant and ν is the frequency of the electromagnetic waves. (Later it was realized that a single quantum has the unique value hν.) The energy of radiation emitted on capturing a free electron into the nth state (where n = 1 for the ground state) was supposed to be nh/2 times the rotational frequency of the electron in a circular orbit. This idea led to, and was replaced by, the concept that the angular momentum of the orbit is quantized in terms of h/2π. The energy of the nth state was found to be given by:
En = −me⁴/8ε₀²h²n²,
where ‘m’ is the reduced mass of the electron. This formula gave excellent agreement with the then known series of lines in the visible and infrared regions of the spectrum of atomic hydrogen and predicted a series in the ultraviolet that was soon to be found by Lyman.
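As a hedged sketch of how the formula is used, the Python lines below evaluate En with assumed standard SI constants (the electron mass is used in place of the reduced mass) and list the first few Lyman-series wavelengths, λ = hc/(En − E1), which come out near 121 nm and below, in the ultraviolet.

    # Sketch: Bohr energies E_n = -m e^4 / (8 eps0^2 h^2 n^2) and the Lyman series
    m, e = 9.109e-31, 1.602e-19        # electron mass (kg) and charge (C), assumed values
    h, eps0 = 6.626e-34, 8.854e-12     # Planck constant (J s), vacuum permittivity (F/m)
    c = 2.998e8                        # speed of light, m/s

    def E(n):
        """Bohr energy of the nth state in joules (negative for bound states)."""
        return -m * e**4 / (8 * eps0**2 * h**2 * n**2)

    for n in range(2, 6):                             # transitions n -> 1 (Lyman series)
        wavelength = h * c / (E(n) - E(1))            # lambda = h c / (energy difference)
        print(n, f"{wavelength * 1e9:.1f} nm")        # n = 2 gives about 121.5 nm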
The extension of the theory to more complicated atoms had success but raised innumerable difficulties, which were only resolved by the development of wave mechanics.
An allowed wave function of an electron in an atom is obtained by a solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is −e²/r, where ‘e’ is the electron charge and ‘r’ its distance from the nucleus. A precise orbit cannot be considered as in Bohr’s theory of the atom, but the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|²dτ is the probability of locating the electron in the element of volume dτ.
Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the earlier quantum theory of the atom:
n, the principal quantum number, can have values 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called shells and designated the K, L, M shells, etc. l, the azimuthal quantum number, for a given value of n can have values 0, 1, 2, . . . (n − 1). Thus when n = 1, l can only have the value 0. An electron in the L shell of an atom, with n = 2, can occupy two subshells of different energy corresponding to l = 0 and l = 1. Similarly the M shell (n = 3) has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:
√[l(l + 1)] (h/2π)
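The magnitude of this angular momentum for the s, p, d, and f orbitals can be evaluated directly; the brief sketch below assumes the standard value of h, which is not quoted in the text.

    # Sketch: orbital angular momentum sqrt(l(l+1)) * h/(2*pi) for l = 0, 1, 2, 3
    import math
    h = 6.626e-34                                    # Planck constant, J s (assumed value)
    for l, name in zip(range(4), "spdf"):
        L = math.sqrt(l * (l + 1)) * h / (2 * math.pi)
        print(f"{name} orbital (l = {l}): |L| = {L:.2e} J s")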
The Bohr theory of the atom (1913) introduced the concept that an electron in an atom is normally in a state of lowest energy (ground state) in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with a particle the atom may be excited - that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electron states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld 1915) and electron spin (Pauli 1925), but a satisfactory theory only became possible upon the development of wave mechanics after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.
An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves. Other information may be obtained from magnetism and from chemical properties.
Properties of elementary particles are also described by quantum numbers. For example, an electron has the property known as ‘spin’, and can exist in two possible energy states depending on whether this spin is set parallel or antiparallel to a certain direction. The two states are conveniently characterized by the quantum numbers +½ and −½. Similarly, properties such as charge, isospin, strangeness, parity, and hypercharge are characterized by quantum numbers. In interactions between particles, a particular quantum number may be conserved, i.e., the sum of the quantum numbers of the particles before and after the interaction remains the same. It is the type of interaction (strong, electromagnetic, or weak) that determines whether a particular quantum number is conserved.
Bohr discovered that by using Planck’s constant in combination with the known mass and charge of the electron, the approximate size of the hydrogen atom could be derived. Assuming that a jumping electron absorbs or emits energy in units of Planck’s constant, in accordance with the formula Einstein used to explain the photoelectric effect, Bohr was able to find correlations with the known spectral lines of hydrogen. More important, the model also served to explain why the electron does not, as electromagnetic theory says it should, quickly radiate its energy away and collapse into the nucleus.
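A minimal sketch of the kind of estimate Bohr made is given below: combining assumed standard values of Planck's constant with the electron's mass and charge gives the Bohr radius, a0 = ε₀h²/πme², of about 5 × 10⁻¹¹ m, the approximate size of the hydrogen atom.

    # Sketch: the Bohr radius from h, m, e (assumed SI values, not taken from the text)
    import math
    h, m, e, eps0 = 6.626e-34, 9.109e-31, 1.602e-19, 8.854e-12
    a0 = eps0 * h**2 / (math.pi * m * e**2)
    print(f"Bohr radius ~ {a0:.2e} m")               # about 5.3e-11 m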
Bohr reasoned that this does not occur because the orbits are quantized: electrons absorb and emit energy corresponding to the specific orbits. Their lowest energy state, or lowest orbit, is the ground state. What is notable, however, is that Bohr, although obliged to use macro-level analogies and classical theory, quickly and easily posits a view of the functional dynamic of the energy shells of the electron that has no macro-level analogy and is inexplicable within the framework of classical theory.
The central problem with Bohr’s model from the perspective of classical theory was pointed out by Rutherford shortly before the first of the papers describing the model was published. “There appears to me,” Rutherford wrote in a letter to Bohr, “one grave problem in your hypothesis that I have no doubt you fully realize, namely, how does an electron decide what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.” Viewing the electron as atomic in the Greek sense, or as a point-like object that moves, there is cause to wonder, in the absence of a mechanistic explanation, how this object instantaneously ‘jumps’ from one shell or orbit to another. It was essentially efforts to answer this question that led to the development of quantum theory.
The effect of Bohr’s model was to raise more questions than it answered. Although the model suggested that we can explain the periodic table of the elements by assuming that a maximum number of electrons are found in each shell, Bohr was not able to provide any mathematically acceptable explanation for the hypothesis. That explanation was provided in 1925 by Wolfgang Pauli, known throughout his career for his extraordinary talents as a mathematician.
Bohr had used three quantum numbers in his model: Planck’s constant, mass, and charge. Pauli added a fourth, described as spin, which was initially represented with the macro-level analogy of a spinning ball on a pool table. Rather predictably, the analogy does not work. Whereas a classical spin can point in any direction, a quantum mechanical spin points either up or down along the axis of measurement. In total contrast to the classical notion of a spinning ball, we cannot even speak of the spin of the particle if no axis is measured.
When Pauli added this fourth quantum number, he found a correspondence between the number of electrons in each full shell of atoms and the new set of quantum numbers describing the shell. This became the basis for what we now call the ‘Pauli exclusion principle’. The principle is simple and yet quite startling: two electrons cannot have all their quantum numbers the same, and no two actual electrons are identical in the sense of having the same set of quantum numbers. The exclusion principle explains mathematically why there is a maximum number of electrons in the shell of any given atom. If the shell is full, adding another electron would be impossible because this would result in two electrons in the shell having the same quantum numbers.
This may sound a bit esoteric, but the fact that nature obeys the exclusion principle is quite fortunate from our point of view. If electrons did not obey the principle, all elements would exist at the ground state and there would be no chemical affinity between them. Structures like crystals and DNA would not exist. It is the exclusion principle that allows for chemical bonds, which, in turn, result in the hierarchy of structures from atoms and molecules to cells, plants, and animals.
The energy associated with a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is incorrect because: (i) the energy of a given state may be changed by externally applied fields, and (ii) there may be a number of states of equal energy in the system.
The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy for a given state is exactly determinate except for the effects of the ‘uncertainty principle’. The ground state with lowest energy has an infinite lifetime; hence its energy is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies by calculation. Wave mechanics, due to de Broglie and extended by Schrödinger, Dirac, and many others, originated in the suggestion that light consists of corpuscles as well as of waves and the consequent suggestion that all elementary particles are associated with waves. Wave mechanics is based on the Schrödinger wave equation describing the wave properties of matter. It relates the energy of a system to a wave function; usually it is found that a system, such as an atom or molecule, can only have certain allowed wave functions (eigenfunctions) and certain allowed energies (eigenvalues). In wave mechanics the quantum conditions arise in a natural way from the basic postulates as solutions of the wave equation. The energies of unbound states of positive energy form a continuum. This gives rise to the continuum background to an atomic spectrum as electrons are captured from unbound states. The energy of an atomic state can be changed by the ‘Stark effect’ or the ‘Zeeman effect’.
The vibrational energies of molecules also have discrete values; for example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero: the atoms repel when closer and attract when further apart. The restoring force is very nearly proportional to the displacement; hence the oscillations are simple harmonic. Solution of the Schrödinger wave equation gives the energies of a harmonic oscillator as:
En = (n + ½)hƒ,
where h is the Planck constant, ƒ is the frequency, and n is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ; this is the origin of zero-point energy. The potential energy of interaction of the atoms is described more exactly by the ‘Morse equation’, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.
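The ladder of harmonic levels is easy to tabulate. The sketch below assumes a vibrational frequency roughly that of carbon monoxide (a value not given in the text) and prints the first few levels, the n = 0 entry being the zero-point energy ½hƒ.

    # Sketch: vibrational levels E_n = (n + 1/2) h f for a diatomic molecule (assumed f)
    h = 6.626e-34            # Planck constant, J s
    f = 6.4e13               # vibrational frequency, Hz (assumed, roughly CO)
    for n in range(4):
        E = (n + 0.5) * h * f
        print(f"n = {n}: E = {E:.2e} J")             # n = 0 gives the zero-point energy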
The rotational energy of a molecule is also quantized; according to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:
EJ = h²J(J + 1)/8π²I,
where J is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from band spectra.
The energies of the states of the nucleus are determined from the gamma-ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer effect’ has permitted the observation of some minute changes.
Quantum theory, introduced by Max Planck (1858-1947) in 1900, was the first serious scientific departure from Newtonian mechanics. It involved supposing that certain physical quantities can only assume discrete values. In the following two decades it was applied successfully by Einstein and the Danish physicist Niels Bohr (1885-1962). It was superseded by quantum mechanics in the years following 1924, when the French physicist Louis de Broglie (1892-1987) introduced the idea that a particle may also be regarded as a wave. De Broglie waves are a set of waves that represent the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice). The wavelength is given by the de Broglie equation. They are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. These waves were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. The Schrödinger wave equation relates the energy of a system to a wave function; the square of the amplitude of the wave is proportional to the probability of a particle being found in a specific position. The wave function expresses the impossibility of defining both the position and the momentum of a particle exactly; this is the ‘uncertainty principle’. The allowed wave functions describe the stationary states of a system.
Part of the difficulty with the notions involved is that a system may be in an indeterminate state at a given time, characterized only by the probability of some result for an observation, but then ‘become’ determinate (the collapse of the wave packet) when an observation is made. The indeterminacy of quantities such as the position and momentum of a particle is taken to apply to reality itself, rather than to mere indeterminacies of measurement. It is as if there is nothing but a potential for observation or a probability wave before an observation is made, but when an observation is made the wave becomes a particle. The wave-particle duality seems to block any way of conceiving of physical reality in quantum terms. In the famous two-slit experiment, an electron is fired at a screen with two slits, like a tennis ball thrown at a wall with two doors in it. If one puts detectors at each slit, every electron passing the screen is observed to go through exactly one slit. When the detectors are taken away, the electron acts like a wave process going through both slits and interfering with itself. A particle such as an electron is usually thought of as always having an exact position, but its wave is not absolutely zero anywhere; there is therefore a finite probability of it ‘tunnelling through’ from one position to emerge at another.
The unquestionable success of quantum mechanics has generated a large philosophical debate about its ultimate intelligibility and its metaphysical implications. The wave-particle duality is already a departure from ordinary ways of conceiving of things in space, and its difficulty is compounded by the probabilistic nature of the fundamental states of a system as they are conceived in quantum mechanics. Philosophical options for interpreting quantum mechanics have included variations of the belief that it is at best an incomplete description of a better-behaved classical underlying reality (Einstein), the Copenhagen interpretation according to which there are no objective unobserved events in the micro-world (Bohr and W. K. Heisenberg, 1901-76), an ‘acausal’ view of the collapse of the wave packet (J. von Neumann, 1903-57), and a ‘many worlds’ interpretation in which time forks perpetually toward innumerable futures, so that different states of the same system exist in different parallel universes (H. Everett).
In recent years the proliferation of subatomic particles (there are 36 kinds of quarks alone, in six flavours) has led physicists to look in various directions for unification. One avenue of approach is superstring theory, in which the four-dimensional world is thought of as the upshot of the collapse of a ten-dimensional world, with the four primary physical forces (gravity, electromagnetism, and the strong and weak nuclear forces) coming to be seen as the result of the fracture of one primary force. While the scientific acceptability of such theories is a matter for physics, their ultimate intelligibility plainly requires some philosophical reflection.
Quantum gravity, a theory of gravitation consistent with quantum mechanics, is a subject still in its infancy: there is as yet no completely satisfactory theory. In conventional quantum gravity, the gravitational force is mediated by a massless spin-2 particle, called the ‘graviton’. The internal degrees of freedom of the graviton are described by a field hij(χ) that represents the deviation of the metric tensor from that of flat space. This formulation of general relativity reduces it to a quantum field theory, which has a regrettable tendency to produce infinities for measurable quantities. However, unlike other quantum field theories, quantum gravity cannot appeal to renormalization procedures to make sense of these infinities. It has been shown that renormalization procedures fail for theories, such as quantum gravity, in which the coupling constants have the dimensions of a positive power of length. The coupling constant for general relativity is the Planck length,
Lp = (Gh/c³)½ ≈ 10⁻³⁵ m.
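Evaluating this expression with assumed standard values of G, h, and c (using h as written above rather than ħ) gives a length of a few times 10⁻³⁵ m, as the sketch below shows.

    # Sketch: the Planck length Lp = (G h / c^3)^(1/2) with assumed SI constants
    G, h, c = 6.674e-11, 6.626e-34, 2.998e8
    Lp = (G * h / c**3) ** 0.5
    print(f"Planck length ~ {Lp:.1e} m")             # about 4e-35 m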
Supersymmetry has been suggested as a structure that could be free from these pathological infinities. Many theorists believe that an effective superstring field theory may emerge, in which the Einstein field equations are no longer valid and general relativity is required to appear only as a low-energy limit. The resulting theory may be structurally different from anything that has been considered so far. Supersymmetric string theory (or superstring theory) is an extension of the ideas of supersymmetry to one-dimensional string-like entities that can interact with each other and scatter according to a precise set of laws. The normal modes of superstrings represent an infinite set of ‘normal’ elementary particles whose masses and spins are related in a special way. Thus, the graviton is only one of the string modes: when the string-scattering processes are analysed in terms of their particle content, the low-energy graviton scattering is found to be the same as that computed from supersymmetric gravity. The graviton mode may still be related to the geometry of the space-time in which the string vibrates, but it remains to be seen whether the other, massive, members of the set of ‘normal’ particles also have a geometrical interpretation. The intricacy of this theory stems from the requirement of a space-time of at least ten dimensions to ensure internal consistency. It has been suggested that there are the normal four dimensions, with the extra dimensions being tightly ‘curled up’ in a small circle, presumably of Planck-length size.
In quantum theory or quantum mechanics, a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers; it is one of the various quantum states that the atom can assume. The conceptual representation of the atom was first introduced by the ancient Greeks, as a tiny indivisible component of matter, was developed by Dalton as the smallest part of an element that can take part in a chemical reaction, and was made very much more precise by theory and experiment in the late 19th and 20th centuries.
Following the discovery of the electron (1897), it was recognized that atoms have structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of an atom is concentrated at its centre in a region of positive charge, the nucleus, with a radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously and consequently no permanent atom would be possible. This problem was solved by the development of the quantum theory.
The ‘Bohr theory of the atom’, 1913, introduced the concept that an electron in an atom is normally in a state of lowest energy, or ground state, in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited - that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld 1915) and electron spin (Pauli 1925), but a satisfactory theory only became possible upon the development of ‘wave mechanics’ after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.
The Pauli exclusion principle states that no two identical ‘fermions’ in any system can be in the same quantum state, that is, have the same set of quantum numbers. The principle was first proposed (1925) in the form that not more than two electrons in an atom could have the same set of quantum numbers. This hypothesis accounted for the main features of the structure of the atom and for the periodic table. An electron in an atom is characterized by four quantum numbers, n, l, m, and s. A particular atomic orbital, which has fixed values of n, l, and m, can thus contain a maximum of two electrons, since the spin quantum number s can only be +½ or −½. In 1928 Sommerfeld applied the principle to the free electrons in solids and his theory has been greatly developed by later associates.
Additionally, the Zeeman effect occurs when atoms emit or absorb radiation in the presence of a moderately strong magnetic field. Each spectral line is split into closely spaced polarized components: when the source is viewed at right angles to the field there are three components, the middle one having the same frequency as the unmodified line, and when the source is viewed parallel to the field there are two components, the undisplaced line being absent. This is the ‘normal’ Zeeman effect. With most spectral lines, however, the anomalous Zeeman effect occurs, where there are a greater number of symmetrically arranged polarized components. In both effects the displacement of the components is a measure of the magnetic field strength. In some cases the components cannot be resolved and the spectral line appears broadened.
The Zeeman effect occurs because the energies of individual electron states depend on their inclination to the direction of the magnetic field, and because quantum energy requirements impose conditions such that the plane of an electron orbit can only set itself at certain definite angles to the applied field. These angles are such that the projection of the total angular momentum on the field direction is an integral multiple of h/2π (h is the Planck constant). The Zeeman effect is observed with moderately strong fields, where the precession of the orbital angular momentum and the spin angular momentum of the electrons about each other is much faster than the total precession around the field direction. The normal Zeeman effect is observed when the conditions are such that the Landé factor is unity; otherwise the anomalous effect is found. This anomaly was one of the factors contributing to the discovery of electron spin.
Fermi-Dirac statistics are concerned with the equilibrium distribution of elementary particles of a particular type among the various quantized energy states, on the assumption that the particles are indistinguishable. The Pauli exclusion principle is obeyed, so that no two identical ‘fermions’ can be in the same quantum mechanical state. The exchange of two identical fermions, e.g., two electrons, does not affect the probability distribution, but it does involve a change in the sign of the wave function. The ‘Fermi-Dirac distribution law’ gives n(E), the average number of identical fermions in a state of energy E:
n(E) = 1/[e^(α + E/kT) + 1],
where k is the Boltzmann constant, T is the thermodynamic temperature, and α is a quantity depending on the temperature and the concentration of particles. For the valence electrons in a solid, α takes the form −EF/kT, where EF is the Fermi level. At the Fermi level (or Fermi energy) EF the value of n(E) is exactly one half; thus, for a system in equilibrium, one half of the states with energy very nearly equal to EF (if any) will be occupied. The value of EF varies very slowly with temperature, tending to E0 as T tends to absolute zero.
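A minimal sketch of the distribution, written with α = −EF/kT so that n(E) = 1/[e^((E − EF)/kT) + 1], is given below; the Fermi level and temperature are assumed values, and the occupation comes out exactly one half at E = EF.

    # Sketch: Fermi-Dirac occupation with assumed EF and T
    import math
    k = 1.381e-23                        # Boltzmann constant, J/K
    EF = 5.0 * 1.602e-19                 # assumed Fermi level, 5 eV in joules
    T = 300.0                            # assumed temperature, K

    def n_FD(E):
        return 1.0 / (math.exp((E - EF) / (k * T)) + 1.0)

    print(n_FD(EF))                                       # exactly 0.5 at the Fermi level
    print(n_FD(EF - 0.2 * 1.602e-19))                     # nearly 1 just below EF
    print(n_FD(EF + 0.2 * 1.602e-19))                     # nearly 0 just above EF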
In Bose-Einstein statistics, the Pauli exclusion principle is not obeyed, so that any number of identical ‘bosons’ can be in the same state. The exchange of two bosons of the same type affects neither the probability distribution nor the sign of the wave function. The ‘Bose-Einstein distribution law’ gives n(E), the average number of identical bosons in a state of energy E:
n(E) = 1/[e^(α + E/kT) − 1].
The formula can be applied to photons, considered as quasi-particles, provided that the quantity α, which is connected with the conservation of particle number, is set equal to zero (the number of photons is not conserved). Planck’s formula for the energy distribution of ‘black-body radiation’ was derived from this law by Bose. At high temperatures and low concentrations both quantum distribution laws tend to the classical distribution:
n(E) = Ae^(−E/kT).
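The convergence of the two quantum laws on the classical form can be checked numerically. In the sketch below, working in units where kT = 1 and taking a large assumed value of α to represent low concentration, the Fermi-Dirac, Bose-Einstein, and classical occupations nearly coincide.

    # Sketch: both quantum distributions tend to A e^(-E/kT) when exp(alpha + E/kT) >> 1
    import math
    alpha = 5.0                                  # assumed large alpha (low concentration)
    for E in (0.0, 1.0, 2.0):                    # energies in units of kT
        x = math.exp(alpha + E)
        fermi = 1.0 / (x + 1.0)
        bose = 1.0 / (x - 1.0)
        classical = math.exp(-alpha) * math.exp(-E)      # A e^(-E/kT) with A = e^(-alpha)
        print(E, fermi, bose, classical)                 # the three values nearly coincide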
Additionally, paramagnetism is the property of substances that have a positive magnetic ‘susceptibility’, the quantity μr − 1, where μr is the relative permeability (the analogous electric quantity is εr − 1, where εr is the relative permittivity, and both are positive). It is caused by the spins of electrons: paramagnetic substances have molecules or atoms in which there are unpaired electrons and thus a resultant magnetic moment. There is also a contribution to the magnetic properties from the orbital motion of the electrons. The relative permeability of a paramagnetic substance is thus greater than that of a vacuum, i.e., it is greater than unity.
A paramagnetic substance is regarded as an assembly of magnetic dipoles with random orientation. In the presence of a field the magnetization is determined by competition between the effect of the field, which tends to align the magnetic dipoles, and the random thermal agitation. In small fields and at high temperatures the magnetization produced is proportional to the field strength, whereas at low temperatures or high field strengths a state of saturation is approached. As the temperature rises, the susceptibility falls according to Curie’s law or the Curie-Weiss law.
According to Curie’s law, the susceptibility χ of a paramagnetic substance is inversely proportional to the thermodynamic temperature T: χ = C/T. The constant C is called the Curie constant and is characteristic of the material. This law is explained by assuming that each molecule has an independent magnetic dipole moment and that the tendency of the applied field to align these molecules is opposed by their random thermal motion. A modification of Curie’s law, followed by many paramagnetic substances, is the Curie-Weiss law:
χ = C/(T ‒ θ ).
The law shows that the susceptibility is inversely proportional to the excess of the temperature over a fixed temperature θ; θ is known as the Weiss constant and is a temperature characteristic of the material. Certain metals, such as sodium and potassium, also exhibit a type of paramagnetism resulting from the magnetic moments of free, or nearly free, electrons in their conduction bands. This is characterized by a very small positive susceptibility and a very slight temperature dependence, and is known as ‘free-electron paramagnetism’ or ‘Pauli paramagnetism’.
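The contrast between the Curie and Curie-Weiss forms is easily tabulated; the Curie constant and Weiss constant used in the sketch below are arbitrary assumed values.

    # Sketch: Curie law chi = C/T versus Curie-Weiss law chi = C/(T - theta), assumed C and theta
    def chi_curie(T, C=1.0):
        return C / T

    def chi_curie_weiss(T, C=1.0, theta=100.0):          # theta in kelvin (assumed)
        return C / (T - theta)

    for T in (150.0, 300.0, 600.0, 1200.0):
        print(T, chi_curie(T), chi_curie_weiss(T))       # both susceptibilities fall as T rises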
Ferromagnetism is a property of certain solid substances that have a large positive magnetic susceptibility and are capable of being magnetized by weak magnetic fields. The chief ferromagnetic elements are iron, cobalt, and nickel, and many ferromagnetic alloys based on these metals also exist. Ferromagnetic materials exhibit magnetic hysteresis, a lag in the change of an observed effect in response to a change in the mechanism producing the effect: the magnetic flux through the medium depends not only on the existing magnetizing field but also on the previous state or states of the substance. The existence of this phenomenon necessitates a dissipation of energy when the substance is subjected to a cycle of magnetic changes; this is known as the magnetic hysteresis loss. The magnetic hysteresis loop is the curve obtained by plotting the magnetic flux density B of a ferromagnetic material against the corresponding value of the magnetizing field H; its area is proportional to the hysteresis loss per unit volume in taking the specimen through the prescribed magnetizing cycle. The general form of the hysteresis loop is that obtained for a symmetrical cycle between H and −H.
The magnetic hysteresis loss is the dissipation of energy due to magnetic hysteresis when the magnetic material is subjected to changes, particularly cyclic changes, of magnetization. Ferromagnetic materials are able to retain a certain amount of magnetization when the magnetizing field is removed. Those materials that retain a high percentage of their magnetization are said to be hard, and those that lose most of their magnetization are said to be soft; typical examples of hard ferromagnetic materials are cobalt steel and various alloys of nickel, aluminium, and cobalt, while typical soft magnetic materials are silicon steel and soft iron. The coercive force is the reversed magnetic field required to reduce the magnetic flux density in a substance from its remanent value to zero. Ferromagnetism is characteristic of these materials and is explained by the presence of domains. A ferromagnetic domain is a region of crystalline matter, whose volume may be 10⁻¹² to 10⁻⁸ m³, which contains atoms whose magnetic moments are aligned in the same direction. The domain is thus magnetically saturated and behaves like a magnet with its own magnetic axis and moment. The magnetic moment of the ferromagnetic atom results from the spin of an electron in an unfilled inner shell of the atom. The formation of a domain depends upon the strong interaction forces (exchange forces) that are effective in a crystal lattice containing ferromagnetic atoms.
In an unmagnetized volume of a specimen, the domains are arranged in a random fashion with their magnetic axes pointing in all directions, so that the specimen has no resultant magnetic moment. Under the influence of a weak magnetic field, those domains whose magnetic axes have directions near to that of the field grow at the expense of their neighbours. In this process the atoms of neighbouring domains tend to align in the direction of the field, but the strong influence of the growing domain causes their axes to align parallel to its magnetic axis. The growth of these domains leads to a resultant magnetic moment and hence magnetization of the specimen in the direction of the field. With increasing field strength, the growth of domains proceeds until there is, effectively, only one domain whose magnetic axis approximates to the field direction. The specimen now exhibits strong magnetization. Further increases in field strength cause the final alignment and magnetic saturation in the field direction. This explains the characteristic variation of magnetization with applied field strength. The presence of domains in ferromagnetic materials can be demonstrated by the use of ‘Bitter patterns’ or by the ‘Barkhausen effect’: the magnetization of a ferromagnetic substance does not increase or decrease steadily with steady increase or decrease of the magnetizing field but proceeds in a series of minute jumps. The effect gives support to the domain theory of ferromagnetism.
For ferromagnetic solids there is a change from ferromagnetic to paramagnetic behaviour above a particular temperature, and the material then obeys the Curie-Weiss law above this temperature; this is the ‘Curie temperature’ for the material. Below this temperature the law is not obeyed. Some paramagnetic substances obey the law above a temperature θC and do not obey it below, but are not ferromagnetic below this temperature. The value θ in the Curie-Weiss law can be thought of as a correction to Curie’s law reflecting the extent to which the magnetic dipoles interact with each other. In materials exhibiting ‘antiferromagnetism’ the temperature θ corresponds to the ‘Néel temperature’.
Antiferromagnetism is the property of certain materials that have a low positive magnetic susceptibility, as in paramagnetism, but exhibit a temperature dependence similar to that encountered in ferromagnetism. The susceptibility increases with temperature up to a certain point, called the ‘Néel temperature’, and then falls with increasing temperature in accordance with the Curie-Weiss law. The material thus becomes paramagnetic above the Néel temperature, which is analogous to the Curie temperature in the transition from ferromagnetism to paramagnetism. Antiferromagnetism is a property of certain inorganic compounds such as MnO, FeO, FeF2, and MnS. It results from interactions between neighbouring atoms leading to an antiparallel arrangement of adjacent magnetic dipole moments. A dipole, it should be mentioned, is a system of two equal and opposite charges placed a very short distance apart; the product of either of the charges and the distance between them is known as the electric dipole moment. A small loop carrying a current I behaves as a magnetic dipole of moment IA, where A is the area of the loop.
The energy associated with a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is incorrect because: (1) the energy of a given state may be changed by externally applied fields, and (2) there may be a number of states of equal energy in the system.
The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy for a given state is exactly determinate except for the effects of the ‘uncertainty principle’. The ground state with lowest energy has an infinite lifetime; hence its energy is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies by calculation. An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. Perturbation theory is an approximate method of solving a difficult problem when the equations to be solved depart only slightly from those of some problem already solved. For example, the orbit of a single planet round the sun is an ellipse; the perturbing effect of other planets modifies the orbit slightly in a way calculable by this method. The technique finds considerable application in ‘wave mechanics’ and in ‘quantum electrodynamics’. Phenomena that are not amenable to solution by perturbation theory are said to be non-perturbative.
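As a hedged numerical illustration of first-order perturbation theory (not a worked example given in the text itself), the shift of an energy level is approximately the average of the perturbing potential over the unperturbed state, ⟨ψ|V|ψ⟩. The sketch below evaluates this integral for the ground state of a particle in a box of length L with an assumed small perturbation V(x) = λx, and recovers the analytic answer λL/2.

    # Sketch: first-order energy shift <psi|V|psi> for a particle in a box, V(x) = lam * x
    import math
    L, lam, N = 1.0, 0.01, 10000                         # box length, strength, grid points (assumed)
    dx = L / N
    shift = 0.0
    for i in range(N):
        x = (i + 0.5) * dx
        psi = math.sqrt(2.0 / L) * math.sin(math.pi * x / L)   # normalized ground-state wave function
        shift += psi * (lam * x) * psi * dx                    # integrand psi * V * psi
    print(shift)                                               # ~ lam * L / 2 = 0.005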
The energies of unbound states of positive total energy form a continuum. This gives rise to the continuous background to an atomic spectrum, as electrons are captured from unbound states. The energy of an atomic state can be changed by the ‘Stark effect’ or the ‘Zeeman effect’.
The vibrational energies of molecules also have discrete values; for example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero, and the atoms repel when closer and attract when further apart. The restoring force is very nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the ‘Schrödinger wave equation’ gives the energies of a harmonic oscillator as:
En = ( n + ½ ) hƒ
Where ‘h’ is the Planck constant, ƒ is the frequency, and ‘n’ is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ. This is the cause of zero-point energy. The potential energy of interaction of atoms is described more exactly by the Morse equation, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.
The rotational energy of a molecule is also quantized; according to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:
EJ = h²J(J + 1)/8π²I,
Where ‘J’ is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from ‘band spectra’.
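The level spacing implied by this formula can be tabulated directly; the moment of inertia assumed below is roughly that of carbon monoxide and is not a value given in the text.

    # Sketch: rotational levels E_J = h^2 J(J+1) / (8 pi^2 I) for a rigid diatomic rotor
    import math
    h = 6.626e-34            # Planck constant, J s
    I = 1.45e-46             # moment of inertia, kg m^2 (assumed, roughly CO)
    for J in range(4):
        E = h**2 * J * (J + 1) / (8 * math.pi**2 * I)
        print(f"J = {J}: E = {E:.2e} J")                 # spacings grow with J, as in band spectra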
The energies of the states of the ‘nucleus’ can be determined from the gamma ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons in atoms because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer Effect’ has permitted the observation of some minute changes.
When X-rays are scattered by atomic centres arranged at regular intervals, interference phenomena occur, crystals providing gratings of suitably small interval. The interference effects may be used to provide a spectrum of the beam of X-rays, since, according to ‘Bragg’s law’, the angle of reflection of X-rays from a crystal depends on the wavelength of the rays. For lower-energy X-rays mechanically ruled gratings can be used. Each chemical element emits characteristic X-rays in sharply defined groups in widely separated regions; they are known as the K, L, M, N, etc., series, and the lines of any series move toward shorter wavelengths as the atomic number of the element concerned increases. If a parallel beam of X-rays of wavelength λ strikes a set of crystal planes, it is reflected from the different planes, interference occurring between X-rays reflected from adjacent planes. Bragg’s law states that constructive interference takes place when the difference in path lengths, BAC, is equal to an integral number of wavelengths:
2d sin θ = nλ,
in which n is an integer, d is the interplanar distance, and θ is the angle between the incident X-ray and the crystal plane. This angle is called the ‘Bragg angle’, and a bright spot will be obtained on an interference pattern at this angle. A dark spot will be obtained, however, if 2d sin θ = mλ, where m is half-integral. The structure of a crystal can be determined from the set of interference patterns found at various angles from the different crystal faces.
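A minimal sketch of Bragg's law in use: for an assumed interplanar spacing (roughly that of rock salt) and an assumed X-ray wavelength, the Bragg angles of the successive orders follow from sin θ = nλ/2d.

    # Sketch: Bragg angles from 2 d sin(theta) = n lambda, with assumed d and lambda
    import math
    d = 2.82e-10             # interplanar distance, m (assumed)
    lam = 1.54e-10           # X-ray wavelength, m (assumed)
    n = 1
    while n * lam / (2 * d) <= 1.0:                      # reflection possible only if sin(theta) <= 1
        theta = math.degrees(math.asin(n * lam / (2 * d)))
        print(f"n = {n}: theta = {theta:.1f} degrees")
        n += 1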
The concept of the atom was originally introduced by the ancient Greeks, as a tiny indivisible component of matter, was developed by Dalton as the smallest part of an element that can take part in a chemical reaction, and was refined by experiment in the late 19th and early 20th centuries. Following the discovery of the electron (1897), it was recognized that atoms have structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of the atom is concentrated at its centre in a region of positive charge, the nucleus, of radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously and consequently no permanent atom would be possible. This problem was solved by the development of the ‘quantum theory’.
The ‘Bohr theory of the atom’ (1913) introduced the notion that an electron in an atom is normally in a state of lowest energy (ground state) in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited - that is, an electron is moved into a state of higher energy. Such excited states usually have short life spans (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more ‘quanta’ of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld 1915) and electron spin (Pauli 1925), but a satisfactory theory only became possible upon the development of ‘wave mechanics’ after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of the wave equation. This determines the ‘probability’ that the electron may be found in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the ‘Pauli exclusion principle’, not more than one electron can be in a given state.
An exact calculation of the energies and other properties of the quantum states is possible only for the simplest atoms, but various approximate methods give useful results, i.e., approximate methods of solving a difficult problem when the equations to be solved depart only slightly from those of some problem already solved. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves, as exemplified by the Lamb shift, a small difference in energy between the 2S½ and 2P½ states of hydrogen. These levels would have the same energy according to the wave mechanics of Dirac; the actual shift can be explained by a correction to the energies based on the theory of the interaction of electromagnetic fields with matter, in which the fields themselves are quantized. Other information may be obtained from magnetism and from chemical properties.
The appearance potential is: (1) the potential difference through which an electron must be accelerated from rest to produce a given ion from its parent atom or molecule; or (2) this potential difference multiplied by the electron charge, giving the least energy required to produce the ion. A simple ionizing process gives the ‘ionization potential’ of the substance, for example:
Ar + e⁻ ➝ Ar⁺ + 2e⁻.
Higher appearance potentials may be found for multiply charged ions:
Ar + e⁻ ➝ Ar²⁺ + 3e⁻.
The atomic number is the number of protons in the nucleus of an atom, which equals the number of electrons revolving around the nucleus in the neutral atom. The atomic number determines the chemical properties of an element and the element’s position in the periodic table: the classification of the chemical elements, in tabular form, in the order of their atomic number. The elements show a periodicity of properties, chemically similar elements recurring in a definite order. The sequence of elements is thus broken into horizontal ‘periods’ and vertical ‘groups’, the elements in each group showing close chemical analogies, i.e., in valency, chemical properties, etc. All the isotopes of an element have the same atomic number, although different isotopes have different mass numbers.
An allowed ‘wave function’ of an electron in an atom is obtained by a solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is −e²/r, where e is the electron charge and r its distance from the nucleus. A precise orbit cannot be considered as in Bohr’s theory of the atom, but the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|²dτ is the probability of finding the electron in the element of volume dτ.
Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the earlier quantum theory of the atom: n, the principal quantum number, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called ‘shells’ and designated the K, L, M shells, etc. l, the azimuthal quantum number, for a given value of n can have values of 0, 1, 2, . . . (n − 1). Thus an electron in the L shell (n = 2) can occupy two subshells with l = 0 and l = 1; similarly, the M shell (n = 3) has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:
√[l(l + 1)] (h/2π)
m, the magnetic quantum number, for a given value of l can have values of −l, −(l − 1), . . . , 0, . . . , (l − 1), l. Thus for a p orbital, for which l = 1, there are in fact three different orbitals, with m = −1, 0, and 1. These orbitals, with the same values of n and l but different m values, have the same energy. The significance of this quantum number is that it shows the number of different levels that would be produced if the atom were subjected to an external magnetic field.
According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of about 5 × 10⁻¹¹ metre. Indeed the maximum probability occurs when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron. Note that although s orbitals (l = 0) are spherical, orbitals with l > 0 have an angular dependence. Finally, the electron in an atom can have a fourth quantum number, ms, characterizing its spin direction. This can be +½ or −½, and according to the Pauli exclusion principle each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
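The way the four quantum numbers account for the periodic table can be made concrete by simple counting: the sketch below enumerates the allowed (n, l, m, ms) combinations and, with one electron per state as the exclusion principle requires, recovers the familiar shell capacities 2n².

    # Sketch: enumerate allowed (n, l, m, ms) states; each shell holds 2n^2 electrons
    for n in range(1, 5):
        states = [(n, l, m, s)
                  for l in range(n)                      # l = 0 ... n-1
                  for m in range(-l, l + 1)              # m = -l ... +l
                  for s in (+0.5, -0.5)]                 # spin up or down
        print(f"n = {n}: {len(states)} states")          # 2, 8, 18, 32 for the K, L, M, N shells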
The wavelength is the least distance in a progressive wave between two surfaces with the same phase. If v is the phase speed and ν the frequency, the wavelength is given by v = νλ. For electromagnetic radiation the phase speed and wavelength in a material medium are equal to their values in free space divided by the ‘refractive index’. The wavelengths of spectral lines are normally specified for free space.
Optical wavelengths are measured absolutely using interferometers or diffraction gratings, or comparatively using a prism spectrometer. The wavelength can only have an exact value for an infinite wave train; if an atomic body emits a quantum in the form of a train of waves of duration τ, the fractional uncertainty of the wavelength, Δλ/λ, is approximately λ/2cτ, where c is the speed in free space. This is associated with the indeterminacy of the energy given by the uncertainty principle.
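The size of this natural broadening is easy to estimate. The sketch below uses an assumed visible wavelength and an assumed excited-state lifetime of ten nanoseconds, giving a fractional wavelength uncertainty of roughly one part in ten million.

    # Sketch: fractional uncertainty d(lambda)/lambda ~ lambda / (2 c tau), assumed values
    lam = 500e-9             # wavelength, m (assumed visible line)
    tau = 10e-9              # duration of the emitted wave train, s (assumed lifetime)
    c = 2.998e8              # speed of light in free space, m/s
    print(lam / (2 * c * tau))                           # about 8e-8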
The wave function, Ψ, is a mathematical quantity analogous to the amplitude of a wave that appears in the equations of wave mechanics, particularly the Schrödinger wave equation. The most generally accepted interpretation is that |Ψ|²dV represents the probability that a particle is located within the volume element dV. De Broglie waves are a set of waves that represent the behaviour, under appropriate conditions, of a particle, e.g., its diffraction by a crystal lattice. The wavelength is given by the ‘de Broglie equation’. They are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. These waves were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. Still, Ψ is in general a complex quantity.
The analogy between ‘Ψ’ and the amplitude of a wave is purely formal. There is no macroscopic physical quantity with which ‘Ψ’ can be identified, in contrast with, for example, the amplitude of an electromagnetic wave, which is expressed in terms of electric and magnetic field intensities.
Overall, there are an infinite number of functions satisfying a wave equation, but only some of these will satisfy the boundary conditions: Ψ must be finite and single-valued at every point, and the spatial derivative must be continuous at an interface. For a particle subject to a law of conservation of number, the integral of |Ψ|²dV over all space must remain equal to 1, since this is the probability that it exists somewhere. To satisfy this condition the wave equation must be of the first order in (dΨ/dt). Wave functions obtained when these conditions are applied form a set of characteristic functions of the Schrödinger wave equation. These are often called eigenfunctions and correspond to a set of fixed energy values in which the system may exist; they describe the stationary states of the system. For certain bound states of a system the eigenfunctions do not change sign on reversing the coordinate axes; these states are said to have even parity. For other states the sign changes on space reversal and the parity is said to be odd.
A recurring case in physics is the eigenvalue problem, which takes the form:
ΩΨ = λΨ
where Ω is some mathematical operation (multiplication by a number, differentiation, etc.) on a function Ψ, which is called the ‘eigenfunction’; λ is called the ‘eigenvalue’, which in a physical system will be identified with an observable quantity. An energy level of an atom or other system, for example, is fixed, or determined, by a given set of quantum numbers and is one of the various quantum states that can be assumed by the atom.
Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations, each of which describes the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to undergo simple harmonic motion in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where N is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a ‘matrix’ equation of the form:
Mχ = ω²χ,
where M is a 3N × 3N matrix called the dynamical matrix, χ is a 3N × 1 column matrix, and ω² is the squared angular frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions χ, which are the normal modes of the system, and corresponding eigenvalues ω². As χ can be expressed as a column vector, χ is a vector in a 3N-dimensional vector space; for this reason, χ is also often called an eigenvector.
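A concrete, very small instance of such an eigenvalue problem is sketched below: two equal masses joined by three identical springs between fixed walls (unit mass and stiffness are assumed). The eigenvalues of the dynamical matrix are the squared frequencies of the in-phase and out-of-phase normal modes, and the eigenvectors are the modes themselves.

    # Sketch: normal modes from M x = w^2 x for two coupled oscillators (assumed unit k/m)
    import numpy as np
    M = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])                          # dynamical matrix for the two coordinates
    eigvals, eigvecs = np.linalg.eigh(M)                 # eigenvalues are the squared frequencies
    print(np.sqrt(eigvals))                              # frequencies 1 and sqrt(3)
    print(eigvecs)                                       # columns are the normal-mode eigenvectors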
When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. The symmetry principles of group theory can then be applied: the symmetry operations of any physical system must possess the properties of a mathematical group. The groups of rotations, both finite and infinite, are important in the analysis of the symmetry of atoms and molecules, and they underlie the quantum theory of angular momentum. Eigenvalue problems arising in the quantum mechanics of atomic or molecular systems yield stationary states corresponding to the normal-mode oscillations of either the electrons in an atom or the atoms within a molecule. Angular momentum quantum numbers correspond to a labelling system used to classify these normal modes, and analysing the transitions between them leads to theoretical predictions of atomic or molecular spectra. This kind of analysis requires an appreciation of the symmetry properties of the molecule: the set of operations (rotations, inversions, etc.) that leave the molecule invariant make up the point group of that molecule. Normal modes sharing the same ω eigenvalues are said to correspond to the irreducible representations of the molecule’s point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.
Eigenvalue problems play a particularly important role in quantum mechanics. In quantum mechanics, physical observables (location, momentum, energy, etc.) are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude at a location χ represents not energy but probability, i.e., the probability that a particle, a localized packet of energy, will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is perceptible only after many location detection events have occurred. A measurement of position on a quantum particle may be written symbolically as:
X Ψ(χ) = χΨ(χ),
where Ψ(χ) is said to be an eigenvector of the location operator and ‘χ’ is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location χ, and |Ψ(χ)|² is the probability that the particle will be found in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations for the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞. The superposition principle holds generally in physics wherever linear phenomena occur. In elasticity, the principle states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; this holds so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.
The superposition of two vibrations, y1 and y2, both of frequency ν, produces a resultant vibration of the same frequency; its amplitude and phase are functions of the component amplitudes and phases. If:
y1 = a1 sin(2πνt + δ1)
y2 = a2 sin(2πνt + δ2)
Then the resultant vibration, y, is given by:
y = y1 + y2 = A sin(2πνt + Δ),
where the amplitude A and phase Δ are both functions of a1, a2, δ1, and δ2.
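A short sketch of this superposition, assuming the standard trigonometric result for combining two sinusoids of equal frequency (the numerical amplitudes and phases are arbitrary):

import numpy as np

def superpose(a1, d1, a2, d2):
    # Resultant of y1 = a1 sin(2*pi*nu*t + d1) and y2 = a2 sin(2*pi*nu*t + d2):
    # y = A sin(2*pi*nu*t + D)
    A = np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos(d1 - d2))
    D = np.arctan2(a1 * np.sin(d1) + a2 * np.sin(d2),
                   a1 * np.cos(d1) + a2 * np.cos(d2))
    return A, D

print(superpose(1.0, 0.0, 1.0, np.pi / 3))   # arbitrary example amplitudes and phases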
The eigenvalue problems in quantum mechanics therefore represent the act of measurement: the eigenvectors of an observable are the possible states (position, in the case of χ) that the quantum system can have. Conjugate attributes such as position and momentum are related by the Heisenberg uncertainty principle, which states that the product of the uncertainty in the measured value of a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate of position (χ) is of the same order of magnitude as the Planck constant. Thus, while an accurate measurement of position is possible, as a result of the uncertainty principle it produces a large momentum spread. Subsequent measurements of the position acquire a spread themselves, which makes continuous monitoring of the position impossible.
As in classical mechanics, eigenvalue problems in quantum mechanics may take differential or matrix forms. Both forms have been shown to be equivalent. The differential form of quantum mechanics is called wave mechanics (Schrödinger), where the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary wave states that satisfy some set of boundary conditions. The matrix form of quantum mechanics is often called matrix mechanics (Born and Heisenberg). The operators are represented by matrices acting on eigenvectors.
The relationship between matrix and wave mechanics is similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which has a matrix representation.
Pauli, in 1925, suggested that each electron could exist in two states with the same orbital motion. Uhlenbeck and Goudsmit interpreted these states as due to the spin of the electron about an axis. The electron is assumed to have an intrinsic angular momentum in addition to any angular momentum due to its orbital motion. This intrinsic angular momentum is called ‘spin’. It is quantized in values of
√[s(s + 1)] h/2π,
where ‘s’ is the ‘spin quantum number’ and ‘h’ the Planck constant. For an electron the component of spin in a given direction can have values of +½ and ‒½, leading to the two possible states. An electron with spin behaves like a small magnet and has an intrinsic magnetic moment; the magneton is the fundamental constant in which such moments are expressed. The circulating current created by the angular momentum ‘p’ of an electron moving in its orbit produces a magnetic moment μ = ep/2m, where ‘e’ and ‘m’ are the charge and mass of the electron. Substituting the quantized relation p = jh/2π (h = the Planck constant; j = the magnetic quantum number) gives μ = jeh/4πm. When j is taken as unity, the quantity eh/4πm is called the Bohr magneton; its value is:
9.274 0780 x 10-24 A m2.
According to the wave mechanics of Dirac, the magnetic moment associated with the spin of the electron should be exactly one Bohr magneton, although quantum electrodynamics shows that a small difference is to be expected. The nuclear magneton, μN, is equal to (me/mp)μB, where mp is the mass of the proton. The value of μN is:
5.050 8240 x 10-27 A m2
The magnetic moment of a proton is, in fact, 2.792 85 nuclear magnetons. The two states of different energy result from interactions between the magnetic field due to the electron’s spin and that caused by its orbital motion. These are two closely spaced states resulting from the two possible spin directions, and these lead to the two lines in the doublet.
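The magneton values quoted above can be checked directly from the defining expressions eh/4πm and (me/mp)μB. A minimal sketch; the constants used below are standard reference values rather than figures taken from the text:

import math

e = 1.602176634e-19      # electron charge, C
h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg

mu_B = e * h / (4 * math.pi * m_e)   # Bohr magneton, eh/4(pi)m
mu_N = mu_B * (m_e / m_p)            # nuclear magneton, (me/mp) * mu_B
print(mu_B)                          # ~9.274e-24 A m2
print(mu_N)                          # ~5.051e-27 A m2
print(2.79285 * mu_N)                # proton magnetic moment, ~1.41e-26 A m2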
In an external magnetic field the angular momentum vector of the electron precesses. For example, if a body spins about its axis of symmetry OC (where O is a fixed point) and OC is itself rotating round an axis OZ fixed outside the body, the body is said to be precessing round OZ. OZ is the precession axis. A gyroscope precesses due to an applied torque called the precessional torque. If the moment of inertia of the body about OC is I and its angular velocity is ω, a torque K, whose axis is perpendicular to the axis of rotation, will produce an angular velocity of precession Ω about an axis perpendicular to both ω and the torque axis, where:
Ω = K/Iω.
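A minimal illustration of Ω = K/Iω; the torque, moment of inertia, and spin rate below are invented for illustration:

def precession_rate(K, I, omega):
    # Angular velocity of precession of a spinning body: Omega = K / (I * omega)
    return K / (I * omega)

# Illustrative values: torque 0.02 N m, moment of inertia 5e-4 kg m2, spin 100 rad/s
print(precession_rate(0.02, 5e-4, 100.0))   # 0.4 rad/s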
Not all orientations of the angular momentum vector relative to the field direction are allowed: there is a quantization so that the component of the angular momentum along the field direction is restricted to certain values in units of h/2π. The angular momentum vector has allowed directions such that the component is mS(h/2π), where mS is the magnetic spin quantum number. For a given value of s, mS has the values s, (s ‒ 1), . . . ‒s. For example, when s = 1, mS is 1, 0, and ‒1. The electron has a spin of ½ and thus mS is +½ and ‒½. Thus, the components of its spin angular momentum along the field direction are ±½(h/2π). This phenomenon is called ‘space quantization’.
The resultant spin of a number of particles is the vector sum of the spins (s) of the individual particles and is given the symbol S. For example, in an atom two electrons each with spin ½ could combine to give a resultant spin of S = ½ + ½ = 1 or a resultant of S = ½ ‒ ½ = 0.
Alternative symbols used for spin are J (for elementary particles) and I (for a nucleus). Most elementary particles have a non-zero spin, which may be either integral or half-integral. The spin of a nucleus is the resultant of the spins of its constituent nucleons.
The most generally accepted interpretation is that |Ψ|²dV represents the probability that a particle is located within the volume element dV; ‘Ψ’ itself is often a complex quantity. The analogy between ‘Ψ’ and the amplitude of a wave is purely formal. There is no macroscopic physical quantity with which ‘Ψ’ can be identified, in contrast with, for example, the amplitude of an electromagnetic wave, which is expressed in terms of electric and magnetic field intensities. There are an infinite number of functions satisfying a wave equation, but only some of these will satisfy the boundary conditions. ‘Ψ’ must be finite and single-valued at each point, and its spatial derivatives must be continuous at an interface. For a particle subject to a law of conservation of particle number, the integral of
|Ψ|²dV over all space must remain equal to 1, since this is the probability that the particle exists somewhere. To satisfy this condition the wave equation must be of the first order in (dΨ/dt). Wave functions obtained when these conditions are applied form a set of ‘characteristic functions’ of the Schrödinger wave equation. These are often called ‘eigenfunctions’ and correspond to a set of fixed energy values in which the system may exist, called ‘eigenvalues’. Energy eigenfunctions describe stationary states of a system. For certain bound states of a system the eigenfunctions do not change sign on reversing the co-ordinate axes. These states are said to have ‘even parity’. For other states the sign changes on space reversal and the parity is said to be ‘odd’.
The wavelength is the least distance in a progressive wave between two surfaces with the same phase. If ‘v’ is the phase speed and ‘ν’ the frequency, the wavelength is given by v = νλ. For electromagnetic radiation the phase speed and wavelength in a material medium are equal to their values in free space divided by the ‘refractive index’. The wavelengths of spectral lines are normally specified for free space. Optical wavelengths are measured absolutely using interferometers or diffraction gratings, or comparatively using a prism spectrometer.
The wavelength can only have an exact value for an infinite wave train. If an atomic body emits a quantum in the form of a train of waves of duration ‘τ’, the fractional uncertainty of the wavelength, Δλ/λ, is approximately λ/2πcτ, where ‘c’ is the speed of light in free space. This is associated with the indeterminacy of the energy given by the ‘uncertainty principle’.
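As a rough numerical sketch of Δλ/λ ≈ λ/2πcτ (the wavelength and wave-train duration below are illustrative only):

import math

wavelength = 500e-9   # illustrative optical wavelength, m
c = 2.99792458e8      # speed of light in free space, m/s
tau = 1e-8            # illustrative duration of the emitted wave train, s

frac = wavelength / (2 * math.pi * c * tau)   # fractional uncertainty, lambda/2(pi)c(tau)
print(frac)                # ~2.7e-8
print(frac * wavelength)   # corresponding spread in wavelength, ~1.3e-14 m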
Angular momentum is the moment of momentum about an axis (symbol: L), the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a pseudo-vector quantity and is conserved in an isolated system. The moment of inertia of a body about an axis is the sum of the products of the mass of each particle of the body and the square of its perpendicular distance from the axis; this sum is replaced by an integration in the case of a continuous body. For a rigid body moving about a fixed axis, the laws of motion have the same form as those of rectilinear motion, with the moment of inertia replacing mass, angular velocity replacing linear velocity, angular momentum replacing linear momentum, etc. Hence the kinetic energy of a body rotating about a fixed axis with angular velocity ω is ½Iω2, which corresponds to ½mv2 for the kinetic energy of a body of mass ‘m’ translated with velocity ‘v’.
The linear momentum ‘p’ of a particle is the product of the mass and the velocity of the particle. It is a vector quantity directed along the motion of the particle; the linear momentum of a body or of a system of particles is the vector sum of the linear momenta of the individual particles. If a body of mass ‘M’ is translated (moved so that all points travel in parallel directions through equal distances) with a velocity ‘V’, its momentum is ‘MV’, which is the momentum of a particle of mass ‘M’ at the centre of gravity of the body. Angular momentum, the product of the moment of inertia and the angular velocity, is a pseudo-vector quantity and is conserved in an isolated system; the angular velocity itself is equal to the linear velocity divided by the radius and is measured in radians per second.
If the moment of inertia of a body of mass ‘M’ about an axis through the centre of mass is I, the moment of inertia about a parallel axis a distance ‘h’ from the first axis is I + Mh2. If the radius of gyration about the first axis is ‘k’, it is √(k2 + h2) about the second. The moment of inertia of a uniform solid body about an axis of symmetry is given by the product of the mass and the sum of the squares of the other semi-axes, divided by 3, 4, or 5 according to whether the body is rectangular, elliptical, or ellipsoidal.
The circle is a special case of the ellipse. Routh’s rule works for a circular or elliptical cylinder or disc, and it works for all three axes of symmetry. For example, for a circular disc of radius ‘a’ and mass ‘M’, the moment of inertia about an axis through the centre of the disc and lying (a) perpendicular to the disc, (b) in the plane of the disc is:
(a) ¼M(a2 + a2) = ½Ma2
(b) ¼Ma2.
A formula for calculating moments of inertia I:
I = mass x (a2/(3 + n) + b2/(3 + nʹ)),
where n and nʹ are the numbers of principal curvatures of the surfaces that terminate the semiaxes in question, and ‘a’ and ‘b’ are the lengths of the semiaxes. Thus, if the body is a rectangular parallelepiped, n = nʹ = 0, and
I = mass x (a2/3 + b2/3).
If the body is a cylinder, then, for an axis through its centre perpendicular to the cylinder axis, n = 0 and nʹ = 1, so that
I = mass x (a2 / 3 + b2 /4).
If ‘I’ is desired about the axis of the cylinder, then n= nʹ = 1 and
a = b = r (the cylinder radius), and I = mass x (r2/2).
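Routh’s rule and the parallel-axis theorem quoted earlier are easy to encode; the masses and dimensions below are illustrative only, and the function names are of course arbitrary:

def routh(mass, a, b, n, n_prime):
    # Routh's rule: I = mass * (a^2/(3 + n) + b^2/(3 + n')), with n, n' the numbers
    # of principal curvatures of the surfaces terminating the semiaxes a and b.
    return mass * (a**2 / (3 + n) + b**2 / (3 + n_prime))

def parallel_axis(I_cm, mass, h):
    # Moment of inertia about an axis a distance h from a parallel axis
    # through the centre of mass: I = I_cm + M h^2
    return I_cm + mass * h**2

M, r = 2.0, 0.1                                # illustrative mass (kg) and radius (m)
print(routh(M, r, r, 1, 1))                    # solid cylinder about its own axis: M r^2/2
print(routh(M, r, 0.0, 1, 0))                  # circular disc about an axis in its plane: M r^2/4
print(parallel_axis(M * r**2 / 2, M, 0.05))    # same cylinder about a parallel axis 5 cm away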
A matrix is an array of mathematical quantities that is similar to a determinant but differs from it in not having a numerical value in the ordinary sense of the term; it is manipulated according to the rules of matrix algebra. An array of ‘mn’ numbers set out in ‘m’ rows and ‘n’ columns is a matrix of order m x n. The separate numbers are usually called elements. Such arrays of numbers, treated as single entities and manipulated by the rules of matrix algebra, are of use whenever simultaneous equations are found, e.g., in changing from one set of Cartesian axes to another set inclined to the first, in quantum theory, and in electrical networks. Matrices are very prominent in the mathematical expression of quantum mechanics.
Matrix mechanics is a mathematical form of quantum mechanics that was developed by Born and Heisenberg, originally simultaneously with, but independently of, wave mechanics. It is equivalent to wave mechanics, but in it the wave function is replaced by vectors in an abstract space (Hilbert space), and observable quantities of the physical world, such as energy, momentum, co-ordinates, etc., are represented by matrices.
The theory involves the idea that a measurement on a system disturbs, to some extent, the system itself. With large systems this is of no consequence, and the system can be treated by classical mechanics. On the atomic scale, however, the results depend on the order in which the observations are made. Thus, if ‘p’ denotes an observation of a component of momentum and ‘q’ an observation of the corresponding co-ordinate, then pq ≠ qp. Here ‘p’ and ‘q’ are not physical quantities but operators in matrix mechanics, and they obey the relationship:
pq ‒ qp = ih/2π
where ‘h’ is the Planck constant, equal to 6.626 076 x 10-34 J s. The matrix elements are connected with the transition probability between various states of the system.
A vector is a quantity with magnitude and direction. It can be represented by a line whose length is proportional to the magnitude and whose direction is that of the vector, or by three components in a rectangular co-ordinate system. For two unit vectors at an angle of 90°, the scalar product is zero and the magnitude of the vector product is one; for parallel unit vectors the scalar product is one and the vector product is zero.
A true vector, or polar vector, involves a displacement or virtual displacement. Polar vectors include velocity, acceleration, force, and electric and magnetic field strength. The signs of their components are reversed on reversing the co-ordinate axes. Their dimensions include length to an odd power.
A pseudo-vector, or axial vector, involves the orientation of an axis in space. The direction is conventionally obtained in a right-handed system by sighting along the axis so that the rotation appears clockwise. Pseudo-vectors include angular velocity, vector area, and magnetic flux density. The signs of their components are unchanged on reversing the co-ordinate axes. Their dimensions include length to an even power.
Polar vectors and axial vectors obey the same laws of vector analysis. (a) Vector addition: if two vectors ‘A’ and ‘B’ are represented in magnitude and direction by the adjacent sides of a parallelogram, the diagonal represents the vector sum (A + B) in magnitude and direction; forces, velocities, etc., combine in this way. (b) Vector multiplication: there are two ways of multiplying vectors. (i) The ‘scalar product’ of two vectors equals the product of their magnitudes and the cosine of the angle between them, and is a scalar quantity. It is usually written
A • B ( reads as A dot B )
(ii) The vector product of two vectors A and B is defined as a pseudo-vector of magnitude AB sin θ, having a direction perpendicular to the plane containing them. The sense of the product along this perpendicular is defined by the rule: if ‘A’ is turned toward ‘B’ through the smaller angle, this rotation appears clockwise when sighting along the direction of the vector product. A vector product is usually written:
A x B ( reads as A cross B ).
Vectors should be distinguished from scalars by printing the symbols in bold italic letters.
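A small illustration of the two products, using arbitrary example vectors:

import numpy as np

A = np.array([1.0, 2.0, 3.0])   # arbitrary example vectors
B = np.array([4.0, 5.0, 6.0])

print(np.dot(A, B))      # scalar product A . B = |A||B| cos(theta); here 32.0
print(np.cross(A, B))    # vector product A x B, a pseudo-vector; here [-3.  6. -3.]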
A unified field theory seeks to unite the properties of the gravitational, electromagnetic, weak, and strong interactions and to predict all their characteristics. At present it is not known whether such a theory can be developed, or whether the physical universe is amenable to a single analysis in terms of the current concepts of physics. There are unsolved problems in using the framework of relativistic quantum field theory to encompass the four fundamental interactions. Theories based on extended objects, such as superstring and supersymmetric theories, may, however, eventually enable such a synthesis to be achieved.
A grand unified theory is a unified quantum field theory of the electromagnetic, weak, and strong interactions. In most models, the known interactions are viewed as a low-energy manifestation of a single unified interaction, the unification taking place at energies (typically 1015 GeV) very much higher than those currently accessible in particle accelerators. One feature of a grand unified theory is that baryon number and lepton number would no longer be absolutely conserved quantum numbers, with the consequence that processes such as proton decay, for example the decay of a proton into a positron and a π0, p → e+π0, would be expected to be observed. Predicted lifetimes for proton decay are very long, typically 1035 years. Searches for proton decay are being undertaken by many groups, using large underground detectors, so far without success.
Gravitation is one of the mutual attractions binding the universe together, and it is independent of electromagnetism and of the strong and weak nuclear interactions. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetric and so can be treated as point particles to a very good approximation. On this assumption Newton showed that his law is consistent with Kepler’s laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances involved, but in the past few years evidence has been found against both.
The size of a gravitational field at any point is given by the force exerted on unit mass at that point. The field intensity at a distance ‘χ’ from a point mass ‘m’ is therefore Gm/χ2, and it acts toward ‘m’. Gravitational field strength is measured in newtons per kilogram. The gravitational potential ‘V’ at that point is the work done in moving a unit mass from infinity to the point against the field; for a point mass:
V = Gm ∫∞χ dχ/χ2 = ‒Gm/χ.
‘V’ is a scalar quantity measured in joules per kilogram. The following special cases are also important: (a) the potential at a point a distance χ from the centre of a hollow homogeneous spherical shell of mass ‘m’ and outside the shell is:
V = ‒Gm / χ.
The potential is the same as if the mass of the shell were concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:
V = ‒Gm / r
where ‘r’ is the radius of the shell. Thus, there is no resultant force acting at any point inside the shell, since no potential difference exists between any two points. (c) The potential at a point a distance ‘χ’ from the centre of a homogeneous solid sphere and outside the sphere is the same as that for a shell:
V = ‒Gm / χ
(d) At a point inside the sphere, of radius ‘r’:
V = ‒Gm( 3r2 ‒ χ2 ) /2r3.
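Cases (c) and (d) can be combined into one short sketch; the Earth-like mass and radius used below are round illustrative figures:

G = 6.674e-11   # gravitational constant, N m2 kg-2

def potential_solid_sphere(m, r, x):
    # Gravitational potential of a homogeneous solid sphere of mass m and radius r
    # at a distance x from its centre (cases (c) and (d) above).
    if x >= r:
        return -G * m / x
    return -G * m * (3 * r**2 - x**2) / (2 * r**3)

# Round Earth-like figures, for illustration only.
print(potential_solid_sphere(5.97e24, 6.37e6, 7.0e6))   # outside the sphere, J/kg
print(potential_solid_sphere(5.97e24, 6.37e6, 0.0))     # at the centre, J/kg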
The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth’s gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space-time, causing it to become curved. It is this curvature of space-time, produced by the presence of mass, that controls the natural motions of bodies. General relativity may thus be considered as a theory of gravitation, differences between it and Newtonian gravitation only appearing when the gravitational fields become very strong, as with black holes and neutron stars, or when very accurate measurements can be made.
The electromagnetic interaction is the interaction between elementary particles arising as a consequence of their associated electric and magnetic fields. The electrostatic force between charged particles is an example. This force may be described in terms of the exchange of virtual photons: because of the uncertainty principle, the law of conservation of mass and energy can be violated by an amount ΔE provided this only occurs for a time Δt such that:
ΔEΔt ≤ h/4π.
This makes it possible for particles to be created for short periods of time where their creation would normally violate conservation of energy. These particles are called ‘virtual particles’. For example, in a complete vacuum, in which no ‘real’ particles exist, pairs of virtual electrons and positrons are continuously forming and rapidly disappearing (in less than 10-21 seconds). Other conservation laws, such as those applying to angular momentum, isospin, etc., cannot be violated even for short periods of time.
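As an order-of-magnitude sketch, the longest time for which a virtual electron-positron pair (ΔE ≈ 2 x 0.511 MeV) can exist follows from the relation ΔEΔt ≤ h/4π quoted above; the constants are standard values:

import math

h = 6.62607015e-34      # Planck constant, J s
MeV = 1.602176634e-13   # one MeV expressed in joules

dE = 2 * 0.511 * MeV          # energy borrowed to create an e+ e- pair
dt = h / (4 * math.pi * dE)   # longest time allowed by dE * dt <= h/4(pi)
print(dt)                     # ~3e-22 s for this choice of dE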
Because the strength of the electromagnetic interaction lies between those of the strong and weak interactions, particles decaying by electromagnetic interaction do so with a lifetime shorter than those decaying by weak interaction, but longer than those decaying under the influence of strong interaction. An example of electromagnetic decay is:
π0 → γ + γ.
This decay process, with a mean lifetime of 8.4 x 10-17 seconds, may be understood as the annihilation of the quark and the antiquark making up the π0 into a pair of photons. The quantum numbers that have to be conserved in electromagnetic interactions are angular momentum, charge, baryon number, isospin quantum number I3, strangeness, charm, parity, and charge conjugation parity.
The quantum electrodynamic description of photon-mediated electromagnetic interactions has been verified over a great range of distances and has led to highly accurate predictions. Quantum electrodynamics is a gauge theory: the electromagnetic force can be derived by requiring that the equation describing the motion of a charged particle remain unchanged under local symmetry operations. Specifically, if the phase of the wave function by which the charged particle is described can be altered independently at every point in space, quantum electrodynamics requires that the electromagnetic interaction and its mediating photon exist in order to maintain the symmetry.
The weak interaction is a kind of interaction between elementary particles that is weaker than the strong interaction by a factor of about 1012. When strong interactions can occur in reactions involving elementary particles, the weak interactions are usually unobserved. However, sometimes strong and electromagnetic interactions are prevented because they would violate the conservation of some quantum number, e.g., strangeness, that has to be conserved in such reactions. When this happens, weak interactions may still occur.
The weak interaction operates over an extremely short range (about 2 x 10-18 m). It is mediated by the exchange of a very heavy particle (a gauge boson), which may be the charged W+ or W‒ particle (mass about 80 GeV/c2) or the neutral Z0 particle (mass about 91 GeV/c2). The gauge bosons that mediate the weak interactions are analogous to the photon that mediates the electromagnetic interaction. Weak interactions mediated by W particles involve a change in the charge and hence the identity of the reacting particle. The neutral Z0 does not lead to such a change in identity. Both sorts of weak interaction can violate parity.
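The quoted range follows from the usual estimate r ≈ ħ/(mc) for a force mediated by a boson of mass m; a minimal sketch, using the standard value ħc ≈ 197.3 MeV fm:

hbar_c = 197.327   # hbar * c in MeV fm, where 1 fm = 1e-15 m

def boson_range_m(mass_GeV):
    # Range of a force mediated by a boson of mass m: r ~ hbar/(m c) = hbar*c/(m c^2)
    return hbar_c / (mass_GeV * 1000.0) * 1e-15

print(boson_range_m(80.0))   # ~2.5e-18 m for the W boson
print(boson_range_m(91.0))   # ~2.2e-18 m for the Z boson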
Most of the long-lived elementary particles decay as a result of weak interactions. For example, the kaon decay K+ ➝ μ+ vμ may be thought of as being due to the annihilation of the quark and antiquark in the K+ to produce a virtual W+ boson, which then converts into a positive muon and a neutrino. This decay cannot proceed by the strong or the electromagnetic interaction because strangeness is not conserved. Beta decay is the most common example of weak interaction decay. Because the interaction is so weak, particles that can only decay by weak interactions do so slowly, i.e., they have very long lifetimes. Other examples of weak interactions include the scattering of neutrinos by other particles and certain very small effects on electrons within the atom.
Understanding of weak interactions is based on the electroweak theory, in which it is proposed that the weak and electromagnetic interactions are different manifestations of a single underlying force, known as the electroweak force. Many of the predictions of the theory have been confirmed experimentally.
The electroweak theory is a gauge theory, also called quantum flavour dynamics, that provides a unified description of both the electromagnetic and weak interactions. In the Glashow-Weinberg-Salam theory, also known as the standard model, electroweak interactions arise from the exchange of photons and of massive charged W+ and neutral Z0 bosons of spin 1 between quarks and leptons. The W-particle is the extremely massive charged particle, symbol W+ or W‒, that mediates certain types of weak interaction; the neutral Z-particle, or Z boson, symbol Z0, mediates the other types. Both are gauge bosons. The W- and Z-particles were first detected at CERN (1983) by studying collisions between protons and antiprotons with total energy 540 GeV in centre-of-mass co-ordinates. The rest masses were determined as about 80 GeV/c2 and 91 GeV/c2 for the W- and Z-particles, respectively, as had been predicted by the electroweak theory.
The interaction strengths of the gauge bosons to quarks and leptons, and the masses of the W and Z bosons themselves, are predicted by the theory in terms of a single parameter, the Weinberg angle θW, which must be determined by experiment. The Glashow-Weinberg-Salam theory successfully describes all existing data from a wide variety of electroweak processes, such as neutrino-nucleon, neutrino-electron, and electron-nucleon scattering. A major success of the model was the direct observation in 1983-84 of the W± and Z0 bosons with the predicted masses of 80 and 91 GeV/c2 in high energy proton-antiproton interactions. The decay modes of the W± and Z0 bosons have been studied in very high energy pp and e+ e‒ interactions and found to be in good agreement with the standard model. The six known types (or flavours) of quarks and the six known leptons are grouped into three separate generations of particles as follows:
1st generation: e‒ ve u d
2nd generation: μ‒ vμ c s
3rd generation: τ‒ vτ t b
The second and third generations are essentially copies of the first generation, which contains the electron and the ‘up’ and ‘down’ quarks making up the proton and neutron, but involve particles of higher mass. Communication between the different generations occurs only in the quark sector and only for interactions involving W± bosons. Studies of Z0 boson production in very high energy electron-positron interactions have shown that no further generations of quarks and leptons can exist in nature (an arbitrary number of generations is a priori possible within the standard model), provided only that any new neutrinos are approximately massless.
The Glashow-Weinberg-Salam model also predicts the existence of a heavy spin-0 particle, not yet observed experimentally, known as the Higgs boson. A spontaneous symmetry-breaking mechanism is used to generate non-zero masses for the W± and Z bosons in the electroweak theory. The mechanism postulates the existence of two new complex fields, φ(χμ) = φ1 + iφ2 and Ψ(χμ) = Ψ1 + iΨ2, which are functions of the space-time points χμ = (χ, y, z, t) and form a doublet (φ, Ψ). This doublet of complex fields transforms in the same way as leptons and quarks under electroweak gauge transformations. Such gauge transformations rotate φ1, φ2, Ψ1, Ψ2 into each other without changing the physics.
The vacuum does not share the symmetry of the fields (φ, Ψ), and a spontaneous breaking of the vacuum symmetry occurs via the Higgs mechanism. Consequently, the fields φ and Ψ have non-zero values in the vacuum. A particular orientation of φ1, φ2, Ψ1, Ψ2 may be chosen so that all of the components have zero vacuum value except one (φ1). This component responds to electroweak fields in a way that is analogous to the response of a plasma to electromagnetic fields. Plasmas oscillate in the presence of electromagnetic waves; however, electromagnetic waves can only propagate at frequencies above the plasma frequency ωp, given by the expression:
ωp2 = ne2/(mε)
where ‘n’ is the electron number density, ‘e’ the electron charge, ‘m’ the electron mass, and ‘ε’ the permittivity of the plasma. In quantum field theory, this minimum frequency for electromagnetic waves may be thought of as a minimum energy for the existence of a quantum of the electromagnetic field (a photon) within the plasma. This minimum energy acts as a mass for the photon, which becomes the field quantum of a finite-ranged force. Thus, in a plasma, photons acquire a mass and the electromagnetic interaction has a finite range.
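A minimal sketch of the plasma-frequency expression, assuming for illustration the free-space permittivity and an arbitrary electron density:

import math

e = 1.602176634e-19      # electron charge, C
m_e = 9.1093837015e-31   # electron mass, kg
eps = 8.8541878128e-12   # permittivity, here taken as that of free space, F/m

def plasma_frequency(n):
    # omega_p = sqrt(n e^2 / (m epsilon)), with n the electron number density in m^-3
    return math.sqrt(n * e**2 / (m_e * eps))

print(plasma_frequency(1e18))   # ~5.6e10 rad/s for an illustrative laboratory plasma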
The vacuum field φ1 responds to weak fields by giving a mass and a finite range to the W± and Z bosons; the electromagnetic field, however, is unaffected by the presence of φ1, so the photon remains massless. The mass acquired by the weak interaction bosons is proportional to the vacuum value of φ1 and to the weak charge strength. A quantum of the field φ1 is an electrically neutral particle called the Higgs boson. It interacts with all massive particles with a coupling that is proportional to their mass. The standard model does not predict the mass of the Higgs boson, but it is known that it cannot be too heavy (not much more than about 1000 proton masses), since this would lead to complicated self-interactions; such self-interactions are not believed to be present, because the theory does not account for them and yet successfully predicts the masses of the W± and Z bosons. The mass of the Higgs boson results from the so-called spontaneous symmetry-breaking mechanism used to generate non-zero masses for the W± and Z0 bosons, and the particle is presumably too massive to have been produced in existing particle accelerators.
We now turn to the third binding force, the strong interaction: interactions between elementary particles involving the strong interaction force. This force is about one hundred times greater than the electromagnetic force between charged elementary particles. However, it is a short-range force: it is only important for particles separated by a distance of less than about 10-15 m, and it is the force that holds protons and neutrons together in atomic nuclei. For ‘soft’ interactions between hadrons, where small-scale transfers of momentum are involved, the strong interactions may be described in terms of the exchange of virtual hadrons, just as electromagnetic interactions between charged particles may be described in terms of the exchange of virtual photons. At a more fundamental level, the strong interaction arises as the result of the exchange of gluons between quarks and/or antiquarks, as described by quantum chromodynamics.
In the hadron exchange picture, any hadron can act as the exchanged particle provided certain quantum numbers are conserved. These quantum numbers are the total angular momentum, charge, baryon number, isospin (both I and I3), strangeness, parity, charge conjugation parity, and G-parity. Strong interactions are investigated experimentally by observing how beams of high-energy hadrons are scattered when they collide with other hadrons. Two hadrons colliding at high energy will only remain near to each other for a very short time. However, during the collision they may come sufficiently close to each other for a strong interaction to occur by the exchange of a virtual particle. As a result of this interaction, the two colliding particles will be deflected (scattered) from their original paths. If the virtual hadron exchanged during the interaction carries some quantum numbers from one particle to the other, the particles found after the collision may differ from those before it. Sometimes the number of particles is increased in a collision.
In hadron-hadron interactions, the number of hadrons produced increases approximately logarithmically with the total centre-of-mass energy, reaching about 50 particles for proton-antiproton collisions at 900 GeV, for example. In some of these collisions, two oppositely directed collimated ‘jets’ of hadrons are produced, which are interpreted as being due to an underlying interaction involving the exchange of an energetic gluon between, for example, a quark from the proton and an antiquark from the antiproton. The scattered quark and antiquark cannot exist as free particles; instead, each ‘fragments’ into a large number of hadrons (mostly pions and kaons) travelling approximately along the original quark or antiquark direction. This results in collimated jets of hadrons that can be detected experimentally. Studies of this and other similar processes are in good agreement with the predictions of quantum chromodynamics.
The electromagnetic interaction is the interaction between elementary particles arising as a consequence of their associated electric and magnetic fields. The electrostatic force between charged particles is an example. This force may be described in terms of the exchange of virtual photons. Because its strength lies between those of the strong and weak interactions, particles decaying by electromagnetic interaction do so with a lifetime shorter than those decaying by weak interaction, but longer than those decaying by strong interaction. An example of electromagnetic decay is:
π0 ➝ ϒ + ϒ.
This decay process (mean lifetime 8.4 x 10-17 seconds) may be understood as the annihilation of the quark and the antiquark making up the π0 into a pair of photons. The following quantum numbers have to be conserved in electromagnetic interactions: angular momentum, charge, baryon number, isospin quantum number I3, strangeness, charm, parity, and charge conjugation parity.
An elementary particle is a particle that, as far as is known, is not composed of other, simpler particles. Elementary particles represent the most basic constituents of matter and are also the carriers of the fundamental forces between particles, namely the electromagnetic, weak, strong, and gravitational forces. The known elementary particles can be grouped into three classes: leptons, quarks, and gauge bosons. Hadrons, strongly interacting particles such as the proton and neutron, which are bound states of quarks and antiquarks, are also sometimes called elementary particles.
Leptons undergo electromagnetic and weak interactions, but not strong interactions. Six leptons are known: the negatively charged electron, muon, and tau lepton, plus three associated neutrinos: ve, vμ, and vτ. The electron is a stable particle, but the muon and tau leptons decay through the weak interaction with lifetimes of about 10-6 and 10-13 seconds. Neutrinos are stable neutral leptons, which interact only through the weak interaction.
Corresponding to the leptons are six quarks, namely the up (u), charm (c), and top (t) quarks with electric charge equal to +⅔ that of the proton, and the down (d), strange (s), and bottom (b) quarks of charge -⅓ the proton charge. Quarks have not been observed experimentally as free particles, but reveal their existence only indirectly in high-energy scattering experiments and through patterns observed in the properties of hadrons. They are believed to be permanently confined within hadrons, either in baryons, half-integer spin hadrons containing three quarks, or in mesons, integer spin hadrons containing a quark and an antiquark. The proton, for example, is a baryon containing two up (u) quarks and a down (d) quark, while the π+ is a positively charged meson containing an up quark and an anti-down antiquark. The only hadron that is stable as a free particle is the proton. The neutron is unstable when free. Within a nucleus, protons and neutrons are generally both stable, but either particle may transform into the other by beta decay or electron capture.
Interactions between quarks and leptons are mediated by the exchange of particles known as ‘gauge bosons’, specifically the photon for electromagnetic interactions, W± and Z0 bosons for the weak interaction, and eight massless gluons in the case of the strong interaction.
A class of eigenvalue problems in physics takes the form
ΩΨ = λΨ,
where ‘Ω’ is some mathematical operation (multiplication by a number, differentiation, etc.) on a function ‘Ψ’, which is called the ‘eigenfunction’; ‘λ’ is called the eigenvalue, which in a physical system will be identified with an observable quantity. The wave function ‘Ψ’ is analogous to the amplitude of a wave and appears in the equations of wave mechanics, particularly the Schrödinger wave equation. The most generally accepted interpretation is that
|Ψ|²dV represents the probability that a particle is located within the volume element dV. A particle of mass ‘m’ moving with a velocity ‘v’ will, under suitable experimental conditions, exhibit the characteristics of a wave of wavelength λ, given by the de Broglie equation λ = h/mv, where ‘h’ is the Planck constant (6.626 076 x 10-34 J s). This equation is the basis of wave mechanics.
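As a quick sketch of the de Broglie relation λ = h/mv (the electron speed chosen below is arbitrary and non-relativistic; the constants are the values quoted in the text):

h = 6.626076e-34     # Planck constant, J s
m_e = 9.1093897e-31  # electron rest mass, kg

v = 1.0e6                    # arbitrary non-relativistic electron speed, m/s
wavelength = h / (m_e * v)   # de Broglie relation, lambda = h/mv
print(wavelength)            # ~7.3e-10 m, comparable with atomic spacings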
Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations. Each differential equation describes the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to have a simple harmonic motion in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where ‘N’ is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a matrix equation of the form:
Mχ = ω2χ
where ‘M’ is an N x N matrix called the ‘dynamical matrix’, χ is an N x 1 column matrix, and ω2 is the square of an angular frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions ‘χ’, which are the normal modes of the system, with corresponding eigenvalues ω2. As ‘χ’ can be expressed as a column vector, χ is a vector in an N-dimensional vector space. For this reason, χ is often called an eigenvector.
When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. The symmetry principles of ‘group theory’ can then be applied, which classify normal modes according to their ‘ω’ eigenvalues (frequencies). This kind of analysis requires an appreciation of the symmetry properties of the molecule. The set of operations (rotations, inversions, etc.) that leave the molecule invariant make up the ‘point group’ of that molecule. Normal modes sharing the same ‘ω’ eigenvalues are said to correspond to the ‘irreducible representations’ of the molecule’s point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.
Eigenvalue problems play a particularly important role in quantum mechanics. In quantum mechanics, physical observables (location, momentum, energy, etc.) are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude (at a location χ) represents not energy but probability, i.e., the probability that a particle (a localized packet of energy) will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is perceptible only after many location detection events have occurred. A measurement of position on a quantum particle may be written symbolically as:
X Ψ( χ ) = χΨ( χ )
where Ψ(χ) is said to be an eigenvector of the location operator and ‘χ’ is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location χ, and |Ψ(χ)|² is the probability that the particle will be located in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations for the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞. The superposition principle holds wherever linear phenomena occur. In elasticity, the principle states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; this is true so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.
The eigenvalue problem in quantum mechanics therefore represents the act of measurement. Eigenvectors of an observable represent the possible states (position, in the case of χ) that the quantum system can have. Attributes of a quantum system such as position and momentum are related by the Heisenberg uncertainty principle, which states that the product of the uncertainty in the measured value of a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate of position (χ) is of the same order of magnitude as the Planck constant. Attributes related in this way are called ‘conjugate’ attributes. Thus, while an accurate measurement of position is possible, as a result of the uncertainty principle it produces a large momentum spread. Subsequent measurements of the position acquire a spread themselves, which makes the continuous monitoring of the position impossible.
The eigenvalues are the values that observables take on within these quantum states. As in classical mechanics, eigenvalue problems in quantum mechanics may take differential or matrix forms. Both forms have been shown to be equivalent. The differential form of quantum mechanics is called ‘wave mechanics’ (Schrödinger), where the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary wave states that satisfy some set of boundary conditions. The matrix form of quantum mechanics is often called matrix mechanics (Born and Heisenberg). The operators are represented by matrices acting on eigenvectors.
The relationship between matrix and wave mechanics is very similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which has a matrix representation.
Once again, the Heisenberg uncertainty relation, or indeterminacy principle, of quantum mechanics associates the physical properties of particles into pairs such that both members of a pair cannot together be measured to better than a certain degree of accuracy. If ‘A’ and ‘V’ form such a pair, called a conjugate pair, then ΔAΔV > k, where ‘k’ is a constant and ΔA and ΔV are the spreads in the experimental values of the attributes ‘A’ and ‘V’. The best-known instance of the relation connects the position and momentum of an electron: ΔpΔχ > h, where ‘h’ is the Planck constant. This is the Heisenberg uncertainty principle. The usual value given for Planck’s constant is 6.6 x 10-27 erg s. Since Planck’s constant is not zero, mathematical analysis reveals the following: the ‘spread’, or uncertainty, in position times the ‘spread’, or uncertainty, in momentum is greater than, or possibly equal to, the value of the constant or, more accurately, Planck’s constant divided by 2π. If we choose to know momentum exactly, we know nothing about position, and vice versa.
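A minimal numerical sketch, using the sharper standard form ΔχΔp ≥ ħ/2 of the relation (the momentum spread chosen below is arbitrary):

import math

h = 6.62607015e-34       # Planck constant, J s
hbar = h / (2 * math.pi)

def min_position_spread(dp):
    # Standard form of the uncertainty relation: dx * dp >= hbar/2, so the smallest
    # position spread compatible with a momentum spread dp is hbar/(2 dp).
    return hbar / (2 * dp)

print(min_position_spread(1e-24))   # ~5.3e-11 m for a momentum spread of 1e-24 kg m/s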
The presence of Planck’s constant means that in quantum physics we face a situation in which the mathematical theory does not allow precise prediction of, or exist in exact correspondence with, physical reality. If nature did not insist on making changes or transitions in precise chunks of Planck’s quantum of action, or in multiples of these chunks, there would be no crisis. However, whether this indeterminacy is seen as a cancerous growth in the body of an otherwise perfect knowledge of the physical world or as grounds for believing, in principle at least, in human freedom, one thing appears certain: it is an indelible feature of our understanding of nature.
In order to explain further how fundamental the quantum of action is to our present understanding of nature, let us attempt to do what quantum physics says we cannot do and visualize its role in the simplest of all atoms, the hydrogen atom. Imagine standing at the centre of the Sky Dome, at roughly where the pitcher’s mound is. Place a grain of salt on the mound, and picture a speck of dust moving furiously around the outskirts of the stadium’s circle, with the grain of salt at its centre. This represents, roughly, the relative size of the nucleus and the distance between electron and nucleus inside the hydrogen atom when imaged in its particle aspect.
In quantum physics, however, the hydrogen atom cannot be visualized with such macro-level analogies. The orbit of the electron is not a circle in which a planet-like object moves, and each orbit is described in terms of a probability distribution for finding the electron in an average position corresponding to each orbit, as opposed to an actual position. Without observation or measurement, the electron could be in some sense anywhere or everywhere within the probability distribution. Also, the space between probability distributions is not empty; it is infused with energetic vibrations capable of manifesting themselves as quanta.
The energy levels manifest at certain distances because transitions between orbits occur in precise units of Planck’s constant. If we attempt to observe or measure where the particle-like aspect of the electron is, the existence of Planck’s constant will always prevent us from knowing precisely all the properties of that electron that we might presume to be there without measurement. As in the two-slit experiment, our presence as observers and what we choose to measure or observe are inextricably linked to the results obtained. Since all complex molecules are built from simpler atoms, what applies to the hydrogen atom applies generally to all material substances.
The grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strict scientific terms. After all, the completeness of all previous physical theories was measured against that criterion with enormous success. Since it was this success that gave physicists the reputation of being able to disclose physical reality with magnificent exactitude, perhaps a more complex quantum theory will emerge by continuing to insist on this requirement.
All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or an unavoidable reality requires a very different criterion for determining the completeness of physical theory. The new measure for a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.
If a theory does so and continues to do so, which is distinctly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well; it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between physical theory and physical reality. Another measure of success in physical theory is also met by quantum physics: elegance and simplicity. The quantum recipe for computing probabilities given by the wave function is straightforward and can be successfully employed by any undergraduate physics student: take the square of the wave amplitude and compute the probability that what can be measured or observed has a certain value. Yet there is a profound difference between the recipe for calculating quantum probabilities and the recipe for calculating probabilities in classical physics.
In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions, and then taking the square of the amplitude. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron is going to end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, one would simply add the probabilities of the two alternative ways and let it go at that. That classical procedure does not work here because we are not dealing with classical objects: in quantum physics additional terms arise when the wave functions are added and the probability is computed, a consequence of the ‘superposition principle’. The superposition principle can be illustrated with an analogy from simple mathematics: add two numbers and then take the square of their sum, as opposed to just adding the squares of the two numbers. Obviously, (2 + 3)2 is not equal to 22 + 32. The former is 25, and the latter is 13. In the language of quantum probability theory:
|Ψ1 + Ψ2|² ≠ |Ψ1|² + |Ψ2|²
where Ψ1 and Ψ2 are the individual wave functions. On the left-hand side, the superposition principle results in extra terms that are not found on the right-hand side. The left-hand side of the above relation is the way a quantum physicist would compute probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the above relation would not be there, and the peculiar wave-like interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. Thus, when we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
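The arithmetic of the two recipes can be made concrete with a pair of invented complex amplitudes; the point is only that the quantum expression contains a cross term that the classical sum lacks:

import numpy as np

# Two invented complex amplitudes standing for the two paths (slits).
psi1 = 0.6 * np.exp(1j * 0.0)
psi2 = 0.6 * np.exp(1j * np.pi / 3)

quantum = abs(psi1 + psi2)**2            # |psi1 + psi2|^2: includes the interference cross term
classical = abs(psi1)**2 + abs(psi2)**2  # sum of the separate probabilities: no interference

print(quantum)     # 1.08
print(classical)   # 0.72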
In order to give a full account of quantum recipes for computing probabilities, one has to examine what happens for events that are compounded. Compound events are events that can be broken down into a series of steps, or events that consist of a number of things happening independently. The recipe here calls for multiplying the individual wave functions, and then following the usual quantum recipe of taking the square of the amplitude.
The quantum recipe is |Ψ1 • Ψ2|², and, in this case, the result is the same as if we multiplied the individual probabilities, as one would in classical theory. Thus the recipes for computing results in quantum theory and classical physics can be totally different. Quantum superposition effects are completely non-classical, and there is no mathematical justification for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us in countless experiments to extend vastly our ability to co-ordinate experience with nature.
The view of probability in the nineteenth century was greatly conditioned and reinforced by classical assumptions about the relationship between physical theory and physical reality. In that century, physicists developed sophisticated statistics to deal with large ensembles of particles before the actual character of these particles was understood. Classical statistics, developed primarily by James C. Maxwell and Ludwig Boltzmann, was used to account for the behaviour of molecules in a gas and to predict the average speed of a gas molecule in terms of the temperature of the gas.
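The prediction mentioned here is a simple application of the Maxwell-Boltzmann distribution: the mean molecular speed is sqrt(8kT/πm). The sketch below, assuming nitrogen at room temperature as an arbitrary illustrative case, shows the calculation.

```python
import math

# Mean speed of a gas molecule from Maxwell-Boltzmann statistics:
# v_mean = sqrt(8 k T / (pi m)). Nitrogen at 300 K is an illustrative choice.
k_B = 1.380649e-23                     # Boltzmann constant, J/K
m_N2 = 28.0134e-3 / 6.02214076e23      # mass of one N2 molecule, kg
T = 300.0                              # temperature, K

v_mean = math.sqrt(8.0 * k_B * T / (math.pi * m_N2))
print(f"mean speed of N2 at {T:.0f} K: {v_mean:.0f} m/s")   # roughly 475 m/s
```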
The presumption was that the statistical averages were workable approximations that subsequent physical theories, or better experimental techniques, would refine into descriptions of precision and certainty. Since nothing was known about quantum systems, and since quantum indeterminacy is negligible when dealing with macro-level effects, this presumption was quite reasonable. We now know, however, that quantum mechanical effects are present in the behaviour of gases and that the choice to ignore them is merely a matter of convenience in getting workable or practical results. It is, therefore, no longer possible to assume that the statistical averages are merely higher-level approximations for a more exact description.
Perhaps the best-known defence of the classical conception of the relationship between physical theory and physical reality is the celebrated animal introduced by the Austrian physicist Erwin Schrödinger (1887-1961) in 1935, in a 'thought experiment' showing the strange nature of the world of quantum mechanics. The cat is thought of as locked in a box with a capsule of cyanide, which will break if a Geiger counter triggers. This will happen if an atom in a radioactive substance in the box decays, and there is a 50% chance of such an event within an hour; otherwise, the cat remains alive. The problem is that the system is in an indeterminate state. The wave function of the entire system is a 'superposition' of states, fully described by the probabilities of events occurring when it is eventually measured, and therefore 'contains equal parts of the living and dead cat'. When we look we will find either a breathing cat or a dead cat, but if it is only as we look that the wave packet collapses, quantum mechanics forces us to say that before we looked it was not true that the cat was dead and not true that it was alive. The thought experiment makes vivid the difficulty of conceiving of quantum indeterminacies when these are translated to the familiar world of everyday objects.
The 'electron' is a stable elementary particle having a negative charge, e, equal to:
1.602 189 25 × 10⁻¹⁹ C
and a rest mass, m0, equal to:
9.109 389 7 × 10⁻³¹ kg
equivalent to 0.511 0034 MeV/c².
It has a spin of ½ and obeys Fermi-Dirac statistics. As it does not take part in strong interactions, it is classified as a 'lepton'.
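As a quick consistency check, the rest-energy equivalent quoted above follows from m0c². A minimal sketch, using only the values quoted in the text plus the standard value of c:

```python
# Check of the electron rest-energy equivalent m0*c^2, using the values quoted above.
m0 = 9.1093897e-31        # electron rest mass, kg (value quoted in the text)
c = 2.99792458e8          # speed of light, m/s
e = 1.60218925e-19        # electron charge magnitude, C (value quoted in the text)

rest_energy_J = m0 * c ** 2
rest_energy_MeV = rest_energy_J / e / 1e6   # convert joules to MeV
print(f"m0*c^2 = {rest_energy_MeV:.4f} MeV")  # about 0.511 MeV, as quoted
```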
The discovery of the electron was reported in 1897 by Sir J. J. Thomson, following his work on the rays from the cold cathode of a gas-discharge tube. It was soon established that particles with the same charge and mass were obtained from numerous substances by the 'photoelectric effect', 'thermionic emission' and 'beta decay'. Thus, the electron was found to be a constituent of all atoms, molecules, and crystals.
Free electrons are studied in a vacuum or in a gas at low pressure, where beams are emitted from hot filaments or cold cathodes and are subjected to 'focussing', so that the particles travel in a narrow beam, as in, for example, a cathode-ray tube. The principal methods are: (i) electrostatic focussing, in which the beam is made to converge by the action of electrostatic fields between two or more electrodes at different potentials. The electrodes are commonly cylinders coaxial with the electron tube, and the whole assembly forms an electrostatic electron lens. The focussing effect is usually controlled by varying the potential of one of the electrodes, called the focussing electrode. (ii) Electromagnetic focussing, in which the beam is made to converge by the action of a magnetic field produced by the passage of direct current through a focussing coil. The latter is commonly a coil of short axial length mounted so as to surround the electron tube and to be coaxial with it.
The force FE on an electron in an electric field of strength E is given by FE = Ee; because of the electron's negative charge, it acts in the direction opposite to the field. On moving through a potential difference V, the electron acquires a kinetic energy eV; hence it is possible to obtain beams of electrons of accurately known kinetic energy. In a magnetic field of magnetic flux density B, an electron with speed v is subject to a force FB = Bev sin θ, where θ is the angle between B and v. This force acts at right angles to the plane containing B and v.
The mass of any particle increases with speed according to the theory of relativity. If an electron is accelerated from rest through 5 kV, its mass is 1% greater than it is at rest. Thus account must be taken of relativity in calculations on electrons with quite moderate energies.
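The 1% figure can be checked directly: for an electron accelerated through V volts the relativistic factor is γ = 1 + eV/(m0c²). A minimal sketch, assuming the rest energy of 0.511 MeV quoted earlier:

```python
# Relativistic mass increase for an electron accelerated from rest through V volts:
# the kinetic energy is e*V, so gamma = 1 + e*V / (m0*c^2).
m0_c2_eV = 0.511e6        # electron rest energy in eV (as quoted earlier)
V = 5.0e3                 # accelerating potential, volts

gamma = 1.0 + V / m0_c2_eV
print(f"gamma = {gamma:.4f}  ->  mass about {100*(gamma - 1):.1f}% greater than at rest")
# Prints roughly 1%, in agreement with the figure quoted above.
```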
According to 'wave mechanics' a particle with momentum mv exhibits diffraction and interference phenomena, similar to a wave with wavelength λ = h/mv, where h is the Planck constant. For electrons accelerated through a few hundred volts, this gives wavelengths rather less than typical interatomic spacings in crystals. Hence, a crystal can act as a diffraction grating for electron beams.
Owing to the fact that electrons are associated with a wavelength λ given by λ = h/mv, where h is the Planck constant and mv the momentum of the electron, a beam of electrons suffers diffraction in its passage through crystalline material, similar to that experienced by a beam of X-rays. The diffraction pattern depends on the spacing of the crystal planes, and the phenomenon can be employed to investigate the structure of surfaces and other films. The set of waves that represents the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice) has the wavelength given by this 'de Broglie equation'. Such waves are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point.
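The claim that a few hundred volts gives wavelengths comparable to (or somewhat less than) interatomic spacings can be checked with λ = h/mv, using the non-relativistic momentum mv = sqrt(2meV). The accelerating voltages below are illustrative choices:

```python
import math

# De Broglie wavelength lambda = h / (m*v) for an electron accelerated through V volts.
# Non-relativistic momentum m*v = sqrt(2*m*e*V) is adequate at a few hundred volts.
h = 6.62607015e-34        # Planck constant, J s
m = 9.1093897e-31         # electron mass, kg
e = 1.60218925e-19        # electron charge, C

for V in (100.0, 300.0, 500.0):
    wavelength = h / math.sqrt(2.0 * m * e * V)
    print(f"V = {V:5.0f} V  ->  lambda = {wavelength * 1e12:6.1f} pm")
# About 120 pm at 100 V, i.e. comparable to or rather less than typical
# interatomic spacings (a few hundred pm), so crystals diffract electron beams.
```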
In the first experiment to demonstrate 'electron diffraction', and hence the wavelike nature of particles, a narrow pencil of electrons from a hot filament cathode was projected in vacuo onto a nickel crystal. The experiment showed the existence of a definite diffracted beam at one particular angle, which depended on the velocity of the electrons. Assuming this to be the Bragg angle (the structure of a crystal can be determined from the set of interference patterns found at various angles from the different crystal faces), the wavelength of the electrons was calculated and found to be in agreement with the 'de Broglie equation'.
At kinetic energies less than a few electronvolts, electrons undergo elastic collisions with atoms and molecules. Because of the large ratio of the masses and the conservation of momentum, only an extremely small transfer of kinetic energy occurs; thus the electrons are deflected but not slowed appreciably. At higher energies collisions are inelastic: molecules may be dissociated, and atoms and molecules may be excited or ionized. The ionization energy is the least energy that causes the ionization:
A ➝ A+ + e‒
where the ion and the electron are far enough apart for their electrostatic interaction to be negligible and neither carries any extra kinetic energy. The electron removed is that in the outermost orbit, i.e., the least strongly bound electron. Removal of electrons from inner orbits, in which their binding energy is greater, is also possible. As excited particles or recombining ions return to lower states, they emit electromagnetic radiation, mostly in the visible or ultraviolet.
For electron energies of the order of several keV upwards, X-rays are generated. Electrons of high kinetic energy travel considerable distances through matter, leaving a trail of positive ions and free electrons. The energy is mostly lost in small increments (about 30 eV), with only an occasional major interaction causing X-ray emission. The range increases at higher energies. The positron is the antiparticle of the electron, i.e., an elementary particle with electron mass and positive charge equal to that of the electron. According to the relativistic wave mechanics of Dirac, space contains a continuum of electrons in states of negative energy. These states are normally unobservable, but if sufficient energy can be given, an electron may be raised into a state of positive energy and become observable. The vacant state of negative energy behaves as a positive particle of positive energy, which is observed as a positron.
The simultaneous formation of a positron and an electron from a photon is called 'pair production', and occurs when a gamma-ray photon with an energy of at least 1.02 MeV passes close to an atomic nucleus. In the reverse process, annihilation, the interaction between a particle and its antiparticle causes both to disappear, with photons or other elementary particles and antiparticles created in accordance with energy and momentum conservation.
At low energies, an electron and a positron annihilate to produce electromagnetic radiation. Usually the particles have little kinetic energy or momentum in the laboratory system before interaction; hence the total energy of the radiation is nearly 2m0c², where m0 is the rest mass of an electron. In nearly all cases two photons are generated, each of 0.511 MeV, in almost exactly opposite directions to conserve momentum. Occasionally, three photons are emitted, all in the same plane. Electron-positron annihilation at high energies has been extensively studied in particle accelerators. Generally, the annihilation results in the production of a quark and an antiquark or of a charged lepton plus an antilepton (e.g., e+e‒ ➝ μ+μ‒). The quarks and antiquarks do not appear as free particles but convert into several hadrons, which can be detected experimentally. As the energy available in the electron-positron interaction increases, quarks and leptons of progressively larger rest mass can be produced. In addition, striking resonances are present, which appear as large increases in the rate at which annihilations occur at particular energies. The J/ψ particle and similar resonances containing a charm quark and antiquark are produced at an energy of about 3 GeV, for example, giving rise to abundant production of charmed hadrons. Bottom (b) quark production occurs at energies greater than about 10 GeV. A resonance at an energy of about 90 GeV, due to the production of the Z0 gauge boson involved in the weak interaction, is currently under intensive study at the LEP and SLC e+e‒ colliders. Accelerators are machines for increasing the kinetic energy of charged particles or ions, such as protons or electrons, by accelerating them in an electric field. A magnetic field is used to maintain the particles in the desired direction. The particles can travel in straight, spiral, or circular paths. At present, the highest energies are obtained in the proton synchrotron.
The Super Proton Synchrotron at CERN (Geneva) accelerates protons to 450 GeV. It can also cause proton-antiproton collisions with total kinetic energy, in centre-of-mass co-ordinates, of 620 GeV. In the USA, the Fermi National Accelerator Laboratory proton synchrotron gives protons and antiprotons of 800 GeV, permitting collisions with total kinetic energy of 1600 GeV. The Large Electron-Positron (LEP) collider at CERN accelerates particles to 60 GeV.
All the aforementioned devices are designed to produce collisions between particles travelling in opposite directions. This gives effectively much higher energies available for interaction than are possible with stationary targets. High-energy reactions occur when the particles collide, and the particles created in these reactions are detected by sensitive equipment close to the collision site. New particles, including the tauon and the W and Z particles, which require enormous energies for their creation, have been detected and their properties determined.
A 'nucleon' and an 'antinucleon' annihilating at low energy produce about half a dozen pions, which may be neutral or charged. By definition, mesons are both hadrons and bosons; the pion and kaon are examples of mesons. Mesons have a substructure composed of a quark and an antiquark bound together by the exchange of particles known as gluons.
The antiparticle is the conjugate particle that corresponds to another particle of identical mass and spin, but has quantum numbers such as charge (Q), baryon number (B), strangeness (S), charm (C), and isospin (I3) of equal magnitude but opposite sign. Examples of a particle and its antiparticle include the electron and positron, the proton and antiproton, the positively and negatively charged pions, and the 'up' quark and 'up' antiquark. The antiparticle corresponding to a particle with the symbol a is usually denoted ā. When a particle and its antiparticle are identical, as with the photon and the neutral pion, it is called a 'self-conjugate particle'.
The critical potential, or excitation energy, required to change an atom or molecule from one quantum state to another of higher energy is equal to the difference in energy of the states, and is usually the difference in energy between the ground state of the atom and a specified excited state, an excited state being the state of a system, such as an atom or molecule, when it has a higher energy than its ground state.
The ground state is the state of a system with the lowest energy; an isolated body will remain in it indefinitely. It is possible for a system to possess two or more ground states, of equal energy but with different sets of quantum numbers. In the case of atomic hydrogen there are two such states, for which the quantum numbers n, l, and m are 1, 0, and 0 respectively, while the spin may be +½ or ‒½ with respect to a defined direction. An allowed wave function of an electron in an atom is obtained by a solution of the 'Schrödinger wave equation'. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e²/r, where e is the electron charge and r its distance from the nucleus. A precise orbit cannot be considered, as in Bohr's theory of the atom; instead the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that
|Ψ|² dt is the probability of locating the electron in the element of volume dt.
Solution of Schrödinger's equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the earlier quantum theory of the atom: n, the principal quantum number, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called 'shells' and are designated the K, L, M shells, etc. l, the azimuthal quantum number, can for a given value of n have values of 0, 1, 2, . . . (n ‒ 1). An electron in the L shell of an atom, with n = 2, can thus occupy two sub-shells of different energy, corresponding to l = 0 and l = 1. Orbitals with l = 0, 1, 2 and 3 are called s, p, d, and ƒ orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:
√[l(l + 1)] (h/2π).
m, the magnetic quantum number, can for a given value of l have values of ‒l, . . ., 0, . . ., +l; for a p orbital (l = 1), for example, m may be ‒1, 0, or +1. These orbitals, with the same values of n and l but different m values, have the same energy. The significance of this quantum number is that it indicates the number of different levels that would be produced if the atom were subjected to an external magnetic field.
According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of about 5 × 10⁻¹¹ metre, the maximum probability occurring when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron.
Finally, the electron in an atom can have a fourth quantum number, ms, characterizing its spin direction. This can be +½ or ‒½, and according to the 'Pauli exclusion principle' each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
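The counting behind that explanation can be made explicit. The short sketch below simply enumerates the allowed combinations of (n, l, m, ms) described above and shows that each shell holds 2n² electrons; the loop ranges encode the rules just stated and nothing more.

```python
# Enumeration of the allowed quantum numbers (n, l, m, ms) described above,
# showing that each shell n holds 2*n^2 electrons (Pauli exclusion principle).
ORBITAL_LETTERS = "spdf"

for n in range(1, 4):                      # shells K, L, M
    states = []
    for l in range(0, n):                  # l = 0, 1, ..., n-1
        for m in range(-l, l + 1):         # m = -l, ..., +l
            for ms in (+0.5, -0.5):        # spin up / spin down
                states.append((n, l, m, ms))
    subshells = ", ".join(f"{n}{ORBITAL_LETTERS[l]}" for l in range(0, n))
    print(f"n = {n}: subshells {subshells}; capacity = {len(states)} electrons (2n^2 = {2 * n * n})")
```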
Earlier mention of the 'moment' has referred to such quantities as the moment of inertia and the moment of momentum. The moment of a force about an axis is the product of the perpendicular distance of the axis from the line of action of the force and the component of the force in the plane perpendicular to the axis. The moment of a system of coplanar forces about an axis perpendicular to the plane containing them is the algebraic sum of the moments of the separate forces about that axis; anticlockwise moments are conventionally taken to be positive and clockwise ones negative. The moment of momentum about an axis, symbol L, is the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a pseudo-vector quantity and is conserved in an isolated system; about an axis it is a scalar, and is given a positive or negative sign as in the case of the moment of a force. When dealing with systems in which forces and motions do not all lie in one plane, the concept of the moment about a point is needed. The moment of a vector P (e.g., force or momentum) about a point A is a pseudo-vector M equal to the vector product of r and P, where r is any line joining A to any point B on the line of action of P. The vector product M = r × P is independent of the position of B, and the relation between the scalar moment about an axis and the vector moment about a point on the axis is that the scalar is the component of the vector in the direction of the axis.
The linear momentum of a particle, p, is the product of the mass and the velocity of the particle. It is a vector quantity directed through the particle in the direction of motion. The linear momentum of a body or of a system of particles is the vector sum of the linear momenta of the individual particles. If a body of mass M is translated with a velocity V, its momentum is MV, which is also the momentum of a particle of mass M at the centre of gravity of the body. (1) In any system of mutually interacting or impinging particles, the linear momentum in any fixed direction remains unaltered unless there is an external force acting in that direction. (2) Similarly, the angular momentum is constant in the case of a system rotating about a fixed axis provided that no external torque is applied.
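These definitions translate directly into vector arithmetic. A minimal sketch of the moment of momentum about a point, L = r × p, for a particle with purely illustrative values of mass, position, and velocity:

```python
import numpy as np

# Moment of momentum (angular momentum) about a point as a vector product,
# L = r x p, for a particle with hypothetical position, mass and velocity.
m = 2.0                                  # mass, kg (illustrative)
r = np.array([1.0, 0.0, 0.0])            # position vector from the chosen point, m
v = np.array([0.0, 3.0, 0.0])            # velocity, m/s

p = m * v                                # linear momentum, kg m/s
L = np.cross(r, p)                       # angular momentum about the point, kg m^2/s
print("p =", p, " L =", L)
# The scalar moment about the z-axis is the z component of L, here 6.0 kg m^2/s.
```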
Subatomic particles fall into two major groups: the elementary particles and the hadrons. An elementary particle is not composed of any smaller particles and therefore represents the most fundamental form of matter. A hadron is composed of smaller particles, including the particles called quarks. The most common of the subatomic particles include the major constituents of the atom: the electron, which is an elementary particle, and the proton and the neutron, which are hadrons. The neutron is a particle with zero charge and a rest mass equal to:
1.674 9542 × 10⁻²⁷ kg,
i.e., 939.5729 MeV/c².
It is a constituent of every atomic nucleus except that of ordinary hydrogen; free neutrons decay by 'beta decay' with a mean life of 914 s. The neutron has spin ½, isospin ½, and positive parity. It is a 'fermion' and is classified as a 'hadron' because it has strong interactions.
Neutrons can be ejected from nuclei by high-energy particles or photons; the energy required is usually about 8 MeV, although it is sometimes less. Nuclear fission is the most productive source. Neutrons are detected using all the normal detectors of ionizing radiation, by means of the secondary particles produced in nuclear reactions. The discovery of the neutron (Chadwick, 1932) involved the detection of the tracks of protons ejected by neutrons through elastic collisions in hydrogenous materials.
Unlike other nuclear particles, neutrons are not repelled by the electric charge of a nucleus, so they are very effective in causing nuclear reactions. When there is no 'threshold energy', the interaction 'cross sections' become very large at low neutron energies, and the thermal neutrons produced in great numbers by nuclear reactors cause nuclear reactions on a large scale. The capture of neutrons by the (n, γ) process produces large quantities of radioactive materials, both useful nuclides such as 60Co for cancer therapy and undesirable by-products. The threshold energy is the least energy required to cause a certain process, in particular a reaction in nuclear or particle physics; it is often important to distinguish between the energies required in the laboratory and in centre-of-mass co-ordinates. 'Fission' is the splitting of a heavy nucleus of an atom into two or more fragments of comparable size, usually as the result of the impact of a neutron on the nucleus. It is normally accompanied by the emission of neutrons or gamma rays. Plutonium, uranium, and thorium are the principal fissionable elements.
A nuclear reaction is a reaction between an atomic nucleus and a bombarding particle or photon leading to the creation of a new nucleus and the possible ejection of one or more particles. Nuclear reactions are often represented by enclosing in brackets the symbols for the incoming and outgoing light particles, with the initial and final nuclides shown outside the brackets. For example:
14N ( α, p )17O.
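In this notation the mass number A and the charge number Z must balance between the incoming and outgoing particles, which can be checked by simple bookkeeping; the sketch below does exactly that for the reaction quoted above.

```python
# Bookkeeping check for the reaction 14N(alpha, p)17O: mass number A and
# charge number Z must balance between incoming and outgoing particles.
particles = {"14N": (14, 7), "alpha": (4, 2), "p": (1, 1), "17O": (17, 8)}  # (A, Z)

A_in = particles["14N"][0] + particles["alpha"][0]
Z_in = particles["14N"][1] + particles["alpha"][1]
A_out = particles["p"][0] + particles["17O"][0]
Z_out = particles["p"][1] + particles["17O"][1]

print(f"A: {A_in} -> {A_out},  Z: {Z_in} -> {Z_out}")   # both balance: 18 -> 18, 9 -> 9
assert (A_in, Z_in) == (A_out, Z_out)
```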
Energy from nuclear fission arises, in gross effect, because the nuclei of atoms of moderate size are more tightly held together than the largest nuclei, so that if the nucleus of a heavy atom can be induced to split into two nuclei of moderate mass, there should be a considerable release of energy. By Einstein's law of the equivalence of mass and energy, this mass difference is equivalent to the energy released when the nucleons bind together; this energy is the binding energy. The graph of binding energy per nucleon, EB/A, increases rapidly up to a mass number of 50-60 (iron, nickel, etc.) and then decreases slowly. There are therefore two ways in which energy can be released from a nucleus, both of which entail a rearrangement of nuclei from the less tightly bound parts of the curve towards the more tightly bound region: fission, the splitting of heavy atoms, such as uranium, into lighter atoms, accompanied by an enormous release of energy, and fusion of light nuclei, such as deuterium and tritium, which releases an even greater quantity of energy per unit mass.
The electron affinity concerns the attachment of a free electron to an atom or molecule to form a negative ion. The process is sometimes called 'electron capture', but that term is more usually applied to nuclear processes. Many atoms, molecules and free radicals form stable negative ions by capturing electrons. The electron affinity is the least amount of work that must be done to separate the electron from the ion. It is usually expressed in electronvolts.
The uranium isotope 235U will readily accept a neutron, but one-seventh of the resulting nuclei are stabilized by gamma emission while six-sevenths split into two parts. Most of the energy released, amounting to about 170 MeV, is in the form of the kinetic energy of these fission fragments. In addition, an average of 2.5 neutrons of average energy 2 MeV and some gamma radiation are produced. Further energy is released later by radioactivity of the fission fragments. The total energy released is about 3 × 10⁻¹¹ joule per atom fissioned, i.e., 6.5 × 10¹³ joule per kg consumed.
To extract energy in a controlled manner from fissionable nuclei, arrangements must be made for a sufficient proportion of the neutrons released in the fissions to cause further fissions in their turn, so that the process is continuous; the minimum mass of a fissile material that will sustain a chain reaction is called the critical mass. A reactor with a large proportion of 235U or plutonium-239 (239Pu) in the fuel uses the fast neutrons as they are liberated from the fission; such a reactor is called a 'fast reactor'. Natural uranium contains only 0.7% of 235U, but if the liberated neutrons can be slowed before they have much chance of meeting the more common 238U atoms, they can cause further fissions of 235U. To slow the neutrons, a moderator is used containing light atoms, to which the neutrons give up kinetic energy by collision. As the neutrons eventually acquire energies appropriate to gas molecules at the temperature of the moderator, they are then said to be thermal neutrons and the reactor is a thermal reactor.
In typical thermal reactors, the fuel elements are rods embedded as a regular array in the bulk of the moderator, so that a typical neutron from a fission process has a good chance of escaping from the narrow fuel rod and making many collisions with nuclei in the moderator before again entering a fuel element. Suitable moderators are pure graphite, heavy water (D2O), and ordinary water (H2O); the water moderators are sometimes also used as coolants. Very pure materials are essential, as some unwanted nuclei capture neutrons readily. The reactor core is surrounded by a reflector made of suitable material to reduce the escape of neutrons from the surface. Each fuel element is encased, e.g., in magnesium alloy or stainless steel, to prevent the escape of radioactive fission products. The coolant, which may be gaseous or liquid, flows along the channels over the canned fuel elements. There is an emission of gamma rays inherent in the fission process, and many of the fission products are intensely radioactive. To protect personnel, the assembly is surrounded by a massive biological shield of concrete, with an inner iron thermal shield to protect the concrete from high temperatures caused by absorption of radiation.
To keep the power production steady, control rods are moved in or out of the assembly. These contain material that captures neutrons readily, e.g., cadmium or boron. The power production can be held steady by allowing the currents in suitably placed ionization chambers automatically to modify the settings of the rods. Further absorbent rods, the shut-down rods, are driven into the core to stop the reaction in an emergency, for example if the control mechanism fails. To attain high thermodynamic efficiency, so that a large proportion of the liberated energy can be used, the heat should be extracted from the reactor core at a high temperature.
In fast reactors no moderator is used, the frequency of collisions between neutrons and fissile atoms being increased by enriching the natural uranium fuel with 239Pu or additional 235U atoms, which are fissioned by fast neutrons. The fast neutrons thus build up a self-sustaining chain reaction. In these reactors the core is usually surrounded by a blanket of natural uranium into which some of the neutrons are allowed to escape. Under suitable conditions some of these neutrons will be captured by 238U atoms, forming 239U atoms, which are converted to 239Pu. As more plutonium can be produced than is required to enrich the fuel in the core, such reactors are called 'fast breeder reactors'.
The neutrino is a neutral elementary particle with spin ½ that takes part only in weak interactions. It is a lepton and exists in three types corresponding to the three types of charged leptons: the electron neutrino (νe), the muon neutrino (νμ) and the tauon neutrino (ντ). The antiparticle of the neutrino is the antineutrino.
Neutrinos were originally thought to have zero mass, but recently indirect experimental evidence has suggested the contrary. In 1985 a Soviet team reported, for the first time, a measurement of a non-zero neutrino mass. The mass measured was extremely small, some 10 000 times smaller than the mass of the electron; however, subsequent attempts to reproduce the Soviet measurement were unsuccessful. More recently (1998-99), the Super-Kamiokande experiment in Japan has provided indirect evidence for massive neutrinos. The new evidence is based upon studies of neutrinos created when highly energetic cosmic rays bombard the earth's upper atmosphere. By classifying the interactions of these neutrinos according to the type of neutrino involved (an electron neutrino or a muon neutrino), and counting their relative numbers as a function of the distance they have travelled, an oscillatory behaviour may be shown to occur. Oscillation in this sense is the changing back and forth of the neutrino's type as it travels through space or matter. The Super-Kamiokande result indicates that muon neutrinos are changing into another type of neutrino, e.g., sterile neutrinos. The experiment does not, however, determine the masses directly, though the oscillations suggest very small differences in mass between the oscillating types.
The neutrino was first postulated (Pauli, 1930) to explain the continuous spectrum of beta rays. It is assumed that there is the same amount of energy available for each beta decay of a particular nuclide and that this energy is shared according to a statistical law between the electron and a light neutral particle, now classified as the antineutrino, ν̄e. Later it was shown that the postulated particle would also account for the conservation of angular momentum and linear momentum in beta decays.
In addition to beta decay, the electron neutrino is also associated with, for example, positron decay and electron capture:
22Na → 22Ne + e+ + νe
55Fe + e‒ → 55Mn + νe
The absorption of anti-neutrinos in matter by the process
1H + ν̄e ➝ n + e+
was first demonstrated by Reines and Cowan. The muon neutrino is generated in such processes as:
π+ → μ+ + νμ
Although the interactions of neutrinos are extremely weak, the cross sections increase with energy, and reactions can be studied at the enormous energies available with modern accelerators. In some forms of 'grand unified theories', neutrinos are predicted to have a non-zero mass; until recently, however, no confirmed evidence had been found to support this prediction.
String theory is a theory of elementary particles based on the idea that the fundamental entities are not point-like particles but finite lines (strings), or closed loops formed by strings. The original idea was that an elementary particle was the result of a standing wave in a string. A considerable amount of theoretical effort has been put into the development of string theories. In particular, combining the idea of strings with that of supersymmetry has led to the idea of superstrings. This theory may be a more useful route to a unified theory of fundamental interactions than quantum field theory, because it probably avoids the infinities that arise when gravitational interactions are introduced into field theories. Thus, superstring theory inevitably leads to particles of spin 2, identified as gravitons. String theory also shows why particles violate parity conservation in weak interactions.
Superstring theories involve the idea of higher-dimensional spaces: 10 dimensions for fermions and 26 dimensions for bosons. It has been suggested that there are the normal four space-time dimensions, with the extra dimensions being tightly 'curled up'. Still, there is no direct experimental evidence for superstrings. They are thought to have a length of about 10⁻³⁵ m and energies of 10¹⁴ GeV, which is well above the energy of any accelerator. An extension of the theory postulates that the fundamental entities are not one-dimensional but two-dimensional, i.e., they are supermembranes.
A symmetry of a system is a set of invariances of that system; a symmetry operation on a system is an operation that does not change the system. Symmetry is studied mathematically using 'group theory'. Some symmetries are directly physical, for instance reflections and rotations of molecules and translations in crystal lattices. More abstract symmetries involve changing properties, as in the CPT theorem and the symmetries associated with 'gauge theory'. Gauge theories are now thought to provide the basis for a description of all elementary particle interactions. The electromagnetic interactions are described by quantum electrodynamics, which is an Abelian gauge theory.
A gauge theory is a quantum field theory in which measurable quantities remain unchanged under a 'group transformation'. Non-Abelian gauge theories, based on work proposed by Yang and Mills in 1954, describe the interactions between quantum fields of fermions, in which particles are represented by fields whose normal modes of oscillation are quantized. Elementary particle interactions are described by relativistically invariant theories of quantized fields, i.e., by relativistic quantum field theories. Gauge transformations can take the form of a simple multiplication by a constant phase; such transformations are called 'global gauge transformations'. In local gauge transformations, the phase of the fields is altered by amounts that vary with space and time, i.e.,
Ψ ➝ e^iθ(χ) Ψ,
where θ(χ) is a function of space and time. In Abelian gauge theories, consecutive field transformations commute, i.e.,
Ψ ➝ e^iθ(χ) e^iφ(χ) Ψ = e^iφ(χ) e^iθ(χ) Ψ,
where φ(χ) is another function of space and time. Quantum chromodynamics (the theory of the strong interaction) and the electroweak and grand unified theories are all non-Abelian: in these theories consecutive field transformations do not commute. All non-Abelian gauge theories are based on work proposed by Yang and Mills. Einstein's theory of general relativity can also be formulated as a local gauge theory.
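The distinction between commuting and non-commuting transformations can be made concrete with a small numerical sketch. The example below, a loose illustration rather than a gauge calculation, contrasts U(1) phase factors (which always commute) with 2×2 matrix transformations built from different Pauli matrices (which generally do not); the angles are arbitrary illustrative values.

```python
import numpy as np

# Abelian versus non-Abelian transformations, in the spirit of the relations above.
theta, phi = 0.7, 1.3

# Abelian case: multiplication by constant phases commutes.
u1, u2 = np.exp(1j * theta), np.exp(1j * phi)
print("Abelian (phases) commute:", np.isclose(u1 * u2, u2 * u1))          # True

# Non-Abelian case: exp(i*a*sigma) = cos(a)*I + i*sin(a)*sigma, since sigma^2 = I.
I2 = np.eye(2, dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
U1 = np.cos(theta) * I2 + 1j * np.sin(theta) * sigma_x
U2 = np.cos(phi) * I2 + 1j * np.sin(phi) * sigma_y
print("Non-Abelian (matrices) commute:", np.allclose(U1 @ U2, U2 @ U1))   # False: order matters
```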
Supersymmetry is a symmetry including both bosons and fermions: in theories based on supersymmetry every boson has a corresponding fermion and every fermion a corresponding boson. The boson partners of existing fermions have names formed by prefixing the name of the fermion with an 's' (e.g., selectron, squark, slepton). The names of the fermion partners of existing bosons are obtained by changing the terminal '-on' of the boson to '-ino' (e.g., photino, gluino, and zino). Although supersymmetric partners have not been observed experimentally, supersymmetry may prove important in the search for a unified field theory of the fundamental interactions.
The quark is a fundamental constituent of hadrons, i.e., of particles that take part in strong interactions. Quarks are never seen as free particles, which is substantiated by the lack of experimental evidence for isolated quarks. The explanation given for this phenomenon by the gauge theory that describes quarks, known as quantum chromodynamics, is that the quark interaction becomes weaker as quarks come closer together and falls to zero when the distance between them is zero. The converse of this proposition is that the attractive forces between quarks become stronger as they move apart; as this process has no limit, quarks can never separate from each other. In some theories it is postulated that at very high temperatures, as might have prevailed in the early universe, quarks can separate; the temperature at which this occurs is called the 'deconfinement temperature'. Nevertheless, the existence of quarks has been demonstrated in high-energy scattering experiments and by symmetries in the properties of observed hadrons. They are regarded as elementary fermions, with spin ½, baryon number ⅓, strangeness 0 or ‒1, and charm 0 or +1. They are classified in six flavours: up (u), charm (c) and top (t), each with charge ⅔ of the proton charge, and down (d), strange (s) and bottom (b), each with ‒⅓ of the proton charge. Each type has an antiquark with reversed signs of charge, baryon number, strangeness, and charm. The top quark has not been observed experimentally, but there are strong theoretical arguments for its existence. The top quark mass is known to be greater than about 90 GeV/c².
The fractional charges of quarks are never observed in hadrons, since the quarks form combinations in which the sum of their charges is zero or integral. Hadrons can be either baryons or mesons; essentially, baryons are composed of three quarks while mesons are composed of a quark-antiquark pair. These constituents are bound together within the hadron by the exchange of particles known as gluons. Gluons are neutral massless gauge bosons; in the quantum field theory of strong interactions, the gluon is the analogue of the photon of the quantum field theory of electromagnetic interactions, with a quantum number known as 'colour' replacing that of electric charge. Each quark type (or flavour) comes in three colours (red, blue and green, say), where colour is simply a convenient label and has no connection with ordinary colour. Unlike the photon of quantum electrodynamics, which is electrically neutral, gluons in quantum chromodynamics carry colour and can therefore interact with themselves. Particles that carry colour are believed not to be able to exist as free particles. Instead, quarks and gluons are permanently confined inside hadrons (strongly interacting particles, such as the proton and the neutron).
The gluon self-interaction leads to the property known as 'asymptotic freedom', in which the interaction strength for the strong interaction decreases as the momentum transfer involved in an interaction increases. This allows perturbation theory to be used and quantitative comparisons to be made with experiment, similar to, but less precise than, those of quantum electrodynamics. Quantum chromodynamics is being tested successfully in high-energy muon-nucleon scattering experiments and in proton-antiproton and electron-positron collisions at high energies. Strong evidence for the existence of colour comes from measurements of the interaction rates for e+e‒ ➝ hadrons and e+e‒ ➝ μ+μ‒. The relative rate for these two processes is a factor of three larger than would be expected without colour; this factor measures directly the number of colours, i.e., three for each quark flavour.
The quarks and antiquarks with zero strangeness and zero charm are the u, d, ū and d̄. They form the combinations:
proton (uud), antiproton (ūūd̄)
neutron (udd), antineutron (ūd̄d̄)
pions: π+ (ud̄), π‒ (ūd), π0 (a uū, dd̄ mixture).
The charge and spin of these particles are the sums of the charge and spin of the component quarks and antiquarks.
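The charge sums can be tabulated mechanically. The sketch below adds the quark charges (u = +⅔, d = ‒⅓, antiquarks reversed) for the combinations listed above; exact fractions are used so the integral totals are evident.

```python
from fractions import Fraction

# Charges of the hadrons listed above as sums of their quark charges
# (in units of the proton charge). "~" marks an antiquark.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
          "u~": Fraction(-2, 3), "d~": Fraction(1, 3)}

hadrons = {
    "proton":     ["u", "u", "d"],
    "neutron":    ["u", "d", "d"],
    "antiproton": ["u~", "u~", "d~"],
    "pi+":        ["u", "d~"],
    "pi-":        ["u~", "d"],
}

for name, quarks in hadrons.items():
    total = sum(CHARGE[q] for q in quarks)
    print(f"{name:11s} {'+'.join(quarks):11s} charge = {total}")
# Every total is 0 or +/-1: the fractional quark charges never appear in hadrons.
```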
In the strange baryons, e.g., the Λ and Σ, one of the quarks is strange; in the strange mesons, either the quark or the antiquark is strange. Similarly, the presence of one or more c quarks leads to the charmed baryons, and a c or c̄ to the charmed mesons. It has been found useful to introduce a further subdivision of quarks, each flavour coming in three colours (red, green, blue). Colour as used here serves simply as a convenient label and is unconnected with ordinary colour. A baryon comprises a red, a green, and a blue quark, and a meson comprises a red and antired, a blue and antiblue, or a green and antigreen quark and antiquark. In analogy with combinations of the three primary colours of light, hadrons carry no net colour, i.e., they are 'colourless' or 'white'. Only colourless objects can exist as free particles.
The central feature of quantum field theory is that the essential reality is a set of fields subject to the rules of special relativity and quantum mechanics; all else is derived as a consequence of the quantum dynamics of those fields. The quantization of fields is essentially an exercise in which we use complex mathematical models to analyse the field in terms of its associated quanta. Material reality as we know it in quantum field theory is constituted by the transformation and organization of fields and their associated quanta. Hence, this reality reveals a fundamental complementarity between particles, which are localized in space-time, and fields, which are not. In modern quantum field theory, all matter is composed of six strongly interacting quarks and six weakly interacting leptons. The six quarks are called up, down, charmed, strange, top, and bottom, and have different rest masses and fractional charges. The up and down quarks combine through the exchange of gluons to form protons and neutrons.
The 'lepton' belongs to a class of elementary particles that do not take part in strong interactions. Leptons have no substructure of quarks and are considered indivisible. They are all fermions, and are categorized into six distinct types: the electron, muon, and tauon, which are all identically charged but differ in mass, and the three neutrinos, which are all neutral and thought to be massless or nearly so. In their interactions the leptons appear to observe boundaries that define three families, each composed of a charged lepton and its neutrino. The families are distinguished mathematically by three quantum numbers, Le, Lμ, and Lτ, called lepton numbers. In weak interactions the totals of Le, Lμ and Lτ for the individual particles are conserved.
In quantum field theory, potential vibrations at each point in the four fields are capable of manifesting themselves, in their complementarity, as individual particles. The interactions of the fields result from the exchange of quanta that are carriers of the fields. The carriers of the fields, known as messenger quanta, are the 'coloured' gluons for the strong force, the photon for electromagnetism, the intermediate bosons for the weak force, and the graviton for gravitation. If we could re-create the energies present in the first trillionths of trillionths of a second in the life of the universe, these four fields would, according to quantum field theory, become one fundamental field.
The movement toward a unified theory has evolved progressively from supersymmetry to supergravity to string theory. In string theory the one-dimensional trajectories of particles, illustrated in Feynman diagrams, are replaced by the two-dimensional orbits of a string. In addition to introducing the extra dimension, represented by the small diameter of the string, string theory also features another small but non-zero constant, which is analogous to Planck's quantum of action. Since the value of the constant is quite small, it can generally be ignored except at extremely small dimensions. Still, since the constant, like Planck's constant, is not zero, this results in departures from ordinary quantum field theory at very small dimensions.
Part of what makes string theory attractive is that it eliminates, or 'transforms away', the inherent infinities found in the quantum theory of gravity. If the predictions of this theory are proven valid in repeatable experiments under controlled conditions, it could allow gravity to be unified with the other three fundamental interactions. Nevertheless, even if string theory leads to this grand unification, it will not alter our understanding of wave-particle duality. While the success of the theory would reinforce our view of the universe as a unified dynamic process, it applies to very small dimensions and therefore does not alter our view of wave-particle duality.
While the formalism of quantum physics predicts that correlations between particles over space-like separations are possible, it can say nothing about what this strange new relationship between parts (quanta) and whole (cosmos) means outside this formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be 'mutually adaptive and complementary to one another.'
Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts constituting the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, 'is nothing really in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.'
In a genuine whole, the relationship between the constituent parts must be 'internal or immanent' in the parts, as opposed to a more spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly constitute the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relations between parts and whole in modern biology.
Modern physics also reveals, claimed Harris, a complementary relationship between the differences between the parts that constitute a whole and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insight
into the dynamic structure of the whole present in each part. The part can never, however, be finally isolated from the web of relationships that discloses the interconnections with the whole, and any attempt to do so results in ambiguity.
Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet the order in complementary relationships between difference and sameness in any physical event is never external to that event; the relations are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic character of wholeness is not surprising. The relationship between the part, as a quantum event apparent in observation or measurement, and the indivisible whole, revealed in but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.
If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of systems, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in a self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.
Even so, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.
While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on it, let us be quite clear on one point: there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation are obviously free to do so. However, there is another conclusion to be drawn that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. Clearly, this radical separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.
All the same, the philosophical implications might themselves provide a motive for considering how our proposed new understanding of the relationship between parts and wholes in physical reality might affect the manner in which we deal with some major real-world problems. This will serve to demonstrate why a timely resolution of these problems is critically dependent on a renewed dialogue between members of the two cultures, the humanists and social scientists and the scientists and engineers. We will also argue that the resolution of these problems could be dependent on a renewed dialogue between science and religion.
As many scholars have demonstrated, the classical paradigm in physics has greatly influenced and conditioned our understanding and management of human systems in economic and political realities. Virtually all models of these realities treat human systems as if they consist of atomized units or parts that interact with one another in terms of laws or forces external to or between the parts. These systems are also viewed as hermetic or closed and, thus, as discrete, separate and distinct.
Consider, for example, how the classical paradigm influenced our thinking about economic reality. In the eighteenth and nineteenth centuries, the founders of classical economics, figures like Adam Smith, David Ricardo, and Thomas Malthus, conceived of the economy as a closed system in which interactions between parts (consumers, producers, distributors, etc.) are controlled by forces external to the parts (supply and demand). The central legitimating principle of free-market economics, formulated by Adam Smith, is that lawful or law-like forces external to the individual units function as an invisible hand. This invisible hand, said Smith, frees the units to pursue their best interests, moves the economy forward, and generally regulates the behaviour of parts in the best interests of the whole. (The resemblance between the invisible hand and Newton's universal law of gravity, and between the relations of parts and wholes in classical economics and classical physics, should be transparent.)
After roughly 1830, economists shifted the focus to the properties of the invisible hand in the interactions between parts, using mathematical models. Within these models, the behaviour of parts in the economy is assumed to be analogous to the lawful interactions between parts in classical mechanics. It is, therefore, not surprising that differential calculus was employed to represent economic change in a virtual world in terms of small or marginal shifts in consumption or production. The assumption was that the mathematical description of marginal shifts in the complex web of exchanges between parts (atomized units and quantities) and whole (closed economy) could reveal the lawful, or law-like, machinations of the closed economic system.
These models later became one of the foundations for microeconomics. Microeconomics seeks to describe interactions between parts in exact quantifiable measures, such as marginal cost, marginal revenue, marginal utility, and growth of total revenue as indexed against individual units of output. In analogy with classical mechanics, the quantities are viewed as initial conditions that can serve to explain subsequent interactions between parts in the closed system in something like deterministic terms. The combination of classical macro-analysis with micro-analysis resulted in what Thorstein Veblen in 1900 termed neoclassical economics, the model for understanding economic reality that is widely used today.
Beginning in the 1930s, the challenge became to subsume the understanding of the interactions between parts in closed economic systems within more sophisticated mathematical models using devices like linear programming, game theory, and new statistical techniques. In spite of the growing mathematical sophistication, these models are based on the same assumptions from classical physics featured in previous neoclassical economic theory, with one exception: they also appeal to the assumption that systems exist in equilibrium or in perturbations from equilibrium, and they seek to describe the state of the closed economic system in these terms.
One could argue that the fact that our economic models rest on assumptions from classical mechanics is not a problem by appealing to the two-domain distinction between micro-level and macro-level processes discussed earlier. Since classical mechanics serves us well in our dealings with macro-level phenomena, in situations where the speed of light is so large and the quantum of action so small as to be safely ignored for practical purposes, economic theories based on assumptions from classical mechanics should likewise serve us well in dealing with the macro-level behaviour of economic systems.
The obvious problem is that nature is reluctant to operate in accordance with these assumptions. In the biosphere, the interaction between parts is intimately related to the whole, no collection of parts is isolated from the whole, and the ability of the whole to regulate the relative abundance of atmospheric gases suggests that the whole of the biota displays emergent properties that are more than the sum of its parts. What the current ecological crisis reveals is the abstract character of the virtual world of neoclassical economic theory. The real economy is all human activities associated with the production, distribution, and exchange of tangible goods and commodities and the consumption and use of natural resources, such as arable land and water. Although expanding economic systems in the real economy are obviously embedded in a web of relationships with the entire biosphere, our measures of healthy economic systems disguise this fact very nicely. Consider, for example, the description of a healthy economic system written in 1996 by Frederick Hu, head of the competitiveness research team for the World Economic Forum: short of military conquest, economic growth is the only viable means for a country to sustain increases in national living standards . . . An economy is internationally competitive if it performs strongly in three general areas: abundant productive inputs from capital, labour, infrastructure and technology; optimal economic policies such as low taxes, little interference and free trade; and sound market institutions such as the rule of law and protection of property rights.
This prescription for medium-term growth in countries like Russia, Brazil, and China may seem utterly pragmatic and quite sound. However, the virtual economy described is a closed and hermetically sealed system in which the invisible hand of economic forces allegedly results in a healthy, growing economy if impediments to its operation are removed or minimized. It is, of course, often true that such prescriptions can have the desired results in terms of increases in living standards, and Russia, Brazil, and China are seeking to implement them in various ways.
In the real economy, however, these systems are clearly not closed or hermetically sealed: Russia uses carbon-based fuels in production facilities that produce large amounts of carbon dioxide and other gases that contribute to global warming; Brazil is in the process of destroying a rain forest that is critical to species diversity and to the maintenance of the relative abundance of atmospheric gases that regulates the Earth's temperature; and China is seeking to build a first-world economy based on highly polluting old-world industrial plants that burn soft coal. Nor should we forget that the virtual economic system the world now seems to regard as the best example of the benefits that can be derived from the workings of the invisible hand, that of the United States, operates in the real economy as one of the primary contributors to the ecological crisis.
In ‘Consilience,’ Edward O. Wilson makes the case that effective and timely solutions to the problems threatening human survival are critically dependent on something like a global revolution in ethical thought and behaviour. However, his view of the basis for this revolution is quite different from our own. Wilson claimed that since the foundations for moral reasoning evolved in what he termed ‘gene-culture’ evolution, the rules of ethical behaviour are emergent aspects of our genetic inheritance. Based on the assumption that the behaviour of contemporary hunter-gatherers resembles that of our hunter-gatherer forebears in the Palaeolithic era, he drew on accounts of Bushman hunter-gatherers living in the central Kalahari in an effort to demonstrate that ethical behaviour is associated with instincts like bonding, cooperation, and altruism.
Wilson argued that these instincts evolved in our hunter-gatherer ancestors through genetic mutation, and that the ethical behaviour associated with these genetically based instincts provided a survival advantage. He then claimed that since these genes were passed on to subsequent generations and eventually became pervasive in the human genome, the ethical dimension of human nature has a genetic foundation. When we fully understand the ‘innate epigenetic rules of moral reasoning,’ he suggests, the rules will probably turn out to be an ensemble of many algorithms whose interlocking activities guide the mind across a landscape of nuanced moods and choices.
Any reasonable attempt to lay a firm foundation beneath the quagmire of human ethics, in all of its myriad and often contradictory formulations, is admirable, and Wilson's attempt is more admirable than most. In our view, however, there is little or no prospect that it will prove successful, for any number of reasons. While we will probably discover some linkages between genes and behaviour, the evolutionary path of human ethical behaviour and the survival advantages it may have conferred are far too complex, and far too entangled with culture, to be reduced to any given set of ‘epigenetic rules of moral reasoning.’
Moral codes may also derive in part from instincts that confer a survival advantage, but when we examine these codes, they are clearly primarily cultural products. This explains why ethical systems are constructed in a bewildering variety of ways in different cultural contexts and why they often sanction or legitimate quite different thoughts and behaviours. Let us not forget that rules of ethical behaviour are quite malleable and have been used to legitimate human activities such as slavery, colonial conquest, genocide, and terrorism. As Cardinal Newman cryptically put it, ‘Oh how we hate one another for the love of God.’
According to Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative,’ but both, in his view, are merely human constructs, and therefore there is no basis for dialogue between the world views of science and religion. ‘Science, for its part, will test relentlessly every assumption about the human condition and in time uncover the bedrock of moral and religious sentiments. The result of the competition between the two world views, I believe, will be the secularization of the human epic and of religion itself.’
Wilson obviously has a right to his opinions, and many will agree with him for their own good reasons, but what is most interesting about his thoughtful attempt to posit a more universal basis for human ethics is that it is based on classical assumptions about the character of both physical and biological reality. While Wilson does not argue that human behaviour is genetically determined in the strict sense, he does allege that there is a causal linkage between genes and behaviour that largely conditions this behaviour, and he appears to be a firm believer in the classical assumption that reductionism can uncover the lawful essences that govern the physical aspects of reality, including those associated with the alleged ‘epigenetic rules of moral reasoning.’
Once again, in Wilson's view there is apparently nothing that cannot be reduced to scientific understanding or fully disclosed in scientific terms, and his hope for the future of humanity is that the triumph of scientific thought and method will allow us to achieve the Enlightenment ideal of disclosing the lawful regularities that govern or regulate all aspects of human experience. Hence science will uncover the ‘bedrock of moral and religious sentiment,’ and the entire human epic will be mapped in the secular space of scientific formalism. The intent here is not to denigrate Wilson's attempt to posit a more universal basis for the human condition, but to demonstrate that any attempt to understand or improve upon human behaviour based on appeals to outmoded classical assumptions is unrealistic. If the human mind did, in fact, evolve in something like deterministic fashion in gene-culture evolution, and if there were, in fact, innate mechanisms in mind that are both lawful and benevolent, Wilson's program for uncovering these mechanisms could have merit. Nevertheless, for all the reasons that have been given, classical determinism cannot explain the human condition, and the Darwinian account of evolution must be modified to accommodate the complementary relationship between cultural and biological principles that governs human development and the human movement toward self-realization and undivided wholeness.
Equally important, the classical assumption that the only privileged or valid knowledge is scientific is one of the primary sources of the stark division between the two cultures of humanists and scientists-engineers. In this view, Wilson is quite correct in assuming that a timely end to the two-culture war and a renewed dialogue between members of these cultures is now critically important to human survival. It is also clear, however, that dreams of reason based on the classical paradigm will only serve to perpetuate the two-culture war. Since these dreams are remnants of an old scientific world view that no longer applies, in theory or in fact, to the actual character of physical reality, appeals to them are more likely to frustrate than to advance the solution of real-world problems.
There is, however, a renewed basis for dialogue between the two cultures, and it is quite different from that described by Wilson. Since classical epistemology has been displaced, or is in the process of being displaced, by the new epistemology of science, the truths of science can no longer be viewed as transcendent and absolute in the classical sense. The universe more closely resembles a giant organism than a giant machine, and in both physics and biology it displays emergent properties that serve to perpetuate the existence of the whole and that cannot be explained in terms of unrestricted determinism, simple causality, first causes, linear movements, and initial conditions. Perhaps the first and most important precondition for renewed dialogue between the two cultures is the awareness, as Einstein put it, that a human being is a ‘part of the whole.’ It is this shared awareness that gives us the freedom to free ourselves of the ‘optical illusion’ of our present conception of self as a ‘part limited in space and time,’ and to widen ‘our circle of compassion to embrace all living creatures and the whole of nature in its beauty.’ One cannot, of course, merely reason oneself into an acceptance of this view; it requires the capacity for what Einstein termed ‘cosmic religious feeling.’ Perhaps those who have this capacity will come to sense their own existence as part of a universal consciousness, and to feel that this awareness makes an essential difference to their existence in the universe.
Those who have this capacity will be able to communicate their enhanced scientific understanding of the relations between part and whole, between the self and the universe, in ordinary language with enormous emotional appeal. The task that lies before the poets of this new reality has been nicely described by Jonas Salk: ‘Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect ‘reality’. By using the processes of Nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide comprehensive guides to living. In this way, Man's imagination and intellect play vital roles in his survival and further evolution.’
It is time, if not past time, for the religious imagination and the religious experience to engage the complementary truths of science and to fill that silence with meaning. This does not mean, however, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require any ontology, and it is in no way diminished by the lack of an ontology. One is free to recognize a basis for dialogue between science and religion for the same reason that one is free to deny that this basis exists: there is nothing in our current scientific world view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conception of the nature of God or Being. The question of belief in some ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
The present time is clearly a time of a major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. This previous paradigm shift was profoundly problematic for the human spirit: it led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movements of our bodies is an illusion. Yet going through the acceptance of such a paradigm was probably necessary for the Western mind.
The overwhelming success of Newtonian physics led most scientists and most philosophers of the Enlightenment to rely on it exclusively. As far as the quest for knowledge about reality was concerned, they regarded all other modes of expressing human experience, such as accounts of numinous experiences, poetry, art, and so on, as irrelevant. This reliance on science as the only way to the truth about the universe is clearly obsolete. Science has to give up the illusion of its self-sufficiency and of the self-sufficiency of human reason. It needs to unite with other modes of knowing, in particular with contemplation, and help each of us move to higher levels of being and toward the Experience of Oneness.
If this is the direction of the emerging world view, then the paradigm shifts we are presently going through will prove to be nourishing to the human spirit and in correspondence with its deepest conscious or unconscious yearning-the yearning to emerge out of Plato's shadows and into the light.

The big bang theory seeks to explain what happened at or soon after the beginning of the universe. Scientists can now model the universe back to 10^-43 seconds after the big bang. For the time before that moment, the classical theory of gravity is no longer adequate. Scientists are searching for a theory that merges gravity (as explained by Einstein's general theory of relativity) and quantum mechanics but have not found one yet. Many scientists hope that string theory, also known as M-theory, will tie together gravity and quantum mechanics and help scientists explore further back in time.
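The 10^-43-second figure quoted above is of the order of the Planck time, the timescale obtained by combining the gravitational constant, the reduced Planck constant, and the speed of light. A minimal sketch of that estimate in Python, with rounded textbook values for the constants (the rounding is mine, not a figure from this article):

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s

# Planck time: the scale below which a quantum theory of gravity is needed.
t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time ~ {t_planck:.1e} s")   # about 5e-44 s, i.e. of order 10^-43 s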
Because scientists cannot look back in time beyond that early epoch, the actual big bang is hidden from them. There is no way at present to detect the origin of the universe. Further, the big bang theory does not explain what existed before the big bang. Perhaps time itself began at the big bang, so that it makes no sense to discuss what happened ‘before’ the big bang.

According to the big bang theory, the universe expanded rapidly in its first microseconds. A single force existed at the beginning of the universe, and as the universe expanded and cooled, this force separated into those we know today: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. A theory called the electroweak theory now provides a unified explanation of electromagnetism and the weak nuclear force. Physicists are now searching for a grand unification theory that would also incorporate the strong nuclear force. String theory seeks to incorporate the force of gravity with the other three forces, providing a theory of everything (TOE).
One widely accepted version of big bang theory includes the idea of inflation. In this model, the universe expanded much more rapidly at first, to about 10^50 times its original size in the first 10^-32 second, then slowed its expansion. The theory was advanced in the 1980s by American cosmologist Alan Guth and elaborated upon by American astronomer Paul Steinhardt, Russian American scientist Andrei Linde, and British astronomer Andreas Albrecht. The inflationary universe theory solves a number of problems of cosmology. For example, it shows that the universe now appears close to the type of flat space described by the laws of Euclid's geometry: we see only a tiny region of the original universe, similar to the way we do not notice the curvature of the earth because we see only a small part of it. The inflationary universe also shows why the universe appears so homogeneous. If the universe we observe was inflated from some small, original region, it is not surprising that it appears uniform.
Once the expansion of the initial inflationary era ended, the universe continued to expand more slowly. The inflationary model predicts that the universe is on the boundary between being open and closed. If the universe is open, it will keep expanding forever. If the universe is closed, the expansion of the universe will eventually stop and the universe will begin contracting until it collapses. Whether the universe is open or closed depends on the density, or concentration of mass, in the universe. If the universe is dense enough, it is closed.
The theory is based on the mathematical equations, known as the field equations, of the general theory of relativity set forth in 1915 by Albert Einstein. In 1922 Russian physicist Alexander Friedmann provided a set of solutions to the field equations. These solutions have served as the framework for much of the current theoretical work on the big bang theory. American astronomer Edwin Hubble provided some of the greatest supporting evidence for the theory with his 1929 discovery that the light of distant galaxies was universally shifted toward the red end of the spectrum. Once ‘tired light’ theories-that light slowly loses energy naturally, becoming more red over time-were dismissed, this shift proved that the galaxies were moving away from each other. Hubble found that galaxies farther away were moving away proportionally faster, showing that the universe is expanding uniformly. However, the universe’s initial state was still unknown.
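Hubble's proportionality is usually written as v = H0 × d, where H0 is the Hubble constant; the WMAP value of about 71 km/s/Mpc quoted later in this article can stand in for it. A small illustrative sketch (the sample distances are arbitrary):

H0 = 71.0    # Hubble constant in km/s per megaparsec (value quoted later in the article)

def recession_velocity(distance_mpc):
    """Hubble's law: recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

for d in (10, 100, 1000):                       # arbitrary sample distances, in megaparsecs
    print(f"{d:5d} Mpc -> {recession_velocity(d):8.0f} km/s")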
In the 1940s Russian-American physicist George Gamow worked out a theory that fit with Friedmann's solutions, in which the universe expanded from a hot, dense state. In 1950 British astronomer Fred Hoyle, in support of his own opposing steady-state theory, referred to Gamow's theory as a mere ‘big bang,’ but the name stuck.
The overall framework of the big bang theory came out of solutions to Einstein’s general relativity field equations and remains unchanged, but various details of the theory are still being modified today. Einstein himself initially believed that the universe was static. When his equations seemed to imply that the universe was expanding or contracting, Einstein added a constant term to cancel out the expansion or contraction of the universe. When the expansion of the universe was later discovered, Einstein stated that introducing this ‘cosmological constant’ had been a mistake.
After Einstein's work of 1917, several scientists, including the Abbé Georges Lemaître in Belgium, Willem de Sitter in Holland, and Alexander Friedmann in Russia, succeeded in finding solutions to Einstein's field equations. The universes described by the different solutions varied. De Sitter's model had no matter in it; this is actually not a bad approximation, since the average density of the universe is extremely low. Lemaître's universe expanded from a ‘primeval atom.’ Friedmann's universe also expanded from a very dense clump of matter, but did not involve the cosmological constant. These models explained how the universe behaved shortly after its creation, but there was still no satisfactory explanation for the beginning of the universe.
In the 1940s George Gamow was joined by his students Ralph Alpher and Robert Herman in working out details of Friedmann's solutions to Einstein's theory. They expanded on Gamow's idea that the universe expanded from a primordial state of matter called ylem, consisting of protons, neutrons, and electrons in a sea of radiation. They theorized that the universe was very hot at the time of the big bang (the point at which the universe explosively expanded from its primordial state), since elements heavier than hydrogen can be formed only at high temperatures. Alpher and Herman predicted that radiation from the big bang should still exist. Cosmic background radiation roughly corresponding to the temperature predicted by Gamow's team was detected in the 1960s, further supporting the big bang theory, though by then the work of Alpher, Herman, and Gamow had been largely forgotten.
The universe cooled as it expanded. After about one second, protons formed. In the following few minutes-often referred to as the ‘first three minutes’-combinations of protons and neutrons formed the isotope of hydrogen known as deuterium and some of the other light elements, principally helium, and some lithium, beryllium, and boron. The study of the distribution of deuterium, helium, and the other light elements is now a major field of research. The uniformity of the helium abundance around the universe supports the big bang theory and the abundance of deuterium can be used to estimate the density of matter in the universe.
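The helium abundance mentioned here can be estimated with a standard back-of-the-envelope argument: if roughly one neutron survives for every seven protons when nucleosynthesis begins (a textbook figure assumed here, not one given in this article), and essentially all of those neutrons end up bound in helium-4, the helium mass fraction follows directly. A sketch:

# Assumed neutron-to-proton ratio at the onset of nucleosynthesis (textbook value).
n_to_p = 1.0 / 7.0

def helium_mass_fraction(ratio):
    """Mass fraction Y = 2n/(n + p): each helium-4 nucleus binds two neutrons and two
    protons, so all available neutrons pair off with an equal number of protons."""
    n, p = ratio, 1.0
    return 2 * n / (n + p)

print(f"Predicted helium mass fraction Y ~ {helium_mass_fraction(n_to_p):.2f}")   # about 0.25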
From about 380,000 to about one million years after the big bang, the universe cooled to about 3000°C (about 5000°F) and protons and electrons combined to form hydrogen atoms. Hydrogen atoms can only absorb and emit specific colours, or wavelengths, of light. The formation of atoms allowed many other wavelengths of light, wavelengths that had been interacting with the free electrons, to travel much farther than before. This change set free radiation that we can detect today. After billions of years of cooling, this cosmic background radiation is at about 3 K (-270°C/-454°F). The cosmic background radiation was first detected and identified in 1965 by American astrophysicists Arno Penzias and Robert Wilson.
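That a radiation field cooled to about 3 K should be observed today as microwaves follows from Wien's displacement law, which relates the temperature of a blackbody to the wavelength at which it radiates most strongly. A minimal check, treating the background as an ideal blackbody (the Wien constant is the standard value):

WIEN_B = 2.898e-3     # Wien displacement constant, metre-kelvins
T_cmb = 3.0           # approximate temperature of the cosmic background radiation, kelvins

peak_wavelength = WIEN_B / T_cmb
print(f"Peak wavelength ~ {peak_wavelength * 1000:.1f} mm")   # about 1 mm: the microwave region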
The Cosmic Background Explorer (COBE) spacecraft, a project of the National Aeronautics and Space Administration (NASA), mapped the cosmic background radiation between 1989 and 1993. It verified that the distribution of intensity of the background radiation precisely matched that of matter that emits radiation because of its temperature, as predicted by the big bang theory. It also showed that the cosmic background radiation is not perfectly uniform but varies slightly. These variations are thought to be the seeds from which galaxies and other structures in the universe grew.
Evidence suggests that the matter scientists can detect in the universe may be only a small fraction of all the matter that exists. For example, observations of the speeds at which individual galaxies move within clusters of galaxies show that a great deal of unseen matter must exist to exert sufficient gravitational force to keep the clusters from flying apart. Cosmologists now think that much of the universe is dark matter-matter that has gravity but does not give off radiation that we can see or otherwise detect. One kind of dark matter theorized by scientists is cold dark matter, with slowly moving (cold) massive particles. No such particles have yet been detected, though astronomers have made up fanciful names for them, such as Weakly Interacting Massive Particles (WIMPs). Other cold dark matter could be non-radiating stars or planets, which are known as MACHOs (Massive Compact Halo Objects).
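The inference from galaxy speeds to unseen mass is typically made with a virial-type estimate, M ~ v^2 R / G: the faster the galaxies move at a given cluster radius, the more mass is needed to hold the cluster together. A rough sketch, with cluster numbers chosen only for illustration (they are assumptions of the right general size, not measurements cited in this article):

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22           # metres in one megaparsec
SOLAR_MASS = 1.989e30    # kilograms

velocity = 1.0e6         # assumed typical galaxy speed within the cluster, m/s (1000 km/s)
radius = 1.5 * MPC       # assumed characteristic cluster radius, m

mass = velocity**2 * radius / G          # virial-type estimate of the binding mass
print(f"Implied cluster mass ~ {mass / SOLAR_MASS:.1e} solar masses")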
An alternative dark-matter model involves hot dark matter, where hot implies that the particles are moving very fast. Neutrinos, fundamental particles that travel at nearly the speed of light, are the prime example of hot dark matter. However, scientists think that the mass of a neutrino is so low that neutrinos can account for only a small portion of dark matter. If the inflationary version of big bang theory is correct, then the amount of dark matter and of whatever else might exist is just enough to bring the universe to the boundary between open and closed.
Scientists develop theoretical models to show how the universe’s structures, such as clusters of galaxies, have formed. Their models invoke hot dark matter, cold dark matter, or a mixture of the two. This unseen matter would have provided the gravitational force needed to bring large structures such as clusters of galaxies together. The theories that include dark matter match the observations, although there is no consensus on the type or types of dark matter that must be included. Supercomputers are important for making such models.
Astronomers continue to make new observations that are also interpreted within the framework of the big bang theory. No major problems with the big bang theory have been found, but scientists constantly adjust the theory to match the observed universe. In particular, a ‘standard model’ of the big bang has been established by results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001. The probe studied the anisotropies, or ripples, in the temperature of the cosmic background radiation at a higher resolution than COBE could achieve. These ripples suggest that some regions of the young universe were hotter or cooler, by a factor of about 1/1000, than adjacent regions. WMAP's observations suggest that the rate of expansion of the universe, called Hubble's constant, is about 71 km/s/Mpc (kilometres per second per million parsecs, where a parsec is about 3.26 light-years). In other words, the distance between any two objects in space that are separated by a million parsecs increases by about 71 km every second, in addition to any other motion they may have relative to one another. In combination with previously existing observations, this rate of expansion tells cosmologists that the universe is ‘flat,’ though flatness here does not refer to the actual shape of the universe but rather means that the geometric laws that apply to the universe match those of a flat plane.
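The link between this measured expansion rate and ‘flatness’ runs through the critical density, rho_c = 3H0^2/(8*pi*G): a flat universe must have exactly this average density. Using the 71 km/s/Mpc figure quoted above, the arithmetic is only a unit conversion (a sketch; the hydrogen-atom mass is a standard rounded value):

import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22              # metres in one megaparsec
H0 = 71.0 * 1000 / MPC      # Hubble constant converted from km/s/Mpc to 1/s

rho_critical = 3 * H0**2 / (8 * math.pi * G)    # critical density, kg/m^3
m_hydrogen = 1.67e-27                           # mass of a hydrogen atom, kg

print(f"Critical density ~ {rho_critical:.1e} kg/m^3")
print(f"                 ~ {rho_critical / m_hydrogen:.1f} hydrogen atoms per cubic metre")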
To be flat, the universe must contain a certain amount of matter and energy, known as the critical density. The distribution of sizes of the ripples detected by WMAP shows that ordinary matter-like that making up objects and living things on Earth-accounts for only 4.4 percent of the critical density. Dark matter makes up an additional 23 percent. Astoundingly, the remaining 73 percent of the universe is composed of something else-a substance so mysterious that nobody knows much about it. Called ‘dark energy,’ this substance provides the anti-gravity-like negative pressure that causes the universe's expansion to accelerate rather than slow. This ‘accelerating universe’ was detected independently by two competing groups of astronomers in the last years of the 20th century. The ideas of an accelerating universe and the existence of dark energy have caused astronomers to modify previous ideas of the big bang universe substantially.
WMAP's results also show that the cosmic background radiation was set free about 380,000 years after the big bang, later than was previously thought, and that the first stars formed only about 200 million years after the big bang, earlier than anticipated. Further refinements to the big bang theory are expected from WMAP, which continues to collect data. An even more precise mission to study the beginnings of the universe, the European Space Agency's Planck spacecraft, is scheduled to be launched in 2007.
In the 1950s cosmologists (scientists who study the evolution of the universe) were considering two theories for the origin of the universe. The first, the currently accepted big bang theory, held that the universe was created from one enormous explosion. The second, known as the steady state theory, suggested that the universe had always existed. Russian-American theoretical physicist George Gamow advanced the big bang theory and its underpinnings in a 1956 Scientific American article. Gamow's estimate of a five-billion-year-old universe is no longer considered accurate; the universe is now thought to be much older.
Most cosmologists believe that the universe began as a dense kernel of matter and radiant energy that started to expand about five billion years ago and later coalesced into galaxies.
Cosmology is the study of the general nature of the universe in space and in time-what it is now, what it was in the past, and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as embodied in Albert Einstein's general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.
The basic cosmological notion of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of the last century two inquisitive mathematical minds-a Russian named Nikolai Lobachevski and a Hungarian named János Bolyai-discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid's axiom about parallel lines: Namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.
To illustrate the difference between Euclidean geometry and their non-Euclidean system, it is simplest to consider just two dimensions-that is, the geometry of surfaces. In our schoolbooks this is known as ‘plane geometry,’ because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be ‘negative,’ which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we will define a ‘straight line,’ equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a ‘geodesic.’
Considering a surface curved like a saddle, we find that, given a ‘straight’ line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel.
As a consequence of the overthrow of Euclid's axiom on parallel lines, many of his theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. However, on a saddle surface both the circumference and the area of a circle increase at faster rates than on a flat surface with increasing diameters.
After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature-that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles), there are no parallel lines at all in this geometry. Again the sum of the three angles of a triangle is not 180 degrees: in this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
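The contrast drawn in the last two paragraphs can be made quantitative. On a sphere of radius R the circumference of a circle of geodesic radius r is 2πR·sin(r/R), which grows more slowly than the flat-plane value 2πr, while on a saddle-like surface of constant negative curvature it is 2πR·sinh(r/R), which grows faster. A small comparison sketch (R and the sample radii are arbitrary choices):

import math

R = 1.0     # curvature radius, arbitrary units

def circumference_flat(r):
    return 2 * math.pi * r                        # Euclidean plane

def circumference_sphere(r, R=R):
    return 2 * math.pi * R * math.sin(r / R)      # positive curvature: grows more slowly

def circumference_saddle(r, R=R):
    return 2 * math.pi * R * math.sinh(r / R)     # negative curvature: grows faster

for r in (0.1, 0.5, 1.0):
    print(f"r={r:3.1f}  flat={circumference_flat(r):6.3f}  "
          f"sphere={circumference_sphere(r):6.3f}  saddle={circumference_saddle(r):6.3f}")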
Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe ‘flat,’ as Euclid assumed, or is it curved negatively (per Lobachevski and Bolyai) or curved positively (Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly as three-dimensional beings living in three-dimensional space we should be able, by studying geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of this century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he continued to apply Riemann's formulas to test his idea.
Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a ‘space-time continuum,’ by means of the speed of light as a link between time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary number √-1, or i, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
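The device described here can be checked directly: writing the time coordinate as x4 = ict turns the interval x^2 + y^2 + z^2 - (ct)^2 into a formally Euclidean sum of four squares. A toy verification with complex arithmetic (the event coordinates are arbitrary sample values):

c = 2.998e8                     # speed of light, m/s

# An arbitrary sample event: position (x, y, z) in metres, time t in seconds.
x, y, z, t = 3.0, 4.0, 0.0, 1.0e-8

interval = x**2 + y**2 + z**2 - (c * t)**2      # the space-time interval
x4 = 1j * c * t                                 # imaginary fourth coordinate, x4 = ict
euclidean_sum = x**2 + y**2 + z**2 + x4**2      # formally a Euclidean sum of four squares

print(interval, euclidean_sum.real)             # the two values agree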
In his special theory of relativity Einstein made the geometry of the time-space continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that ‘the sun produces a field of force that impels the earth to deviate from straight-line motion and to move in a circle around the sun,’ Einstein substituted a statement to the effect that ‘the presence of the sun causes a curvature of the space-time continuum in its neighbourhood.’
The motion of an object in the space-time continuum can be represented by a curve called the object's ‘world line.’ . . . Einstein declared, in effect: ‘The world line of the earth is a geodesic in the curved four-dimensional space around the sun.’ In other words, the . . . [earth's ‘world line’] . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October . . . Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by the discovery of perturbations in the motion of Mercury at its closest approach to the sun and of the deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature in the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably ‘smooth out’ their matter to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.
Nevertheless, if the space of the universe is curved, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? Since we cannot consider space alone, how is this space curvature related to time?
Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). However, he found to his surprise that there was no solution of the equations that would permit a static cosmos. To repair the situation, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun!) and to gain in strength with increasing distance between the interacting objects (as no other forces ever do in physics).
Einstein's new force, called ‘cosmic repulsion,’ allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as ‘Einstein's spherical universe,’ gave the space of the cosmos a positive curvature. Like a sphere, this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.
The other static solution based on the mysterious repulsion forces was discovered by the Dutch mathematician Willem de Sitter. In his model of the universe both space and time were curved. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time.
Unhappily, astronomical observations contradicted both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.
In the year 1922 a major turning point came in the cosmological problem. A Russian mathematician, Alexander A. Friedman (from whom the author of this article learned his relativity), discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstances in question. Friedman showed that two nonstatic models were possible. One pictured the universe as expanding with time; the other, contracting.
Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: ‘The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations ad hoc.’ Einstein remarked to me many years ago that the cosmic repulsion idea was the biggest blunder he had made in his entire life.
Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of actual physical expansion through his telescope. He made a compilation of the distances of a number of far galaxies, whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the red-shift as the Doppler effect-the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' red-shift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart.
Thus, Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaître. He proposed that our universe started from a highly compressed and extremely hot state that he called the ‘primeval atom.’ (Modern physicists would prefer the term ‘primeval nucleus.’) As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we know it today.
Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the speed of flight of the galaxies and their distances from us indicated that the expansion had started about 1.8 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raise the age of the earth's crust to almost five billion years. Clearly a universe 1.8 billion years old could not contain five-billion-year-old rocks. Happily the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raises the age of the universe to five billion years or more.
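The 1.8-billion-year figure is essentially the reciprocal of the Hubble constant as it was then measured, so stretching the distance yardstick stretches the estimated age in the same proportion. A rough sketch of the arithmetic; the 540 km/s/Mpc value below is an assumption chosen only because it reproduces the pre-correction figure, not a number taken from this article:

MPC_KM = 3.086e19           # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_age_years(H0_km_s_mpc):
    """Expansion age estimated simply as 1/H0 (uniform-expansion approximation)."""
    H0 = H0_km_s_mpc / MPC_KM       # convert to 1/s
    return 1.0 / H0 / SECONDS_PER_YEAR

print(f"H0 ~ 540 km/s/Mpc -> age ~ {hubble_age_years(540) / 1e9:.1f} billion years")
# Doubling all galaxy distances halves H0 and therefore doubles this age estimate.
print(f"H0 ~ 270 km/s/Mpc -> age ~ {hubble_age_years(270) / 1e9:.1f} billion years")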
Friedman's solution of Einstein's cosmological equation, as mentioned, permits two kinds of universes. We can call one the ‘pulsating’ universe. This model says that when the universe has reached a certain maximum permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million million times denser than water; that it will then begin to expand again-and so on through the cycle ad infinitum. The other model is a ‘hyperbolic’ one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.
The question whether our universe is ‘pulsating’ or ‘hyperbolic’ should be decidable from the present rate of its expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second-the ‘escape velocity’-the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again, etc., etc.) On the other hand, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear into space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies (the rocket and the earth) we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
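The seven-miles-per-second figure in the rocket analogy is the familiar escape velocity v = sqrt(2GM/R) for the earth. A quick check, using standard rounded values for the earth's mass and radius:

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24    # mass of the earth, kg
R_EARTH = 6.371e6     # mean radius of the earth, m

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)     # escape velocity, m/s
print(f"Escape velocity ~ {v_escape / 1000:.1f} km/s "
      f"(~ {v_escape / 1609.34:.1f} miles per second)")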
Thus we may conclude that our universe corresponds to the ‘hyperbolic’ model, so that its present expansion will never stop. We must make one reservation, however. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space. It could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.
Is the universe finite or infinite? This resolves itself into the question: is the curvature of space positive or negative-closed like that of a sphere, or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius. We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface, and at a slower rate on a positively curved surface. Similarly the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in the radius. In negatively curved space the volume would increase faster than this; in positively curved space, slower. Thus if we look into space and find that the volume of successively larger spheres, as measured by a count of the galaxies within them, increases faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature, and is therefore open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature-closed and finite.
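The galaxy-counting test described here compares how the volume enclosed within a given radius grows. In flat space the volume of a sphere grows as the cube of its radius; in a positively curved space of curvature radius R it is 2πR^3(u - sin u · cos u) with u = r/R, which grows more slowly; in a negatively curved space it is 2πR^3(sinh u · cosh u - u), which grows faster. A comparison sketch (R and the sample radii are arbitrary):

import math

R = 1.0     # curvature radius, arbitrary units

def volume_flat(r):
    return 4.0 / 3.0 * math.pi * r**3                        # Euclidean space

def volume_closed(r, R=R):                                   # positive curvature
    u = r / R
    return 2 * math.pi * R**3 * (u - math.sin(u) * math.cos(u))

def volume_open(r, R=R):                                     # negative curvature
    u = r / R
    return 2 * math.pi * R**3 * (math.sinh(u) * math.cosh(u) - u)

for r in (0.2, 0.6, 1.0):
    print(f"r={r:3.1f}  flat={volume_flat(r):7.3f}  "
          f"closed={volume_closed(r):7.3f}  open={volume_open(r):7.3f}")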
Following this idea, Hubble undertook to study the increase in number of galaxies with distance. He estimated the distances of the remote galaxies by their relative faintness: galaxies vary considerably in intrinsic brightness, but over a very large number of galaxies these variations are expected to average out. Hubble's calculations produced the conclusion that the universe is a closed system-a small universe only a few billion light-years in radius!
We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. Still, there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past-500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale with a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.
When a correction is made for the assumed decline in brightness with age, the more distant galaxies are spread out to farther distances than Hubble assumed. In fact, the calculations of volume are changed so drastically that we may have to reverse the conclusion about the curvature of space. We are not sure, because we do not yet know enough about the evolution of galaxies. However, if we find that galaxies wane in intrinsic brightness by only a few per cent in a billion years, we will have to conclude that space is curved negatively and the universe is infinite.
There is, moreover, another line of reasoning which supports the side of infinity. Our universe seems to be hyperbolic and ever-expanding. Mathematical solutions of the fundamental cosmological equations show that such a universe is open and infinite.
We have reviewed the questions that dominated the thinking of cosmologists during the first half of this century: the conception of a four-dimensional space-time continuum, of curved space, of an expanding universe and of a cosmos that is either finite or infinite. Now we must consider the major present issue in cosmology: is the universe in truth evolving, or is it in a steady state of equilibrium that has always existed and will go on through eternity? Most cosmologists take the evolutionary view. However, in 1951 a group at the University of Cambridge, whose chief representative has been Fred Hoyle, advanced the steady-state idea. Essentially their theory is that the universe is infinite in space and time, that it has neither a beginning nor an end, that the density of its matter remains constant, that new matter is steadily being created in space at a rate that exactly compensates for the thinning of matter by expansion, that as a consequence new galaxies are continually being born, and that the galaxies of the universe therefore range in age from mere youngsters to veterans of 5, 10, 20 and more billions of years. In my opinion this theory must be considered very questionable because of the simple fact (apart from other reasons) that the galaxies in our neighbourhood all seem to be of the same age as our own Milky Way. Still, the issue is many-sided and fundamental, and can be settled only by extended study of the universe as far as we can observe it. Here I shall attempt to sum up the evolutionary theory.
We assume that the universe started from a very dense state of matter. In the early stages of its expansion, radiant energy was dominant over the mass of matter. We can measure energy and matter on a common scale by means of the well-known equation E = mc^2, which says that the energy equivalent of matter is the mass of the matter multiplied by the square of the velocity of light. Energy can be translated into mass, conversely, by dividing the energy quantity by c^2. Thus, we can speak of the ‘mass density’ of energy. Now at the beginning the mass density of the radiant energy was incomparably greater than the density of the matter in the universe. However, in an expanding system the density of radiant energy decreases faster than does the density of matter. The former thins out as the fourth power of the distance of expansion: as the radius of the system doubles, the density of radiant energy drops to one sixteenth. The density of matter declines as the third power; a doubling of the radius means an eightfold increase in volume, or an eightfold decrease in density.
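The two dilution laws just stated-radiation thinning as the fourth power of the expansion factor, matter as the third-can be written out directly, and their crossover is the crucial birthday discussed in the next paragraph. A minimal sketch; the starting densities are arbitrary placeholders chosen only so that radiation dominates at first:

# Arbitrary illustrative starting densities (radiation-dominated by construction).
rho_radiation_0 = 1000.0    # mass density of radiant energy at the reference epoch
rho_matter_0 = 1.0          # mass density of ordinary matter at the same epoch

def densities(expansion_factor):
    """Densities after the universe has expanded linearly by the given factor."""
    rho_radiation = rho_radiation_0 / expansion_factor**4    # thins as the fourth power
    rho_matter = rho_matter_0 / expansion_factor**3          # thins as the third power
    return rho_radiation, rho_matter

for factor in (1, 2, 10, 1000, 2000):
    r, m = densities(factor)
    leader = "radiation" if r > m else "matter"
    print(f"expansion x{factor:<5d} radiation={r:10.3e}  matter={m:10.3e}  -> {leader} dominates")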
Assuming that the universe at the beginning was under absolute rule by radiant energy, we can calculate that the temperature of the universe was 250 million degrees when it was one hour old, dropped to 6,000 degrees (the present temperature of our sun's surface) when it was 200,000 years old and had fallen to about 100 degrees below the freezing point of water when the universe reached its 250-millionth birthday.
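These temperatures follow the radiation-era rule of thumb that temperature falls off as the inverse square root of the age. Anchoring that rule to the figure given here-about 250 million degrees at an age of one hour-lets one check the other two numbers (a sketch resting on that single scaling assumption):

import math

SECONDS_PER_YEAR = 3.156e7

T_ANCHOR = 2.5e8      # about 250 million degrees (absolute scale), the figure given in the text
t_ANCHOR = 3600.0     # at an age of one hour, in seconds

def temperature(age_seconds):
    """Radiation-era scaling: T proportional to 1/sqrt(t), anchored to the one-hour figure."""
    return T_ANCHOR * math.sqrt(t_ANCHOR / age_seconds)

for label, age in (("1 hour", 3600.0),
                   ("200,000 years", 2.0e5 * SECONDS_PER_YEAR),
                   ("250 million years", 2.5e8 * SECONDS_PER_YEAR)):
    print(f"{label:>17}: ~ {temperature(age):12.0f} degrees absolute")
# The last figure, roughly 170 degrees absolute, is about 100 degrees below
# the freezing point of water (273 degrees absolute), as stated in the text.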
This particular birthday was a crucial one in the life of the universe. It was the point at which the density of ordinary matter became greater than the mass density of radiant energy, because of the more rapid fall of the latter. The switch from the reign of radiation to the reign of matter profoundly changed matter's behaviour. During the eons of its subjugation to the will of radiant energy (i.e., light), it must have been spread uniformly through space in the form of thin gas. Nevertheless, as soon as matter became gravitationally more important than the radiant energy, it began to acquire a more interesting character. James Jeans, in his classic studies of the physics of such a situation, proved half a century ago that a gravitating gas filling a very large volume is bound to break up into individual ‘gas balls,’ the size of which is determined by the density and the temperature of the gas. Thus in the year 250,000,000 A.B.E. (after the beginning of expansion), when matter was freed from the dictatorship of radiant energy, the gas broke up into giant gas clouds, slowly drifting apart as the universe continued to expand. Applying Jeans’ mathematical formula for the process to the gas filling the universe at that time, I have found that these primordial balls of gas would have had just about the mass that the galaxies of stars possess today. They were then only ‘proto galaxies’-cold, dark and chaotic. Nonetheless, their gas soon condensed into stars and formed the galaxies as we see them now.
A central question in this picture of the evolutionary universe is the problem of accounting for the formation of the varied kinds of matter composing it, i.e., the chemical elements . . . My belief is that at the start, matter was composed simply of protons, neutrons and electrons. After five minutes the universe must have cooled enough to permit the aggregation of protons and neutrons into larger units, from deuterons (one neutron and one proton) up to the heaviest elements. This process must have ended after about 30 minutes, for by that time the temperature of the expanding universe must have dropped below the threshold of thermonuclear reactions among light elements, and the neutrons must have been used up in element-building or been converted to protons.
To many a reader the statement that the present chemical constitution of our universe was decided in half an hour five billion years ago will sound nonsensical. Yet consider a spot of ground on the atomic proving ground in Nevada where an atomic bomb was exploded three years ago. Within one microsecond the nuclear reactions generated by the bomb produced a variety of fission products. Today, 100 million-million microseconds later, the site is still ‘hot’ with the surviving fission products. The ratio of one microsecond to three years is the same as the ratio of half an hour to five billion years! If we can accept a time ratio of this order in the one case, why not in the other?
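The arithmetic of that comparison is simple to check. A quick sketch:

MICROSECOND = 1e-6   # seconds
YEAR = 3.156e7       # seconds in one year

ratio_bomb_site = MICROSECOND / (3 * YEAR)   # one microsecond to three years
ratio_universe  = (30 * 60) / (5e9 * YEAR)   # half an hour to five billion years

print(ratio_bomb_site)  # ~1.1e-14
print(ratio_universe)   # ~1.1e-14, the same order of magnitude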
The late Enrico Fermi and Anthony L. Turkevich at the Institute for Nuclear Studies of the University of Chicago undertook a detailed study of thermonuclear reactions such as must have taken place during the first half hour of the universe's expansion. They concluded that the reactions would have produced about equal amounts of hydrogen and helium, making up 99 per cent of the total material, and about 1 per cent of deuterium. We know that hydrogen and helium do in fact make up about 99 per cent of the matter of the universe. This leaves us with the problem of building the heavier elements. Some of these may have been built by the capture of neutrons. However, since the absence of any stable nucleus of atomic weight five makes it improbable that the heavier elements could have been produced in the first half hour in the abundances now observed, I would agree that the lion's share of the heavy elements might have been formed later in the hot interiors of stars.
All the theories-of the origin, age, extent, composition and nature of the universe-are becoming ever more subject to test by new instruments and new techniques . . . But we must not forget that the estimate of distances of the galaxies is still founded on the debatable assumption that the brightness of galaxies does not change with time. If galaxies are constantly diminishing in brightness as they age, the calculations cannot be depended upon. Thus the question whether evolution is or is not taking place in the galaxies is of crucial importance at the present stage of our outlook on the universe.
After presenting his general theory of relativity in 1915, physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.
Physicists had known since the early 19th century that light is propagated as a transverse wave (a wave in which the vibrations move in a direction perpendicular to the direction of the advancing wave front). They assumed, however, that the wave required some material medium for its transmission, so they postulated an extremely diffuse substance, called ether, as the unobservable medium. Maxwell's theory made such an assumption unnecessary, but the ether concept was not abandoned immediately, because it fit in with the Newtonian concept of an absolute space-time frame for the universe. A famous experiment conducted by the American physicist Albert Abraham Michelson and the American chemist Edward Williams Morley in the late 19th century served to dispel the ether concept and was important in the development of the theory of relativity. This work led to the realization that the speed of electromagnetic radiation in a vacuum is an invariant.
At the beginning of the 20th century, however, physicists found that the wave theory did not account for all the properties of radiation. In 1900 the German physicist Max Planck demonstrated that the emission and absorption of radiation occur in finite units of energy, known as quanta. In 1905, Albert Einstein was able to explain some puzzling experimental results on the external photoelectric effect by postulating that electromagnetic radiation can behave like a particle.
Other phenomena, which occur in the interaction between radiation and matter, can also be explained only by the quantum theory. Thus, modern physicists were forced to recognize that electromagnetic radiation can sometimes behave like a particle, and sometimes behave like a wave. The parallel concept-that matter also exhibits the same duality of having particle-like and wavelike characteristics-was developed in 1923 by the French physicist Louis Victor, Prince de Broglie.
Planck’s constant is a fundamental physical constant, symbol h. It was first discovered (1900) by the German physicist Max Planck. Until that year, light in all forms had been thought to consist of waves. Planck noticed certain deviations from the wave theory of light on the part of radiations emitted by so-called ‘black bodies’, or perfect absorbers and emitters of radiation. He came to the conclusion that these radiations were emitted in discrete units of energy, called quanta. This conclusion was the first enunciation of the quantum theory. According to Planck, the energy of a quantum of light is equal to the frequency of the light multiplied by a constant. His original theory has since had abundant experimental verification, and the growth of the quantum theory has brought about a fundamental change in the physicist's concept of light and matter, both of which are now thought to combine the properties of waves and particles. Thus, Planck's constant has become as important to the investigation of particles of matter as to quanta of light, now called photons. The first successful measurement (1916) of Planck's constant was made by the American physicist Robert Millikan. The present accepted value of the constant is
h = 6.626 × 10⁻³⁴ joule-second in the metre-kilogram-second system.
A photon is a particle of light energy, or energy that is generated by moving electric charges. Energy generated by moving charges is called electromagnetic radiation. Visible light is one kind of electromagnetic radiation. Other kinds of radiation include radio waves, infrared waves, and X-rays. All such radiation sometimes behaves like a wave and sometimes behaves like a particle. Scientists use the concept of a photon to describe the effects of radiation when it behaves like a particle.
Most photons are invisible to humans. Humans only see photons with energy levels that fall within a certain range. We describe these visible photons as visible light. Invisible photons include radio and television signals, photons that heat food in microwave ovens, the ultraviolet light that causes sunburn, and the X-rays doctors use to view a person’s bones.
The photon is an elementary particle, or a particle that cannot be split into anything smaller. It carries the electromagnetic force, one of the four fundamental forces of nature, between particles. The electromagnetic force occurs between charged particles or between magnetic materials and charged particles. Electrically charged particles attract or repel each other by exchanging photons back and forth.
Photons are particles with no electrical charge and no mass, but they do have energy and momentum, a property that allows photons to affect other particles when they collide with them. Photons travel at the speed of light, which is about 300,000 km/sec (about 186,000 mi/sec). Only objects without mass can travel at the speed of light. Objects with mass must travel at slower speeds, and nothing can travel at speeds faster than the speed of light.
The energy of a photon is equal to the product of a constant number called Planck’s constant multiplied by the frequency, or number of vibrations per second, of the photon. Scientists write the equation for a photon’s energy as E = hv, where h is Planck’s constant and v is the frequency. Photons with high frequencies, such as X rays, carry more energy than do photons with low frequencies, such as radio waves. Photons that are visible to the human eye have energy levels around one electron volt (eV) and frequencies from 10¹⁴ to 10¹⁵ Hz (hertz or cycles per second). The number 10¹⁴ is a 1 followed by 14 zeros. The frequency of visible photons corresponds to the colour of their light. Photons of violet light have the highest frequencies of visible light, while photons of red light have the lowest frequencies. Gamma rays, the highest-energy photons of all, have energies in the 1 GeV range (10⁹ eV) and frequencies higher than 10¹⁸ Hz. Gamma rays are only produced in special experimental devices called particle accelerators and in outer space.
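As a rough check on the figures in the previous paragraph, a minimal sketch using E = hv (the sample frequencies are simply the two ends of the visible range quoted above):

h = 6.626e-34    # Planck's constant, joule-seconds
eV = 1.602e-19   # joules per electron volt

def photon_energy_eV(frequency_hz):
    # E = hv, converted from joules to electron volts.
    return h * frequency_hz / eV

print(photon_energy_eV(1e14))  # ~0.4 eV
print(photon_energy_eV(1e15))  # ~4 eV: visible photons carry of order one electron volt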
Although momentum is usually considered a property of objects with mass, photons also have momentum. Momentum determines the amount of force, or pressure, that an object exerts when it hits a surface. In classical physics, or physics that deals with the behaviour of objects we encounter in everyday life, momentum is equal to the product of the mass of an object multiplied by its velocity (the combination of its speed and direction). While photons do not have mass, scientists have found that they exert extremely small amounts of pressure when they strike surfaces. Scientists have therefore broadened the definition of momentum to include photons, and the pressure they exert is called light pressure or radiation pressure.
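A minimal sketch of the momentum a single photon carries, assuming the standard relation p = E/c = hv/c (a relation this sketch supplies, not one stated in the text):

h = 6.626e-34   # Planck's constant, joule-seconds
c = 3.0e8       # speed of light, metres per second

def photon_momentum(frequency_hz):
    # A photon's momentum is its energy divided by the speed of light.
    return h * frequency_hz / c

# A visible-light photon (~5 x 10^14 Hz) carries only about 1e-27 kg m/s,
# which is why the pressure light exerts on everyday surfaces is so small.
print(photon_momentum(5e14))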
Philosophers from as far back in history as the Greeks of the 5th century BC have thought about the nature of light. In the 1600's, scientists began to argue over whether light is made of particles or waves. In the 1860's, British physicist James Clerk Maxwell discovered electromagnetic waves, waves of electromagnetic energy that travel at the speed of light. He determined that light is made of these waves, and his theory seemed to settle the wave versus particle issue. His conclusion that light is made of waves is still valid. However, in 1900 German physicist Max Planck renewed the argument that light could also act like particles, and these particles became known as photons. He developed the idea of photons to explain why substances, when heated to higher and higher temperatures, would glow with light of different colours. The wave theory could not explain why the colours changed with temperature changes.
Most scientists did not pay attention to Planck’s theory until 1905, when Albert Einstein used the idea of photons to explain an interaction he had studied called the photoelectric effect. In this interaction, light shining on the surface of a metal causes the metal to emit electrons. Electrons escape the metal by absorbing energy from the light. Einstein showed that light behaves as particles in this situation. If the light behaved like waves, each electron could absorb many light waves and gain ever more energy. He found, however, that a more intense beam of light, with more light waves, did not give each electron more energy. Instead, more light caused the metal to release more electrons, each of which had the same amount of energy. Each electron had to be absorbing a small piece of the light beam, or a particle of light, and all these pieces had the same amount of energy. A beam of light with a higher frequency contained pieces of light with more energy, so when electrons absorbed these particles, they too had more energy. This could only be explained using the photon view of radiation, in which each electron absorbs a single photon and gains enough energy to escape the metal.
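A sketch of the relation Einstein used, in which each electron absorbs one photon and keeps whatever energy is left after paying the metal's work function (the work-function value of 2.3 eV below is an illustrative placeholder, not a figure from the text):

h = 6.626e-34    # Planck's constant, joule-seconds
eV = 1.602e-19   # joules per electron volt

def ejected_electron_energy_eV(frequency_hz, work_function_eV):
    # Kinetic energy of the emitted electron: photon energy hv minus the
    # work function; zero if the photon cannot supply enough energy to escape.
    return max(0.0, h * frequency_hz / eV - work_function_eV)

# A brighter beam of the same frequency means more photons and hence more
# electrons, but each electron still receives the same energy per photon.
print(ejected_electron_energy_eV(1.0e15, 2.3))  # ~1.8 eV per electron
print(ejected_electron_energy_eV(3.0e14, 2.3))  # 0.0: photon energy below the threshold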
Today scientists believe that light behaves both as a wave and as a particle. Scientists detect photons as discrete particles, and photons interact with matter as particles. However, light travels in the form of waves. Some experiments reveal the wave properties of light; for example, in diffraction, light spreads out from a small opening in waves, much like waves of water would behave. Other experiments, such as Einstein’s study of the photoelectric effect, reveal light’s particle properties.
Closely associated with quantum theory is the uncertainty principle, which in quantum mechanics states that it is impossible to specify simultaneously, with precision, both the position and the momentum of a particle, such as an electron. Also called the indeterminacy principle, it further states that a more accurate determination of one quantity results in a less precise measurement of the other, and that the product of the two uncertainties is never less than Planck's constant (named after the German physicist Max Planck) divided by 4π. Of very small magnitude, the uncertainty results from the fundamental nature of the particles being observed. In quantum mechanics, probability calculations therefore replace the exact calculations of classical mechanics.
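A minimal numerical illustration of the bound, using the usual form of the inequality, Δx · Δp ≥ h/4π (the confinement length below is an illustrative assumption):

import math

h = 6.626e-34   # Planck's constant, joule-seconds

def minimum_momentum_uncertainty(position_uncertainty_m):
    # Heisenberg bound: the product of the two uncertainties can never be
    # smaller than h divided by 4*pi.
    return h / (4 * math.pi * position_uncertainty_m)

# Pinning an electron down to roughly an atom's width (~1e-10 m) forces a
# momentum uncertainty of about 5e-25 kg m/s: negligible for everyday objects,
# but very significant for a particle as light as an electron.
print(minimum_momentum_uncertainty(1e-10))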
Formulated in 1927 by the German physicist Werner Heisenberg, the uncertainty principle was of great significance in the development of quantum mechanics. Its philosophic implications of indeterminacy created a strong trend of mysticism among scientists who interpreted the concept as a violation of the fundamental law of cause and effect. Other scientists, including Albert Einstein, believed that the uncertainty involved in observation in no way contradicted the existence of laws governing the behaviour of the particles or the ability of scientists to discover these laws.
As a final summary, science is the systematic study of anything that can be examined, tested, and verified. The word science is derived from the Latin word scire, meaning ‘to know.’ From its beginnings, science has developed into one of the greatest and most influential fields of human endeavour. Today different branches of science investigate almost everything that can be observed or detected, and science as a whole shapes the way we understand the universe, our planet, ourselves, and other living things.
Science develops through objective analysis, instead of through personal belief. Knowledge gained in science accumulates as time goes by, building on work carried out earlier. Some of this knowledge—such as our understanding of numbers-stretches back to the time of ancient civilizations, when scientific thought first began. Other scientific knowledge-such as our understanding of genes that cause cancer or of quarks (the smallest known building block of matter)-dates back less than 50 years. However, in all fields of science, old or new, researchers use the same systematic approach, known as the scientific method, to add to what is known.
During scientific investigations, scientists put together and compare new discoveries and existing knowledge. In most cases, new discoveries extend what is currently accepted, providing further evidence that existing ideas are correct. For example, in 1676 the English physicist Robert Hooke discovered that elastic objects, such as metal springs, stretch in proportion to the force that acts on them. Despite all the advances that have been made in physics since 1676, this simple law still holds true.
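Hooke's observation amounts to the linear relation F = kx. A minimal sketch, with an illustrative spring stiffness (the numbers are placeholders, not measurements from the text):

def spring_extension(force_newtons, stiffness_newtons_per_metre):
    # Hooke's law: extension is proportional to the applied force (x = F / k).
    return force_newtons / stiffness_newtons_per_metre

k = 200.0  # illustrative spring stiffness, newtons per metre
print(spring_extension(10.0, k))  # 0.05 m
print(spring_extension(20.0, k))  # 0.10 m: doubling the force doubles the stretch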
Scientists utilize existing knowledge in new scientific investigations to predict how things will behave. For example, a scientist who knows the exact dimensions of a lens can predict how the lens will focus a beam of light. In the same way, by knowing the exact makeup and properties of two chemicals, a researcher can predict what will happen when they combine. Sometimes scientific predictions go much further by describing objects or events that are not yet known. An outstanding instance occurred in 1869, when the Russian chemist Dmitry Mendeleyev drew up a periodic table of the elements arranged to illustrate patterns of recurring chemical and physical properties. Mendeleyev used this table to predict the existence and describe the properties of several elements unknown in his day, and when the elements were discovered several years later, his predictions proved to be correct.
In science, important advances can also be made when current ideas are shown to be wrong. A classic case of this occurred early in the 20th century, when the German meteorologist and geophysicist Alfred Wegener suggested that the continents were at one time connected, a theory known as continental drift. At the time, most geologists discounted Wegener's ideas, because the Earth's crust seemed to be fixed. Nonetheless, following the discovery of plate tectonics in the 1960s, in which scientists found that the Earth’s crust is made of moving plates, continental drift became an important part of geology.
Through advances like these, scientific knowledge is constantly added to and refined. As a result, science gives us an ever more detailed insight into the way the world around us works.
For a large part of recorded history, science had little bearing on people's everyday lives. Scientific knowledge was gathered for its own sake, and it had few practical applications. However, with the dawn of the Industrial Revolution in the 18th century, this rapidly changed. Today, science has a profound effect on the way we live, largely through technology-the use of scientific knowledge for practical purposes.
Some forms of technology have become so well established that forgetting the great scientific achievements that they represent is easy. The refrigerator, for example, owes its existence to a discovery that liquids take in energy when they evaporate, a phenomenon known as latent heat. The principle of latent heat was first exploited in a practical way in 1876, and the refrigerator has played a major role in maintaining public health ever since. The first automobile, dating from the 1880's, made use of many advances in physics and engineering, including reliable ways of generating high-voltage sparks, while the first computers emerged in the 1940's from simultaneous advances in electronics and mathematics.
Other fields of science also play an important role in the things we use or consume every day. Research in food technology has created new ways of preserving and flavouring what we eat. Research in industrial chemistry has created a vast range of plastics and other synthetic materials, which have thousands of uses in the home and in industry. Synthetic materials are easily formed into complex shapes and can be used to make machine, electrical, and automotive parts, scientific and industrial instruments, decorative objects, containers, and many other items.
Alongside these achievements, science has also brought about technology that helps save human life. The kidney dialysis machine enables many people to survive kidney diseases that would once have proved fatal, and artificial valves allow sufferers of coronary heart disease to return to active living. Biochemical research is responsible for the antibiotics and vaccinations that protect us from infectious diseases, and for a wide range of other drugs used to combat specific health problems. As a result, the majority of people on the planet now live longer and healthier lives than ever before.
However, scientific discoveries can also have a negative impact on human affairs. Over the last hundred years, some of the technological advances that make life easier or more enjoyable have proved to have unwanted and often unexpected long-term effects. Industrial and agricultural chemicals pollute the global environment, even in places as remote as Antarctica, and city air is contaminated by toxic gases from vehicle exhausts. The increasing pace of innovation means that products become rapidly obsolete, adding to a rising tide of waste. Most significantly of all, the burning of fossil fuels such as coal, oil, and natural gas releases into the atmosphere carbon dioxide and other substances known as greenhouse gases. These gases have altered the composition of the entire atmosphere, producing global warming and the prospect of major climate change in years to come.
Science has also been used to develop technology that raises complex ethical questions. This is particularly true in the fields of biology and medicine. Research involving genetic engineering, cloning, and in vitro fertilization gives scientists the unprecedented power to bring about new life, or to devise new forms of living things. At the other extreme, science can also generate technology that is deliberately designed to harm or to kill. The fruits of this research include chemical and biological warfare, and nuclear weapons, by far the most destructive weapons that the world has ever known.
Scientific research can be divided into basic science, also known as pure science, and applied science. In basic science, scientists working primarily at academic institutions pursue research simply to satisfy the thirst for knowledge. In applied science, scientists at industrial corporations conduct research to achieve some kind of practical or profitable gain.
In practice, the division between basic and applied science is not always clear-cut. This is because discoveries that initially seem to have no practical use often develop one as time goes by. For example, superconductivity, the ability to conduct electricity with no resistance, was little more than a laboratory curiosity when Dutch physicist Heike Kamerlingh Onnes discovered it in 1911. Today superconducting electromagnets are used in an ever-increasing number of important applications, from diagnostic medical equipment to powerful particle accelerators.
Scientists study the origin of the solar system by analysing meteorites and collecting data from satellites and space probes. They search for the secrets of life processes by observing the activity of individual molecules in living cells. They observe the patterns of human relationships in the customs of aboriginal tribes. In each of these varied investigations the questions asked and the means employed to find answers are different. All the inquiries, however, share a common approach to problem solving known as the scientific method. Scientists may work alone or they may collaborate with other scientists. In all cases, a scientist’s work must measure up to the standards of the scientific community. Scientists submit their findings to science forums, such as science journals and conferences, in order to subject the findings to the scrutiny of their peers.
Whatever the aim of their work, scientists use the same underlying steps to organize their research: (1) they make detailed observations about objects or processes, either as they occur in nature or as they take place during experiments; (2) they collect and analyse the information observed; and (3) they formulate a hypothesis that explains the behaviour of the phenomena observed.
A scientist begins an investigation by observing an object or an activity. Observation typically involves one or more of the human senses: hearing, sight, smell, taste, and touch. Scientists typically use tools to aid in their observations. For example, a microscope helps view objects too small to be seen with the unaided human eye, while a telescope views objects too far away to be seen by the unaided eye.
Scientists typically apply their observation skills to an experiment. An experiment is any kind of trial that enables scientists to control and change at will the conditions under which events occur. It can be something extremely simple, such as heating a solid to see when it melts, or something highly complex, such as bouncing a radio signal off the surface of a distant planet. Scientists typically repeat experiments, sometimes many times, in order to be sure that the results were not affected by unforeseen factors.
Most experiments involve real objects in the physical world, such as electric circuits, chemical compounds, or living organisms. However, with the rapid progress in electronics, computer simulations can now carry out some experiments instead. If they are carefully constructed, these simulations or models can accurately predict how real objects will behave.
One advantage of a simulation is that it allows experiments to be conducted without any risks. Another is that it can alter the apparent passage of time, speeding up or slowing natural processes. This enables scientists to investigate things that happen very gradually, such as evolution in simple organisms, or ones that happen almost instantaneously, such as collisions or explosions.
During an experiment, scientists typically make measurements and collect results as they work. This information, known as data, can take many forms. Data may be a set of numbers, such as daily measurements of the temperature in a particular location or a description of side effects in an animal that has been given an experimental drug. Scientists typically use computers to arrange data in ways that make the information easier to understand and analyse. Data may be arranged into a diagram such as a graph that shows how one quantity (body temperature, for instance) varies in relation to another quantity (days since starting a drug treatment). A scientist flying in a helicopter may collect information about the location of a migrating herd of elephants in Africa during different seasons of a year. The data collected may be in the form of geographic coordinates that can be plotted on a map to provide the position of the elephant herd at any given time during a year.
Scientists use mathematics to analyse the data and help them interpret their results. The types of mathematics used include statistics, which is the analysis of numerical data, and probability, which calculates the likelihood that any particular event will occur.
Once an experiment has been carried out and data collected and analysed, scientists look for whatever pattern their results produce and try to formulate a hypothesis that explains all the facts observed in an experiment. In developing a hypothesis, scientists employ methods of induction to generalize from the experiment’s results to predict future outcomes, and deduction to infer new facts from experimental results.
Formulating a hypothesis may be difficult for scientists because there may not be enough information provided by a single experiment, or the experiment’s conclusion may not fit old theories. Sometimes scientists do not have any prior idea of a hypothesis before they start their investigations, but often scientists start out with a working hypothesis that will be proved or disproved by the results of the experiment. Scientific hypotheses can be useful, just as hunches and intuition can be useful in everyday life. Yet they can also be problematic because they tempt scientists, either deliberately or unconsciously, to favour data that support their ideas. Scientists generally take great care to avoid bias, but it remains an ever-present threat. Throughout the history of science, numerous researchers have fallen into this trap, either in the hope of self-advancement or because they firmly believe their ideas to be true.
If a hypothesis is borne out by repeated experiments, it becomes a theory-an explanation that seems to fit with the facts consistently. The ability to predict new facts or events is a key test of a scientific theory. In the 17th century, German astronomer Johannes Kepler proposed three theories concerning the motions of planets. Kepler’s theories of planetary orbits were confirmed when they were used to predict the future paths of the planets. On the other hand, when theories fail to provide suitable predictions, these failures may suggest new experiments and new explanations that may lead to new discoveries. For instance, in 1928 British microbiologist Frederick Griffith discovered that the genes of dead virulent bacteria could transform harmless bacteria into virulent ones. The prevailing theory at the time was that genes were made of proteins. Nevertheless, studies carried out by Canadian-born American bacteriologist Oswald Avery and colleagues in the 1930's repeatedly showed that the transforming gene was active even in bacteria from which protein was removed. The failure to prove that genes were composed of proteins spurred Avery to construct different experiments and by 1944 Avery and his colleagues had found that genes were composed of deoxyribonucleic acid (DNA), not proteins.
If other scientists do not have access to scientific results, the research might as well not have been carried out at all. Scientists need to share the results and conclusions of their work so that other scientists can debate the implications of the work and use it to spur new research. Scientists communicate their results with other scientists by publishing them in science journals and by networking with other scientists to discuss findings and debate issues.
In science, publication follows a formal procedure that has set rules of its own. Scientists describe research in a scientific paper, which explains the methods used, the data collected, and the conclusions that can be drawn. In theory, the paper should be detailed enough to enable any other scientist to repeat the research so that the findings can be independently checked.
Scientific papers usually begin with a brief summary, or abstract, that describes the findings that follow. Abstracts enable scientists to consult papers quickly, without having to read them in full. At the end of most papers is a list of citations-bibliographic references that acknowledge earlier work that has been drawn on in the course of the research. Citations enable readers to work backwards through a chain of research advancements to verify that each step is soundly based.
Scientists typically submit their papers to the editorial board of a journal specializing in a particular field of research. Before the paper is accepted for publication, the editorial board sends it out for peer review. During this procedure a panel of experts, or referees, assesses the paper, judging whether or not the research has been carried out in a fully scientific manner. If the referees are satisfied, publication goes ahead. If they have reservations, some of the research may have to be repeated, but if they identify serious flaws, the entire paper may be rejected for publication.
The peer-review process plays a critical role because it ensures high standards of scientific method. However, it can be a contentious area, as it allows subjective views to become involved. Because scientists are human, they cannot avoid developing personal opinions about the value of each other’s work. Furthermore, because referees tend to be senior figures, they may be less than welcoming to new or unorthodox ideas.
Once a paper has been accepted and published, it becomes part of the vast and ever-expanding body of scientific knowledge. In the early days of science, new research was always published in printed form, but today scientific information spreads by many different means. Most major journals are now available via the Internet (a network of linked computers), which makes them quickly accessible to scientists all over the world.
When new research is published, it often acts as a springboard for further work. Its impact can then be gauged by seeing how often the published research appears as a cited work. Major scientific breakthroughs are cited thousands of times a year, but at the other extreme, obscure pieces of research may be cited rarely or not at all. However, citation is not always a reliable guide to the value of scientific work. Sometimes a piece of research will go largely unnoticed, only to be rediscovered in subsequent years. Such was the case for the work on genes done by American geneticist Barbara McClintock during the 1940's. McClintock discovered a new phenomenon in corn cells known as transposable genes, sometimes referred to as jumping genes. McClintock observed that a gene could move from one chromosome to another, where it would break the second chromosome at a particular site, insert itself there, and influence the function of an adjacent gene. Her work was largely ignored until the 1960's when scientists found that transposable genes were a primary means for transferring genetic material in bacteria and more complex organisms. McClintock was awarded the 1983 Nobel Prize in physiology or medicine for her work in transposable genes, more than 35 years after doing the research.
In addition to publications, scientists form associations with other scientists from particular fields. Many scientific organizations arrange conferences that bring together scientists to share new ideas. At these conferences, scientists present research papers and discuss their implications. In addition, science organizations promote the work of their members by publishing newsletters and Web sites; networking with journalists at newspapers, magazines, and television stations to help them understand new findings; and lobbying lawmakers to promote government funding for research.
The oldest surviving science organization is the Accademia dei Lincei, in Italy, which was established in 1603. The same century also saw the inauguration of the Royal Society of London, founded in 1662, and the Académie des Sciences de Paris, founded in 1666. American scientific societies date back to the 18th century, when American scientist and diplomat Benjamin Franklin founded a philosophical club in 1727. In 1743 this organization became the American Philosophical Society, which still exists today.
In the United States, the American Association for the Advancement of Science (AAAS) plays a key role in fostering the public understanding of science and in promoting scientific research. Founded in 1848, it has nearly 300 affiliated organizations, many of which originally developed from AAAS special-interest groups.
Since the late 19th century, communication among scientists has also been improved by international organizations, such as the International Bureau of Weights and Measures, founded in 1875, the International Research Council, founded in 1919, and the World Health Organization, founded in 1948. Other organizations act as international forums for research in particular fields. For example, the Intergovernmental Panel on Climate Change (IPCC), established in 1988, assesses research on how climate change occurs, and what effects change is likely to have on humans and their environment.
Classifying sciences involves arbitrary decisions because the universe is not easily split into separate compartments. This article divides science into five major branches: mathematics, physical sciences, earth sciences, life sciences, and social sciences. A sixth branch, technology, draws on discoveries from all areas of science and puts them to practical use. Each of these branches itself consists of numerous subdivisions. Many of these subdivisions, such as astrophysics or biotechnology, combine overlapping disciplines, creating yet more areas of research. For additional information on individual sciences, refer to separate articles highlighted in the text.
The mathematical sciences investigate the relationships between things that can be measured or quantified in either a real or abstract form. Pure mathematics differs from other sciences because it deals solely with logic, rather than with nature's underlying laws. However, because it can be used to solve so many scientific problems, mathematics is usually considered to be a science itself.
Central to mathematics is arithmetic, the use of numbers for calculation. In arithmetic, mathematicians combine specific numbers to produce a result. A separate branch of mathematics, called algebra, works in a similar way, but uses general expressions that apply to numbers as a whole. For example, if there are three separate items on a restaurant bill, simple arithmetic produces the total amount to be paid. Yet the total can also be calculated by using an algebraic formula. A powerful and flexible tool, algebra enables mathematicians to solve highly complex problems in every branch of science.
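A tiny sketch of the restaurant example, with illustrative prices: arithmetic adds the three specific amounts, while the algebraic formula expresses the same total for any bill:

# Arithmetic: combine specific numbers.
print(4.50 + 12.25 + 3.75)  # 20.5

# Algebra: a general expression that works for any list of item prices.
def bill_total(prices):
    return sum(prices)

print(bill_total([4.50, 12.25, 3.75]))  # 20.5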
Geometry investigates objects and the spaces around them. In its simplest form, it deals with objects in two or three dimensions, such as lines, circles, cubes, and spheres. Geometry can be extended to cover abstractions, including objects in many dimensions. Although we cannot perceive these extra dimensions ourselves, the logic of geometry still holds.
In geometry, working out the exact area of a rectangle or the gradient (slope) of a line is easy, but there are some problems that geometry cannot solve by conventional means. For example, geometry cannot calculate the exact gradient at a point on a curve, or the area that the curve bounds. Scientists find that calculating quantities like this helps them understand physical events, such as the speed of a rocket at any particular moment during its acceleration.
To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus helped develop were the determination of Newton’s laws of motion and the theory of electromagnetism.
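A short numerical sketch of the two problems mentioned above, the gradient at a point on a curve and the area the curve bounds (the curve y = x² is an illustrative choice):

def gradient_at(f, x, step=1e-6):
    # Approximate the gradient of the curve y = f(x) at a single point.
    return (f(x + step) - f(x - step)) / (2 * step)

def area_under(f, a, b, steps=100_000):
    # Approximate the area bounded by the curve between a and b (rectangle sum).
    width = (b - a) / steps
    return sum(f(a + (i + 0.5) * width) for i in range(steps)) * width

curve = lambda x: x ** 2
print(gradient_at(curve, 3.0))      # ~6.0 (calculus gives exactly 2x = 6)
print(area_under(curve, 0.0, 3.0))  # ~9.0 (calculus gives exactly x^3/3 = 9)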
The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.
Other branches of physics focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and the energy carried in electromagnetic waves. These include radio waves, light rays, and X rays-forms of energy that are closely related and that all obey the same set of rules.
Chemistry is the study of the composition of matter and the way different substances interact-subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to produce by synthesizing from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with less harmful side effects.
The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions by which they build themselves up. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology-one of the fastest-growing sciences today.
Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.
The earth sciences examine the structure and composition of our planet, and the physical processes that have helped to shape it. Geology focuses on the structure of Earth, while geography is the study of everything on the planet's surface, including the physical changes that humans have brought about through, for example, farming, mining, or deforestation. Scientists in the field of geomorphology study Earth's present landforms, while mineralogists investigate the minerals in Earth's crust and the way they formed.
Water dominates Earth's surface, making it an important subject for scientific research. Oceanographers carry out research in the oceans, while scientists working in the field of hydrology investigate water resources on land, a subject of vital interest in areas prone to drought. Glaciologists study Earth's icecaps and mountain glaciers, and the effects that ice has when it forms, melts, or moves. In atmospheric science, meteorology deals with day-to-day changes in weather, while climatology investigates changes in weather patterns over the longer term.
When living things die their remains are sometimes preserved, creating a rich store of scientific information. Palaeontology is the study of plant and animal remains that have been preserved in sedimentary rock, often millions of years ago. Paleontologists study things long dead and their findings shed light on the history of evolution and on the origin and development of humans. A related science, called palynology, is the study of fossilized spores and pollen grains. Scientists study these tiny structures to learn the types of plants that grew in certain areas during Earth’s history, which also helps identify what Earth’s climates were like in the past.
The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of the microscopic organisms, such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.
Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.
While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.
Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life-the fact that most living things maintain a steady internal state even when the environment around them constantly changes.
Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.
As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.
The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.
Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share and those that are the products of local culture, learned and handed on from generation to generation.
The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well.
In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. Overall, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine’ was any kind of machine.
Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water’s surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.
In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although still in its infancy, many scientists believe that biotechnology will play a major role in many fields, including food production, waste disposal, and medicine.
Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.
During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. However, with the invention of writing about 6,000 years ago, a new and much more flexible system of recording knowledge appeared.
The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.
Because clay is durable, many of these ancient tablets still survive. They show that, when writing first appeared, the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.
Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus, or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians-a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.
For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. It was in ancient Greece, however, often recognized as the birthplace of Western science, that a new kind of scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.
Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.
Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit-not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not be, after all, the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter is made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.
As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens (the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle), students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.
In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within 1 percent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.
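Eratosthenes' method can be illustrated with the figures traditionally attributed to him (the precise values he used are uncertain). At noon on midsummer's day the Sun's elevation at Alexandria differed from that at Syene by about 7.2 degrees, or one-fiftieth of a full circle, and the two cities were reckoned to lie about 5,000 stadia apart along a north-south line, giving
circumference ≈ (360° / 7.2°) × 5,000 stadia = 50 × 5,000 = 250,000 stadia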
By the 1st century BC, Roman power was growing and Greek influence had begun to wane. Under Roman rule, the geographer and astronomer Ptolemy, working in Egypt, charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, science advanced little in the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.
For more than nine centuries, from about AD 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300s, but elsewhere, particularly in China and the Arab world, much more significant progress in the sciences was made.
Chinese science developed in isolation from Europe, and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of pi to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about 940.
The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwarizmi introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in 30. Al-Khwarizmi also wrote on algebra (the word itself derives from the Arabic al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.
In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used: alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous of them, the physicist Alhazen, who worked in Egypt, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that eyes work when light rays enter the eye from outside.
In Europe, historians often attribute the rebirth of science to a political event: the capture of Constantinople (now Istanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. Through the invention of the movable-type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.
The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. Yet in 1543 two books were published that had a profound impact on scientific progress. One was De Humani Corporis Fabrica (On the Structure of the Human Body, 7 volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen, more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.
The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as Ptolemy had taught in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows orbits around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600s, the church placed the book on a list of forbidden works, where it remained for more than two centuries. Despite this ban and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus remained a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.
In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.
These observations of Venus helped to convince Galileo that Copernicus's Sun-centred view of the universe was correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (a tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.
In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, the mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.
Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.
Arguably, the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth's tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.
Newton's work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began to actively apply rational thought, careful observation, and experimentation to solve a variety of problems.
Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. These advances also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.
By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith’s work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.
With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not mean that discoveries became narrower in scope, however: from the 19th century onward, research began to uncover principles that unite the universe as a whole.
In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions, a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton's discoveries about atoms and their behaviour to draw up his periodic table of the elements.
Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.
In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened up an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.
In physics, the 19th century is remembered chiefly for research into electricity and magnetism, which was pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1831 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment and others he performed led to the development of electric motors and generators. While Faraday's genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's famous equations, devised in 1864, use mathematics to explain the interactions between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well. With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X rays by German physicist Wilhelm Roentgen in 1895, Maxwell's ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.
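In modern vector notation (not the form in which Maxwell originally wrote them), the equations can be summarized as
∇·E = ρ/ε0,  ∇·B = 0,  ∇×E = −∂B/∂t,  ∇×B = μ0J + μ0ε0 ∂E/∂t
where E and B are the electric and magnetic fields, ρ and J the charge and current densities, and ε0 and μ0 the electric and magnetic constants. In empty space these equations combine to describe a wave travelling at speed 1/√(ε0μ0), which matches the measured speed of light; this is how Maxwell recognized light as an electromagnetic wave.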
As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.
In the earth sciences, the 19th century was a time of controversy, with scientists debating Earth's age. Estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus’s odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own solar system. He did this with the Leviathan, a 183-cm. (72-in.) reflecting telescope, built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.
In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880's Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur’s vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.
Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. However, the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that still has not subsided. Particularly controversial was Darwin’s theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin’s ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin’s ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.
In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.
At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940s American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. These experiments showed that DNA is the chemical that makes up genes and is thus the key to heredity.
After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.
At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine's chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970's, and in the United States the number of polio cases dropped from 38,000 in the 1950's to fewer than 10 a year by the 21st century.
By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. By the 1980s, however, the medical community's confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacterial strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause hemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.
In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of normal or genetically altered genes into a patient's cells replaces nonfunctional or missing genes.
Improved drugs and new tools have made routine many surgical operations that were once considered impossible. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fibre-optic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as telemedicine, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.
In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind.’ In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.
The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.
In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In 1926 Scottish engineer John Logie Baird demonstrated the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920's and 1930's American electronic engineer Vladimir Kosma Zworykin significantly improved the television's picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.
In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller and far less expensive than triodes, require less power to operate, and are considerably more reliable. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.
During the 1950's and early 1960's minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. However, the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.
Further miniaturization led in 1971 to the first microprocessor, a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today's personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950's. Once used only by large businesses, computers are now used by professionals, small retailers, and students to perform a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to connect to worldwide communications networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.
During the early 1950's public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth’s near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.
When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960s NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960s and 1970s, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth's solar system.
In the 1970's through 1990's, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.
Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known, an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. Nevertheless, while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.
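Heisenberg's principle can be put in quantitative form (the standard modern statement, not spelled out above): the uncertainty Δx in a particle's position and the uncertainty Δp in its momentum always satisfy
Δx × Δp ≥ h / 4π
where h is the Planck constant. Because h is so small, this limit matters only on atomic scales, which is why the uncertainty is invisible in everyday life.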
In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of fission as both an energy source and a weapon.
These fission studies, coupled with the development of particle accelerators in the 1950's, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Scientists now know that atoms, far from being indivisible, are built from 12 fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.
Advances in particle physics have been closely linked to progress in cosmology. From the 1920's onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between 10 and 20 billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.
Particle accelerators, in physics, are devices used to accelerate charged elementary particles or ions to high energies. Particle accelerators today are some of the largest and most expensive instruments used by physicists. They all have the same three basic parts: a source of elementary particles or ions, a tube pumped to a partial vacuum in which the particles can travel freely, and some means of speeding up the particles.
Charged particles can be accelerated by an electrostatic field. For example, by placing electrodes with a large potential difference at each end of an evacuated tube, British scientists John D. Cockcroft and Ernest Thomas Sinton Walton were able to accelerate protons to 250,000 eV. Another electrostatic accelerator is the Van de Graaff accelerator, which was developed in the early 1930's by the American physicist Robert Jemison Van de Graaff. This accelerator uses the same principles as the Van de Graaff generator. The Van de Graaff accelerator builds up a potential between two electrodes by transporting charges on a moving belt. Modern Van de Graaff accelerators can accelerate particles to energies as high as 15 MeV (15 million electron volts).
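The energies quoted here follow directly from the accelerating voltage. A particle carrying charge q that falls through a potential difference V gains kinetic energy
E = qV
so a proton (charge e) accelerated through 250,000 volts acquires 250,000 eV, and a machine that can sustain 15 million volts can give a singly charged particle 15 MeV.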
Another machine, first conceived in the late 1920's, is the linear accelerator, or linac, which uses alternating voltages of high magnitude to push particles along in a straight line. Particles pass through a line of hollow metal tubes enclosed in an evacuated cylinder. An alternating voltage is timed so that a particle is pushed forward each time it goes through a gap between two of the metal tubes. Theoretically, a linac of any energy can be built. The largest linac in the world, at Stanford University, is 3.2 km. (2 mi.) long. It is capable of accelerating electrons to an energy of 50 GeV (50 billion, or giga, electron volts). Stanford's linac is designed to collide two beams of particles accelerated on different tracks of the accelerator.
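The figures quoted for the Stanford machine give a sense of the accelerating gradient involved: roughly
50 GeV / 3.2 km ≈ 50,000 MeV / 3,200 m ≈ 16 MeV per metre
so, on average, each metre of the linac adds about as much energy as an entire Van de Graaff accelerator.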
The American physicist Ernest O. Lawrence won the 1939 Nobel Prize in physics for a breakthrough in accelerator design in the early 1930's. He developed the cyclotron, the first circular accelerator. A cyclotron is to some extent like a linac wrapped into a tight spiral. Instead of many tubes, the machine has only two hollow vacuum chambers, called dees, shaped like the capital letter D and placed back to back. A magnetic field, produced by a powerful electromagnet, keeps the particles moving in a circle. Each time the charged particles pass through the gap between the dees, they are accelerated. As the particles gain energy, they spiral out toward the edge of the accelerator until they gain enough energy to exit the accelerator. The world's most powerful cyclotron, the K1200, began operating in 1988 at the National Superconducting Cyclotron Laboratory at Michigan State University. The machine is capable of accelerating nuclei to an energy approaching 8 GeV.
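The cyclotron works because, at speeds well below the speed of light, the time a particle takes to complete each half-circle does not depend on how fast it is going. A particle of charge q and mass m in a magnetic field B circles at the fixed frequency
f = qB / (2πm)
so the alternating voltage applied across the gap can be set to this one frequency and will stay in step with the particles as they spiral outward.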
When nuclear particles in a cyclotron gain an energy of 20 MeV or more, they become appreciably more massive, as predicted by the theory of relativity. This tends to slow them and throws the acceleration pulses at the gaps between the dees out of phase. A solution to this problem was suggested in 1945 by the Soviet physicist Vladimir I. Veksler and the American physicist Edwin M. McMillan. The solution, the synchrocyclotron, is sometimes called the frequency-modulated cyclotron. In this instrument, the oscillator (radio-frequency generator) that accelerates the particles around the dees is automatically adjusted to stay in step with the accelerated particles; as the particles gain mass, the frequency of accelerations is lowered slightly to keep in step with them. As the maximum energy of a synchrocyclotron increases, so must its size, for the particles must have more space in which to spiral. The largest synchrocyclotron is the 600-cm. (236-in.) phasotron at the Dubna Joint Institute for Nuclear Research in Russia; it accelerates protons to more than 700 MeV and has magnets weighing 6984 metric tons (7200 tons).
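The mass increase mentioned here follows from special relativity: a particle with kinetic energy T and rest energy mc² becomes heavier than its rest mass by the factor
γ = 1 + T / (mc²)
For a proton, whose rest energy is about 938 MeV, a kinetic energy of 20 MeV gives γ ≈ 1.02, an increase of about 2 percent, already enough to pull the orbits out of step with a fixed-frequency oscillator.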
When electrons are accelerated, they undergo a large increase in mass at a low energy. At 1 MeV energy, an electron weighs two and one-half times as much as an electron at rest. Synchrocyclotrons cannot be adapted to make allowance for such large increases in mass. Therefore, another type of cyclic accelerator, the betatron, is employed to accelerate electrons. The betatron consists of a doughnut-shaped evacuated chamber placed between the poles of an electromagnet. The electrons are kept in a circular path by a magnetic field called a guide field. By applying an alternating current to the electromagnet, the electromotive force induced by the changing magnetic flux through the circular orbit accelerates the electrons. During operation, both the guide field and the magnetic flux are varied to keep the radius of the orbit of the electrons constant.
The synchrotron is the most recent and most powerful member of the accelerator family. A synchrotron consists of a tube in the shape of a large ring through which the particles travel; the tube is surrounded by magnets that keep the particles moving through the centre of the tube. The particles enter the tube after having already been accelerated to several million electron volts. Particles are accelerated at one or more points on the ring each time the particles make a complete circle around the accelerator. To keep the particles in a rigid orbit, the strengths of the magnets in the ring are increased as the particles gain energy. In a few seconds, the particles reach energies greater than 1 GeV and are ejected, either directly into experiments or toward targets that produce a variety of elementary particles when struck by the accelerated particles. The synchrotron principle can be applied to either protons or electrons, although most of the large machines are proton-synchrotrons.
The first accelerator to exceed the 1 GeV mark was the cosmotron, a proton-synchrotron at Brookhaven National Laboratory, in Brookhaven, New York. The cosmotron was operated at 2.3 GeV in 1952 and later increased to 3 GeV. In the mid-1960's, two operating synchrotrons were regularly accelerating protons to energies of about 30 GeV. These were the Alternating Gradient Synchrotron at Brookhaven National Laboratory, and a similar machine near Geneva, Switzerland, operated by CERN (also known as the European Organization for Nuclear Research). By the early 1980s, the two largest proton-synchrotrons were a 500-GeV device at CERN and a similar one at the Fermi National Accelerator Laboratory (Fermilab) near Batavia, Illinois. The capacity of the latter, called the Tevatron, was increased to a potential 1 TeV (trillion, or tera, eV) in 1983 by installing superconducting magnets, making it the most powerful accelerator in the world. In 1989, CERN began operating the Large Electron-Positron Collider (LEP), a 27-km. (16.7-mi.) ring that can accelerate electrons and positrons to an energy of 50 GeV.
A storage ring collider accelerator is a synchrotron that produces more energetic collisions between particles than a conventional synchrotron, which slams accelerated particles into a stationary target. A storage ring collider accelerates two sets of particles that rotate in opposite directions in the ring, then collides the two sets of particles. CERN's Large Electron-Positron Collider is a storage ring collider. In 1987, Fermilab converted the Tevatron into a storage ring collider and installed a three-story-high detector that observed and measured the products of the head-on particle collisions.
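The advantage of colliding beams can be made quantitative with standard relativistic kinematics (not spelled out in the text above). Two beams of energy E meeting head-on make their full combined energy available:
E(collision) = 2E
whereas a beam of energy E striking a stationary target particle of rest energy mc² yields only about
E(collision) ≈ √(2Emc²)   (for E much larger than mc²)
For 1-TeV protons, for example, a fixed proton target gives roughly 43 GeV of useful collision energy, while two 1-TeV beams colliding head-on give 2 TeV.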
As powerful as today's storage ring colliders are, physicists need even more powerful devices to test today's theories. Unfortunately, building larger rings is extremely expensive. CERN is considering building the Large Hadron Collider (LHC) in the existing 27-km. (16.7-mi.) tunnel that currently houses the Large Electron-Positron Collider. In 1988, the United States began planning for the construction of the Superconducting Super Collider (SSC) near Waxahachie, Texas. The SSC was to be an enormous storage ring collider accelerator 87 km. (54 mi.) long. However, after about one-fifth of the tunnel had been completed, the Congress of the United States voted to cancel the project in October 1993, as a result of the accelerator's projected cost of more than $10 billion.
Accelerators are used to explore atomic nuclei, thereby allowing nuclear scientists to identify new elements and to explain phenomena that affect the entire nucleus. Machines exceeding 1 GeV are used to study the fundamental particles that compose the nucleus. Several hundred of these particles have been identified. High-energy physicists hope to discover rules or principles that will permit an orderly arrangement of the hundreds of subnuclear particles. Such an arrangement would be as useful to nuclear science as the periodic table of the chemical elements is to chemistry. Fermilab's accelerator and collider detector permit scientists to study violent particle collisions that mimic the state of the universe when it was just microseconds old. Continued study of their findings should increase scientific understanding of the makeup of the universe.
Particle detectors are instruments used to detect and study fundamental nuclear particles. These detectors range in complexity from the well-known portable Geiger counter to room-sized spark and bubble chambers.
One of the first detectors to be used in nuclear physics was the ionization chamber, which consists essentially of a closed vessel containing a gas and equipped with two electrodes at different electrical potentials. The electrodes, depending on the type of instrument, may consist of parallel plates or coaxial cylinders, or the walls of the chamber may act as one electrode and a wire or rod inside the chamber act as the other. When ionizing particles of radiation enter the chamber they ionize the gas between the electrodes. The ions that are thus produced migrate to the electrodes of opposite sign (negatively charged ions move toward the positive electrode, and vice versa), creating a current that may be amplified and measured directly with an electrometer (an electroscope equipped with a scale) or amplified and recorded by means of electronic circuits.
Ionization chambers adapted to detect individual ionizing particles of radiation are called counters. The Geiger-Müller counter is one of the most versatile and widely used instruments of this type. It was developed by the German physicist Hans Geiger from an instrument first devised by Geiger and the British physicist Ernest Rutherford; it was improved in 1928 by Geiger and by the German American physicist Walther Müller. The counting tube is filled with a gas or a mixture of gases at low pressure, the electrodes being the thin metal wall of the tube and a fine wire, usually made of tungsten, stretched lengthwise along the axis of the tube. A strong electric field maintained between the electrodes accelerates the ions; these then collide with atoms of the gas, detaching electrons and thus producing more ions. When the voltage is raised sufficiently, the rapidly increasing current produced by a single particle sets off a discharge throughout the counter. The pulse caused by each particle is amplified electronically and then actuates a loudspeaker or a mechanical or electronic counting device.
Detectors that enable researchers to observe the tracks that particles leave behind are called track detectors. Spark and bubble chambers are track detectors, as are the cloud chamber and nuclear emulsions. Nuclear emulsions resemble photographic emulsions but are thicker and not as sensitive to light. A charged particle passing through the emulsion ionizes silver grains along its track. These grains become black when the emulsion is developed and can be studied with a microscope.
The fundamental principle of the cloud chamber was discovered by the British physicist C. T. R. Wilson in 1896, although an actual instrument was not constructed until 1911. The cloud chamber consists of a vessel several centimetres or more in diameter, with a glass window on one side and a movable piston on the other. The piston can be dropped rapidly to expand the volume of the chamber. The chamber is usually filled with dust-free air saturated with water vapour. Dropping the piston causes the gas to expand rapidly and causes its temperature to fall. The air is now supersaturated with water vapour, but the excess vapour cannot condense unless ions are present. Charged nuclear or atomic particles produce such ions, and any such particles passing through the chamber leave behind them a trail of ionized particles upon which the excess water vapour will condense, thus making visible the course of the charged particle. These tracks can be photographed and the photographs then analysed to provide information on the characteristics of the particles.
Because the paths of electrically charged particles are bent or deflected by a magnetic field, and the amount of deflection depends on the energy of the particle, a cloud chamber is often operated within a magnetic field. The tracks of negatively and positively charged particles will curve in opposite directions. By measuring the radius of curvature of each track, physicists can determine the particle's momentum and hence its velocity. Heavy nuclei such as alpha particles form thick and dense tracks, protons form tracks of medium thickness, and electrons form thin and irregular tracks. In a later refinement of Wilson's design, called a diffusion cloud chamber, a permanent layer of supersaturated vapour is formed between warm and cold regions. The layer of supersaturated vapour is continuously sensitive to the passage of particles, and the diffusion cloud chamber does not require the expansion of a piston for its operation. Although the cloud chamber has now been supplanted almost entirely by the bubble chamber and the spark chamber, it was used in making many important discoveries in nuclear physics.
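The relation behind this measurement is simple, assuming a uniform field perpendicular to the track: a particle of charge q and momentum p moving in a magnetic field B follows a circle of radius r given by
p = qBr
so measuring r with B known gives the momentum directly, and the direction in which the track curves gives the sign of the charge.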
The bubble chamber, invented in 1952 by the American physicist Donald Glaser, is similar in operation to the cloud chamber. In a bubble chamber a liquid is momentarily superheated to a temperature just above its boiling point. For an instant the liquid will not boil unless some impurity or disturbance is introduced. High-energy particles provide such a disturbance. Tiny bubbles form along the tracks as these particles pass through the liquid. If a photograph is taken just after the particles have crossed the chamber, these bubbles will make visible the paths of the particles. As with the cloud chamber, a bubble chamber placed between the poles of a magnet can be used to measure the energies of the particles. Many bubble chambers are equipped with superconducting magnets instead of conventional magnets. Bubble chambers filled with liquid hydrogen allow the study of interactions between the accelerated particles and the hydrogen nuclei.
In a spark chamber, incoming high-energy particles ionize the air or a gas between plates and wire grids that are kept alternately positively and negatively charged. Sparks jump along the paths of ionization and can be photographed to show particle tracks. In some spark-chamber installations, information on particle tracks is fed directly into electronic computer circuits without the necessity of photography. A spark chamber can be operated quickly and selectively. The instrument can be set to record particle tracks only when a particle of the type that the researchers want to study is produced in a nuclear reaction. This advantage is important in studies of the rarer particles; spark-chamber pictures, however, lack the resolution and detail of bubble-chamber pictures.
The scintillation counter exploits the fact that charged particles moving at high speed through certain transparent solids and liquids, known as scintillating materials, produce flashes of visible light through ionization. The gases argon, krypton, and xenon produce ultraviolet light, and hence are used in scintillation counters. A primitive scintillation device, known as the spinthariscope, was invented in the early 1900s and was of considerable importance in the development of nuclear physics. The spinthariscope required, however, the counting of the scintillations by eye. Because of the uncertainties of this method, physicists turned to other detectors, including the Geiger-Müller counter. The scintillation method was revived in 1947 by placing the scintillating material in front of a photomultiplier tube, a type of photoelectric cell. The light flashes are converted into electrical pulses that can be amplified and recorded electronically.
Various organic and inorganic substances such as plastic, zinc sulfide, sodium iodide, and anthracene are used as scintillating materials. Certain substances react more favourably to specific types of radiation than others, making possible highly diversified instruments. The scintillation counter is superior to all other radiation-detecting devices in a number of fields of current research. It has replaced the Geiger-Müller counter in the detection of biological tracers and as a surveying instrument in prospecting for radioactive ores. It is also used in nuclear research, notably in the investigation of such particles as the antiproton, the meson, and the neutrino. One such counter, the Crystal Ball, has been in use since 1979 for advanced particle research, first at the Stanford Linear Accelerator Center and, since 1982, at the German Electron Synchrotron Laboratory (DESY) in Hamburg, Germany. The Crystal Ball is a hollow crystal sphere, about 2.1 m. (7 ft.) wide, that is surrounded by 730 sodium iodide crystals.
Many other types of interactions between matter and elementary particles are used in detectors. Thus in semiconductor detectors, electron-hole pairs that elementary particles produce in a semiconductor junction momentarily increase the electric conduction across the junction. The Cherenkov detector, on the other hand, makes use of the effect discovered by the Russian physicist Pavel Alekseyevich Cherenkov in 1934: a particle emits light when it passes through a nonconducting medium at a velocity higher than the velocity of light in that medium (the velocity of light in glass, for example, is lower than the velocity of light in vacuum). In Cherenkov detectors, materials such as glass, plastic, water, or carbon dioxide serve as the medium in which the light flashes are produced. As in scintillation counters, the light flashes are detected with photomultiplier tubes.
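The Cherenkov condition can be stated compactly (standard optics of the effect, assuming a medium of refractive index n). Light is emitted only when the particle's speed v exceeds the speed of light in the medium, c/n, and the light forms a cone around the particle's direction of motion at an angle θ given by
cos θ = c / (nv)
Measuring the cone angle therefore measures the particle's speed, which is one reason Cherenkov detectors are widely used to identify particles.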
Neutral particles such as neutrons or neutrinos can be detected by nuclear reactions that occur when they collide with nuclei of certain atoms. Slow neutrons produce easily detectable alpha particles when they collide with boron nuclei in boron trifluoride. Neutrinos, which barely interact with matter, are detected in huge tanks containing perchloroethylene (C2Cl4, a dry-cleaning fluid). The neutrinos that collide with chlorine nuclei produce radioactive argon nuclei. The perchloroethylene tank is flushed at regular intervals, and the newly formed argon atoms, present in minute amounts, are counted. This type of neutrino detector, placed deep underground to shield against cosmic radiation, is currently used to measure the neutrino flux from the sun. Neutrino detectors may also take the form of scintillation counters, the tank in this case being filled with an organic liquid that emits light flashes when traversed by electrically charged particles produced by the interaction of neutrinos with the liquid's molecules.
The detectors now being developed for use with the storage rings and colliding particle beams of the most recent generation of accelerators are known as time-projection chambers. They can measure three-dimensionally the tracks produced by particles from colliding beams, with supplementary detectors to record other particles resulting from the high-energy collisions. The Fermi National Accelerator Laboratory's CDF (Collider Detector at Fermilab) is used with its colliding-beam accelerator to study head-on particle collisions. CDF's three different systems can capture or account for nearly all of the sub-nuclear fragments released in such violent collisions.
High-energy particle physicists are using particle accelerators measuring 8 km. (5 mi.) across to study something billions of times too small to see. Why? To find out what everything is made of and where it comes from. These physicists are constructing and testing new theories about objects called superstrings. Superstrings may explain the nature of space and time and of everything in them, from the light you are using to read these words to black holes so dense that they can capture light forever. Possibly the smallest objects allowed by the laws of physics, superstrings may tell us about the largest event of all time: the big bang, and the creation of the universe!
These are exciting ideas, still strange to most people. For the past 100 years physicists have descended to deeper and deeper levels of structure, into the heart of matter and energy and of existence itself. Read on to follow their progress.
The world around us, full of books, computers, mountains, lakes, and people, is made by rearranging more than 100 chemical elements. Oxygen, hydrogen, carbon, and nitrogen are elements especially important to living things; silicon is especially important to computer chips.
The smallest recognizable form in which a chemical element occurs is the atom, and the atoms of one element are unlike the atoms of any other element. Every atom has a small core called a nucleus around which electrons swarm. Electrons, tiny particles with a negative electrical charge, determine the chemical properties of an element, that is, how it interacts with other atoms to make the things around us. Electrons also are what move through wires to make light, heat, and video games.
In 1869, before anyone knew anything about nuclei or electrons, Russian chemist Dmitry Mendeleyev grouped the elements according to their physical qualities and discovered the periodic law. He was able to predict the qualities of elements that had not yet been discovered. By the early 1900s scientists had discovered the nucleus and electrons.
Atoms stick together and form larger objects called molecules because of a force called electromagnetism. The best-known form of electromagnetism is radiation: light, radio waves, X rays, and infrared and ultraviolet radiation.
Modern physics starts with light and other forms of electromagnetic radiation. In 1900 German physicist Max Planck proposed the quantum theory, which says that light comes in units of energy called quanta. As we will explain, these units of light are waves and they are also particles. Light is simultaneously energy and matter. So is everything else.
It was Albert Einstein who first proposed (in 1905) that Planck's units of light can be considered particles. He named these particles photons. In the same year, Einstein published what is known as the special theory of relativity. According to this theory, the speed of light is the fastest that anything in the universe can go, and all forms of electromagnetic radiation are forms of light, moving at the same speed.
What differentiates radio waves, visible light, and X rays is their energy. This energy is directly related to the wave's length. Light waves, like ocean waves, have peaks and troughs that repeat at regular intervals, and wavelength is the distance between each pair of peaks (or troughs). The shorter the wavelength, the higher the energy.
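The relation is quantitative (the standard Planck-Einstein relation, using symbols not defined above): a photon of frequency f, and hence wavelength λ = c/f, carries energy
E = hf = hc/λ
where h is the Planck constant and c the speed of light. Halving the wavelength doubles the energy, which is why X rays are far more energetic than radio waves.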
How does this relate to our story? It turns out that the process by which electrons interact is an exchange of photons (particles of light). Therefore we can study electrons by probing them with photons.
To really understand what things are made of, we must probe them or move them around and thus learn how they work. In the case of electrons, physicists probe them with photons, the particles that carry the electromagnetic force.
While some physicists studied electrons and photons, others pondered and probed the atomic nucleus. The nucleus of each chemical element contains a distinctive number of positively charged protons and a number of uncharged neutrons that can vary slightly from atom to atom. Protons and neutrons are the source of radioactivity and of nuclear energy. In 1964 physicists suggested that protons and neutrons are made of still smaller particles they called quarks.
Probing protons and neutrons requires particles with extremely high energies. Particle accelerators are large machines for bringing particles to these high energies. These machines have to be big, because they accelerate particles by applying force many times, over long distances. Some particle accelerators are the largest machines ever constructed. This is ironic given that these are delicate scientific instruments designed to probe the shortest distances ever investigated.
The proposal and acceptance of quarks were a major step in putting together what is called the standard model of particles and forces. This unified theory describes all of the fundamental particles, from which everything is made, and how they interact. There are twelve kinds of fundamental particles: six kinds of quarks and six kinds of leptons, including the electron.
Four forces are believed to control all the interactions of these fundamental particles. They are the strong force, which holds the nucleus together; the weak force, responsible for radioactivity; the electromagnetic force, which provides electric charge and binds electrons to atomic nuclei; and gravitation, which holds us on Earth. The standard model identifies a force-carrying particle to correspond with three of these forces. The photon, for example, carries the electromagnetic force. Physicists have not yet detected a particle that carries gravitation.
Powerful mathematical techniques called gauge field theories allow physicists to describe, calculate, and predict the interactions of these particles and forces. Gauge theories combine quantum physics and special relativity into consistent equations that produce extremely accurate results. The extraordinary precision of quantum electrodynamics, for example, has filled our world with ultrareliable lasers and transistors.
The mathematical rules that come together in the standard model can explain every particle physics phenomenon that we have ever seen. Physicists can explain forces; they can explain particles. However, they cannot yet explain why forces and particles are what they are. Basic properties, such as the speed of light, must be taken from measurements. Physicists cannot yet provide a satisfactory description of gravity.
The basic behaviour of gravity was taught to us by English physicist Sir Isaac Newton. After developing the special theory of relativity, Albert Einstein in 1915 clarified and extended Newton’s explanation with his own description of gravity, known as general relativity. Not even Einstein, however, could bring general relativity and quantum physics together into a single unified theory. Since everything else is governed by quantum physics on small scales, what is the quantum theory of gravity? No one has yet proposed a satisfactory answer to this question. Physicists have been trying to find one for a long time.
At first, this might not seem to be an important problem. Compared with other forces, gravity is extremely weak. We are aware of its action in everyday life because its pull corresponds to mass, and Earth has a huge amount of mass and hence a big gravitational pull. Fundamental particles have tiny masses and hence a minuscule gravitational pull. So couldn’t we just ignore gravity when studying fundamental particles? The ability to ignore gravity on this scale is why we have made so much progress in particle physics over so many years without possessing a theory of quantum gravity.
There are several reasons, however, why we cannot ignore gravity forever. One reason is simply that scientists want to know the whole story. A second reason is that gravity, as Einstein taught us, is the essential physics of space and time. If this physics is not subject to the same quantum laws that any other physics is subject to, something is wrong somewhere. A third reason is that an understanding of quantum gravity is necessary to deal with some important questions in cosmology-for example, how did the universe get to be the way it is, and why did galaxies form?
Gravitation has been shown to spread in waves, and physicists theorize the existence of a corresponding particle, the graviton. The force of gravity, like everything else, has a natural quantum length. For gravity it is about 10⁻³¹ m. This is about a million billion times smaller than a proton.
We can't build an accelerator to probe that distance using today’s technology, because the relationship between size and energy shows that it would have to stretch from here to the stars. However, we know that the universe began with the big bang, when all matter and force originated. Everything we know about today follows from the period after the big bang, when the universe expanded. Everything we know indicates that in the fractions of a second following the big bang, the universe was extremely small and dense. At some earliest time, the entire universe was no larger across than the quantum length of gravity. If we are to understand the true nature of where everything comes from and how it really fits together, we must understand quantum gravity.
These questions may seem almost metaphysical. Physicists now suspect that research in this direction will answer many other questions about the standard model, such as why there are so many different fundamental particles. Other questions are more immediately practical. Our control of technology arises from our understanding of particles and forces. Answers to physicists’ questions could increase computing power or help us find new sources of energy. They will shape the 21st century as quantum physics has shaped the 20th.
Among the most promising new theories is the idea that everything is made of fundamental ‘strings,’ rather than of another layer of tiny particles. The best analogy for these minute entities is a guitar or violin string, which vibrates to produce notes of different frequencies and wavelengths. Superstring theory proposes that if we were able to look closely enough at a fundamental particle-at quantum-length distances-we would see a tiny, vibrating loop!
In this view, all the different types of fundamental particles that we find in the standard model are really just different vibrations of the same string, which can split and join in ways that change its evident nature. This is the case not only for particles of matter, such as quarks and electrons, but also for force-carrying particles, such as photons.
This is a very clever idea, since it unifies everything we have learned in a simple way. In its details, the theory is extremely complicated but very promising. For example, the superstring theory very naturally describes the graviton among its vibrations, and it also explains the quantum properties of many types of black holes. There are also signs that the quantum length of gravity is really the smallest physically possible distance. Below this scale, points in space and time are no longer connected in sequence, so distances cannot be measured or described. The very notions of space, time, and distance seem to stop making sense.
Recent discoveries have shown that the five leading versions of superstring theory are all contained within a powerful complex known as M-Theory. M-Theory says that entities mathematically resembling membranes and other extended objects may also be important. The end of the story has not yet been written, however. Physicists are still working out the details, and it will take many years to be confident that this approach is correct and comprehensive. Much remains to be learned, and surprises are guaranteed. In the quest to probe these small distances, experimentally and theoretically, our understanding of nature is forever enriched, and we approach at least a part of ultimate truth.
Elementary Particles, in physics, are particles that cannot be broken down into any other particles. The term elementary particles is also used more loosely to include some subatomic particles that are composed of other particles. Particles that cannot be broken down further are sometimes called fundamental particles to avoid confusion. These fundamental particles provide the basic units that make up all matter and energy in the universe.
Scientists and philosophers have sought to identify and study elementary particles since ancient times. Aristotle and other ancient Greek philosophers believed that all things were composed of four elementary materials: fire, water, air, and earth. People in other ancient cultures developed similar notions of basic substances. As early scientists began collecting and analysing information about the world, they showed that these materials were not fundamental but were made of other substances.
In the 1800s British physicist John Dalton was so sure he had identified the most basic objects that he called them atoms (from the Greek word for ‘indivisible’). By the early 1900s scientists were able to break apart these atoms into particles that they called the electron and the nucleus. Electrons surround the dense nucleus of an atom. In the 1930s, researchers showed that the nucleus consists of smaller particles, called the proton and the neutron. Today, scientists have evidence that the proton and neutron are themselves made up of even smaller particles, called quarks.
Scientists now believe that quarks and three other types of particles-leptons, force-carrying bosons, and the Higgs boson-are truly fundamental and cannot be split into anything smaller. In the 1960s American physicists Steven Weinberg and Sheldon Glashow and Pakistani physicist Abdus Salam developed a mathematical description of the nature and behaviour of elementary particles. Their theory, known as the standard model of particle physics, has greatly advanced understanding of the fundamental particles and forces in the universe. Yet some questions about particles remain unanswered by the standard model, and physicists continue to work toward a theory that would explain even more about particles.
Everything in the universe, from elementary particles and atoms to people, houses, and planets, can be classified into one of two categories: fermions (pronounced FUR-me-onz) or bosons (pronounced BO-zonz). The behaviour of a particle or group of particles, such as an atom or a house, determines whether it is a fermion or boson. The distinction between these two categories is not noticeable on the large scale of people or houses, but it has profound implications in the world of atoms and elementary particles. Fundamental particles are classified according to whether they are fermions or bosons. Fundamental fermions combine to form atoms and other more unusual particles, while fundamental bosons carry forces between particles and give particles mass.
In 1925 Austrian-born American physicist Wolfgang Pauli formulated a rule of physics that helped define fermions. He suggested that no two electrons can have the same properties and locations. He proposed this exclusion principle to explain why all of the electrons in atoms have different amounts of energy. In 1926 Italian-born American physicist Enrico Fermi and British physicist Paul Dirac developed equations that describe electron behaviour, providing mathematical proof of the exclusion principle. Physicists call particles that obey the exclusion principle fermions in honour of Fermi. Protons, neutrons, and the quarks that comprise them are all examples of fermions.
Some particles, such as particles of light called photons, do not obey the exclusion principle. Two or more photons can have the same characteristics. In 1925 German-born American physicist Albert Einstein and Indian physicist Satyendra Nath Bose developed a set of equations describing the behaviour of particles that do not obey the exclusion principle. Particles that obey the equations of Bose and Einstein are called bosons, in honour of Bose.
Classifying particles as either fermions or bosons is similar to classifying whole numbers as either odd or even. No number is both odd and even, yet every whole number is either odd or even. Similarly, particles are either fermions or bosons. Sums of odd and even numbers are either odd or even, depending on how many odd numbers were added. Adding two odd numbers yields an even number, but adding a third odd number makes the sum odd again. Adding any number of even numbers yields an even sum. In a similar manner, combining an even number of fermions yields a boson, while combining an odd number of fermions results in a fermion. Adding any number of bosons yields a boson.
For example, a hydrogen atom contains two fermions: an electron and a proton. Yet the atom itself is a boson because it contains an even number of fermions. According to the exclusion principle, the electron inside the hydrogen atom cannot have the same properties as another electron nearby. However, the hydrogen atom itself, as a boson, does not follow the exclusion principle. Thus, one hydrogen atom can be identical to another hydrogen atom.
A particle composed of three fermions, on the other hand, is a fermion. An atom of heavy hydrogen, called deuterium, is a hydrogen atom with a neutron added to the nucleus. A deuterium atom contains three fermions: one proton, one electron, and one neutron. Since it contains an odd number of fermions, it too is a fermion. Just like its constituent particles, the deuterium atom must obey the exclusion principle. It cannot have the same properties as another deuterium atom.
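The odd-and-even bookkeeping in the last few paragraphs reduces to a parity check on the number of fermion constituents. Here is a minimal sketch, using only the hydrogen and deuterium examples given above.

```python
def classify(fermion_count: int) -> str:
    """A composite with an odd number of fermion constituents is a fermion; even, a boson."""
    return "fermion" if fermion_count % 2 == 1 else "boson"

# Examples from the text.
print("hydrogen atom (electron + proton):           ", classify(2))  # boson
print("deuterium atom (electron + proton + neutron):", classify(3))  # fermion
```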
The differences between fermions and bosons have important implications. If electrons did not obey the exclusion principle, all electrons in an atom could have the same energy and be identical. If all of the electrons in an atom were identical, different elements would not have such different properties. For example, metals conduct electricity better than plastics do because the arrangement of the electrons in their atoms and molecules differs. If electrons were bosons, their arrangements could be identical in these atoms, and devices that rely on the conduction of electricity, such as televisions and computers, would not work. Photons, on the other hand, are bosons, so a group of photons can all have identical properties. This characteristic allows the photons to form a coherent beam of identical particles called a laser.
The most fundamental particles that make up matter fall into the fermion category. These fermions cannot be split into anything smaller. The particles that carry the forces acting on matter and antimatter are bosons called force carriers. Force carriers are also fundamental particles, so they cannot be split into anything smaller. These bosons carry the four basic forces in the universe: the electromagnetic, the gravitational, the strong (the force that holds the nuclei of atoms together), and the weak (the force that causes atoms to decay radioactively). Scientists believe another type of fundamental boson, called the Higgs boson, gives matter and antimatter mass. Scientists have yet to discover definitive proof of the existence of the Higgs boson.
Ordinary matter makes up all the objects and materials familiar to life on Earth, including people, cars, buildings, mountains, air, and clouds. Stars, planets, and other celestial bodies also contain ordinary matter. The fundamental fermions that make up matter fall into two categories: leptons and quarks. Each lepton and quark has an antiparticle partner, with the same mass but opposite charge. Leptons and quarks differ from each other in two main ways: (1) the electric charge they carry and (2) the way they interact with each other and with other particles. Scientists usually state the electric charge of a particle as a multiple of the electric charge of a proton, which is 1.602 × 10⁻¹⁹ coulombs. Leptons have electric charges of either -1 or 0 (neutral), with their antiparticles having charges of +1 or 0. Quarks have electric charges of either +2/3 or -1/3, while antiquarks have electric charges of either -2/3 or +1/3. Leptons interact weakly with one another and with other particles, while quarks interact strongly with one another.
Leptons and quarks each come in 6 varieties. Scientists divided these 12 basic types into 3 groups, called generations. Each generation consists of 2 leptons and 2 quarks. All ordinary matter consists of just the first generation of particles. The particles in the second and third generation tend to be heavier than their counterparts in the first generation. These heavier, higher-generation particles decay, or spontaneously change, into their first generation counterparts. Most of these decays occur very quickly, and the particles in the higher generations exist for an extremely short time (a millionth of a second or less). Particle physicists are still trying to understand the role of the second and third generations in nature.
Scientists divide leptons into two groups: particles that have electric charges and particles, called neutrinos, that are electrically neutral. Each of the three generations contains a charged lepton and a neutrino. The first generation of leptons consists of the electron (e-) and the electron neutrino (νe); the second generation, the muon (µ) and the muon neutrino (νµ); and the third generation, the tau (τ) and the tau neutrino (ντ).
The electron is probably the most familiar elementary particle. Electrons are about 2,000 times lighter than protons and have an electric charge of -1. They are stable, so they can exist independently (outside an atom) for an infinitely long time. All atoms contain electrons, and the behaviour of electrons in atoms distinguishes one type of atom from another. When atoms decay radioactively, they sometimes emit an electron in a process called beta decay.
Studies of beta decay led to the discovery of the electron neutrino, the first generation lepton with no electric charge. Atoms release neutrinos, along with electrons, when they undergo beta decay. Electron neutrinos might have a tiny mass, but their mass is so small that scientists have not been able to measure it or conclusively confirm that the particles have any mass at all.
Physicists discovered a particle heavier than the electron but lighter than a proton in studies of high-energy particles created in Earth’s atmosphere. This particle, called the muon (pronounced MYOO-on), is the second generation charged lepton. Muons have an electric charge of -1 and a half-life of 1.52 microseconds (a microsecond is one-millionth of a second). Unlike electrons, they do not make up everyday matter. Muons live their brief lives in the atmosphere, where heavier particles called pions decay into muons and other particles. The electrically neutral partner of the muon is the muon neutrino. Muon neutrinos, like electron neutrinos, have either a tiny mass too small to measure or no mass at all. They are released when a muon decays.
The third generation charged lepton is the tau. The tau has an electric charge of -1 and almost twice the mass of a proton. Scientists have detected taus only in laboratory experiments. The average lifetime of taus is extremely short, only 0.3 picoseconds (a picosecond is one-trillionth of a second). Scientists believe the tau has an electrically neutral partner called the tau neutrino. While scientists have never detected a tau neutrino directly, they believe they have seen the effects of tau neutrinos during experiments. Like the other neutrinos, the tau neutrino has a very small mass or no mass at all.
The fundamental particles that make up protons and neutrons are called quarks. Like leptons, quarks come in six varieties, or ‘flavours,’ divided into three generations. Unlike leptons, however, quarks never exist alone-they are always combined with other quarks. In fact, quarks cannot be isolated even with the most advanced laboratory equipment and processes. Scientists have had to determine the charges and approximate masses of quarks mathematically by studying particles that contain quarks.
Quarks are unique among all elementary particles in that they have fractional electric charges, either +2/3 or -1/3. In an observable particle, the fractional charges of the quarks in the particle add up to an integer charge for the combination.
The first generation quarks are designated up (u) and down (d); the second generation, charm (c) and strange (s); and the third generation, top (t) and bottom (b). The odd names for quarks do not describe any aspect of the particles; they merely give scientists a way to refer to a particular type of quark.
The up quark and the down quark make up protons and neutrons in atoms, as described below. The up quark has an electric charge of +2/3, and the down quark has a charge of -1/3. The second generation quarks have greater mass than those in the first generation. The charm quark has an electric charge of +2/3, and the strange quark has a charge of -1/3. The heaviest quarks are the third generation top and bottom quarks. Some scientists originally called the top and bottom quarks truth and beauty, but those names have dropped out of use. The top quark has an electric charge of +2/3, and the bottom quark has a charge of -1/3. The up quark, the charm quark, and the top quark behave similarly and are called up-type quarks. The down quark, the strange quark, and the bottom quark are called down-type quarks because they share the same electric charge.
Particles made of quarks are called hadrons (pronounced HA-dronz). Hadrons are not fundamental, since they consist of quarks, but they are commonly included in discussions of elementary particles. Two classes of hadrons can be found in nature: mesons (pronounced ME-zonz) and baryons (pronounced BARE-ee-onz).
Mesons contain a quark and an antiquark (the antiparticle partner of the quark). Since they contain two fermions, mesons are bosons. The first meson that scientists detected was the pion. Pions exist as intermediary particles in the nuclei of atoms, forming from and being absorbed by protons and neutrons. The pion comes in three varieties: a positive pion (π+), a negative pion (π-), and an electrically neutral pion (π0). The positive pion consists of an up quark and a down antiquark. The up quark has charge +2/3 and the down antiquark has charge +1/3, so the charge on the positive pion is +1. Positive pions have an average lifetime of 26 nanoseconds (a nanosecond is one-billionth of a second). The negative pion contains an up antiquark and a down quark, so the charge on the negative pion is -2/3 plus -1/3, or -1. It has the same mass and average lifetime as the positive pion. The neutral pion contains an up quark and an up antiquark, so the electric charges cancel each other. It has an average lifetime of 9 femtoseconds (a femtosecond is one-quadrillionth of a second).
Many other mesons exist. All six quarks play a part in the formation of mesons, although mesons containing heavier quarks like the top quark have very short lifetimes. Other mesons include the kaons (pronounced KAY-ons) and the D particles. Kaons (K) and Ds (D) come in several different varieties, just as pions do. All varieties of kaons and some varieties of Ds contain either a strange quark or a strange antiquark. All Ds contain either a charm quark or a charm antiquark.
Three quarks together form a baryon. A baryon contains an odd number of fermions, so it is a fermion itself. Protons, the positively charged particles in all atomic nuclei, are baryons that consist of two up quarks and a down quark. Adding the charges of two up quarks and a down quark, +2/3 plus +2/3 plus -1/3, produces a net charge of +1, the charge of the proton. Protons have never been observed to decay.
The neutrons found inside atoms are baryons as well. A neutron consists of one up quark and two down quarks. Adding these charges gives +2/3 plus -1/3 plus -1/3, for a net charge of 0, making the neutron electrically neutral. Neutrons have a greater mass than protons and an average lifetime of 930 seconds.
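The charge sums above can be checked mechanically by adding the standard fractional quark charges: +2/3 for up-type quarks and -1/3 for down-type quarks, with antiquarks reversed. A small sketch follows; the quark-content strings are just a notational convenience for this example, not standard notation.

```python
from fractions import Fraction

# Electric charges of the up and down quarks, in units of the proton charge.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quark_content: str) -> Fraction:
    """Sum the charges of the listed quarks; a leading '~' marks an antiquark."""
    total = Fraction(0)
    for token in quark_content.split():
        if token.startswith("~"):
            total -= QUARK_CHARGE[token[1]]   # antiquark: opposite charge
        else:
            total += QUARK_CHARGE[token]
    return total

print(hadron_charge("u u d"))   # proton:        +2/3 +2/3 -1/3 = 1
print(hadron_charge("u d d"))   # neutron:       +2/3 -1/3 -1/3 = 0
print(hadron_charge("u ~d"))    # positive pion: +2/3 +1/3      = 1
```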
Many other baryons exist, and many contain quarks other than the up and down flavours. For example, lambda (Λ) and sigma (Σ) particles contain strange, charm, or bottom quarks. For lambda particles, the average lifetime ranges from 200 femtoseconds to 1.2 picoseconds. The average lifetime of sigma particles ranges from 0.0007 femtoseconds to 150 picoseconds.
British physicist Paul Dirac proposed an early theory of particle interactions in 1928. His theory predicted the existence of antiparticles, which combine to form antimatter. Antiparticles have the same mass as their normal particle counterparts, but they have several opposite quantities, such as electric charge and colour charge. Colour charge determines how particles interact with one another under the strong force (the force that holds the nuclei of atoms together), just as electric charge determines how particles interact with one another under the electromagnetic force. The antiparticles of fermions are also fermions, and the antiparticles of bosons are bosons.
All fermions have antiparticles. The antiparticle of an electron is called the positron (pronounced POZ-i-tron). The antiparticle of the proton is the antiproton. The antiproton consists of antiquarks: two up antiquarks and one down antiquark. Antiquarks have the opposite electric and colour charges of their counterparts. The antiparticles of neutrinos are called antineutrinos. Both neutrinos and antineutrinos have no electric charge or colour charge, but physicists still consider them distinct from one another. Neutrinos and antineutrinos behave differently when they collide with other particles and in radioactive decay. When a particle decays, for example, an antineutrino accompanies the production of a charged lepton, and a neutrino accompanies the production of a charged antilepton. In addition, reactions that absorb neutrinos do not absorb antineutrinos, giving further evidence of the distinction between neutrinos and antineutrinos.
When a particle and its associated antiparticle collide, they annihilate, or destroy, each other, creating a tiny burst of energy. Particle-antiparticle collisions would provide a very efficient source of energy if large numbers of antiparticles could be harnessed cheaply. Physicists already make use of this energy in machines called particle accelerators. Particle accelerators increase the speed (and therefore energy) of elementary particles and make the particles collide with one another. When particles and antiparticles (such as protons and antiprotons) collide, their kinetic energy and the energy released when they annihilate each other converts to matter, creating new and unusual particles for physicists to study.
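The energy released when a particle meets its antiparticle at rest follows from E = mc² applied to both rest masses; any kinetic energy the particles carry is added on top. Below is a rough sketch for the electron–positron case, using standard constant values.

```python
# Energy released by electron-positron annihilation at rest: E = 2 * m * c^2.
M_ELECTRON = 9.109e-31   # electron (and positron) mass, kilograms
C = 2.998e8              # speed of light, metres per second
EV = 1.602e-19           # joules per electronvolt

energy_joules = 2 * M_ELECTRON * C**2
print(f"{energy_joules:.3e} J")                # about 1.64e-13 J
print(f"{energy_joules / EV / 1e6:.3f} MeV")   # about 1.022 MeV, carried away as two photons
```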
Particle-antiparticle collisions could someday fuel spacecraft, which need only a slight push to change their speed or direction in the vacuum of space. The antiparticles and particles would have to be kept away from each other until the spacecraft needed the energy of their collisions. Finely tuned magnetic fields could be used to trap the particles and keep them separate, but these magnetic fields are difficult to set up and maintain. At the end of the 20th century, technology was not advanced enough to allow spacecraft to carry the equipment and particles necessary for using particle-antiparticle collisions as fuel.
All of the known forces in our universe can be classified as one of four types: electromagnetic, strong, weak, or gravitational. These forces affect everything in the universe. The electromagnetic force binds electrons to the atoms that compose our bodies, the objects around us, the Earth, the planets, and the Moon. The strong nuclear force holds together the nuclei inside the atoms that compose matter. Reactions due to the weak nuclear force fuel the Sun, providing light and heat. Gravity holds people and objects to the ground.
Each force has a particular property associated with it, such as electric charge for the electromagnetic force. Elementary particles that do not have electric charge, such as neutrinos, are electrically neutral and are not affected by the electromagnetic force.
Mechanical forces, such as the force used to push a child on a swing, result from the electrical repulsion between electrons and are thus electromagnetic. Even though a parent pushing a child on a swing feels his or her hands touching the child, the atoms in the parent’s hands never come into contact with the atoms of the child. The electrons in the parent’s hands repel those in the child while remaining a slight distance away from them. In a similar manner, the Sun attracts Earth through gravity, without Earth ever contacting the Sun. Physicists call these forces nonlocal, because the forces appear to affect objects that are not in the same location, but at a distance from one another.
Theories about elementary particles, however, require forces to be local-that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between the two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.
Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons, since they do not obey the exclusion principle: any number of force carriers can have the same characteristics. They are also believed to be fundamental, so they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.
For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell’s work one step further. In 1925 German-British physicist Max Born, and German physicists Ernst Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.
Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,’ the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.
Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers-they are each the other’s additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.
In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.
Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in the nuclei. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.
A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle’s electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of anti-red (also called cyan), anti-blue (also called yellow), or anti-green (also called magenta). Quark types and colours are not linked; a quark of any flavour, for example, may be red, green, or blue.
All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark’s anti-colour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.
The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.
Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.
The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.
In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though they occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.
While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.
All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. Yet the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.
The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.
One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.
The weak force is carried by three vector bosons. These bosons are designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and -1), so they can feel the electromagnetic force. These two bosons are each other’s antiparticle counterparts, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass. The weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as the massless carriers of the other forces, so the weak force acts over shorter distances than the other three forces.
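The passage does not give the quantitative link between a carrier's mass and the range of its force, but the standard back-of-envelope estimate is the Yukawa relation, range ≈ ħ/(mc). Below is a hedged sketch of that estimate, using the measured W and Z masses of roughly 80.4 and 91.2 GeV/c²; the relation itself is an outside assumption, not something stated above.

```python
# Rough range of a force from its carrier's mass (Yukawa estimate): range ~ hbar / (m * c).
HBAR_C_MEV_FM = 197.3   # hbar * c in MeV * femtometres (1 fm = 1e-15 m)

def force_range_metres(carrier_mass_mev: float) -> float:
    """Approximate range of a force carried by a boson of the given mass (in MeV/c^2)."""
    return HBAR_C_MEV_FM / carrier_mass_mev * 1e-15

print(f"W boson (~80,400 MeV): {force_range_metres(80_400):.2e} m")   # about 2.5e-18 m
print(f"Z boson (~91,200 MeV): {force_range_metres(91_200):.2e} m")   # about 2.2e-18 m
# A massless carrier such as the photon gives an unlimited range by this estimate.
```

The result, a few billionths of a billionth of a metre, is far smaller than an atomic nucleus, which is why the weak force is confined to such short distances.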
When the weak force affects a particle, the particle emits one of the three weak vector bosons (W+, W-, or Z0) and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark and the neutron releases a W- boson. This change in quark type converts the neutron (two down quarks and an up quark) to a proton (one down quark and two up quarks). The W- boson released by the neutron could then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.
Physicists call their goal of an overall theory a ‘theory of everything,’ because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.
Gravitation is by far the weakest of the four forces, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.
Another unique feature of gravitation is its universality: every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun (see Gravitational Lens).
The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.
Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.
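As a rough sense of scale, the minimum collision energy needed to create a particle at rest is its rest energy, E = mc². The sketch below uses an illustrative Higgs mass of about 125 GeV/c² (the value measured in 2012, after this passage was written) purely to show the arithmetic.

```python
# Minimum energy to create a particle at rest equals its rest energy, E = m * c^2.
# The Higgs mass below is illustrative (about 125 GeV/c^2, measured in 2012).
HIGGS_MASS_GEV = 125.0
PROTON_MASS_GEV = 0.938

print(f"Rest energy required: about {HIGGS_MASS_GEV:.0f} GeV")
print(f"That is roughly {HIGGS_MASS_GEV / PROTON_MASS_GEV:.0f} times the proton's rest energy")
```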
Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.
The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.
Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.
Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.
American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with their electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.
The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions among the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.
One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While particles and their associated antiparticles are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the fermion electron should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions, such as particles called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared with known particles, and therefore require too much energy to create with current particle accelerators.
Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur only in the four familiar dimensions of space and time; some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry with string theory results in superstring theories. Superstring theories are among the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because they have not detected the additional dimensions required by string theory.
Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles-leptons, quarks, force-carrying bosons, and the Higgs boson-appear to be ‘point particles.’ A point particle is infinitely small, and it exists at a certain point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.
In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.
Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth’s atmosphere. Creating these particles requires extremely high amounts of energy.
Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles’ properties.
When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
Particle accelerators come in two basic types: linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional (non-flat-screen) television sets and computer monitors use this method to accelerate electrons.
On January 1, 2000, people around the world celebrated the arrival of a new millennium. Some observers noted that the Gregorian calendar, which most of the world uses, began in AD 1, and that the new millennium therefore truly begins in 2001. This detail failed to stem millennial festivities, but the issue shed light on the arbitrary nature of the way human beings have measured time for . . . well . . . several millennia.
Few people know that the fellow responsible for the dating of the year 2000 was a diminutive Christian monk who lived nearly 15 centuries ago. The Romans called him Dionysius Exiguus-literally, Dennis the Little. His stature, however, could not contain his colossal aspiration: to reorder time itself. The tiny monk's efforts paid off. His work helped establish the basis for the Gregorian calendar used today throughout the world.
Dennis the Little lived in Rome during the 6th century, a generation after the last emperor was deposed. The eternal city had collapsed into ruins: Its walls had been breached, its aqueducts were shattered, and its streets were eerily silent. A trained mathematician, Dennis spent his days at a complex now called the Vatican, writing church canons and thinking about time.
In the year that historians now know as 525, Pope John I asked Dennis to calculate the dates upon which future Easters would fall. Then, as now, this was a complicated task, given the formula adopted by the church some two centuries earlier: that Easter will fall on the first Sunday after the first full Moon following the spring equinox. Dennis carefully studied the positions of the Moon and the Sun and produced a chart of upcoming Easters, beginning in 532. A calendar beginning in the year 532 probably struck Dennis's contemporaries as strange. For them the year was either 1285, dated from the founding of Rome, or 248, based on a calendar that started with the first year of the reign of Emperor Diocletian.
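For readers curious what such a calculation looks like in the modern calendar, below is a sketch of the anonymous Gregorian computus (the Meeus/Jones/Butcher form), which encodes the "first Sunday after the first full Moon following the spring equinox" rule for today's calendar. It is not Dennis's 6th-century method, which worked in the Julian calendar.

```python
def gregorian_easter(year: int) -> tuple:
    """Month and day of Easter Sunday in the Gregorian calendar
    (anonymous Gregorian computus, Meeus/Jones/Butcher form)."""
    a = year % 19                        # position in the 19-year lunar (Metonic) cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # approximate age of the Moon (the epact)
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2000))   # (4, 23): April 23, 2000
print(gregorian_easter(2024))   # (3, 31): March 31, 2024
```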
Dennis approved of neither accepted date, especially not the one glorifying the reign of Diocletian, a notorious persecutor of Christians. Instead, Dennis calculated his years from the reputed birth date of Jesus Christ. Justifying his choice, Dennis wrote that he “preferred to count and denote the years from the incarnation of our Lord, in order to make the foundation of our hope better known.” Dennis's preference appeared on his new Easter charts, which began with anno Domini nostri Jesu Christi DXXXII (Latin for “in the year of our Lord Jesus Christ 532”), or AD 532.
However, Dennis got his dates wrong. Modern biblical historians believe Jesus Christ was most likely born in 4 or 5 BC, not in the year Dennis called AD 1, although no one knows for sure. The real 2,000-year anniversary of Jesus' birth was therefore probably 1996 or 1997. Dennis pegged the birth of Christ to the year AD 1, rather than AD 0, for the simple reason that Roman numerals had no zero. The mathematical concept of zero did not reach Europe until some eight centuries later. So the wee abbot started with year 1, and 2,000 years from the start of year 1 is not January 1, 2000, but January 1, 2001-a date many people find far less interesting.
These errors, however, are hardly unique in the complicated history of the Gregorian calendar, which is essentially a story of attempts, and failures, to get time right. It was not until 1949, when Communist leader Mao Zedong seized power in China, that the Gregorian calendar became the world's most widely accepted dating system. Mao ordered the changeover, believing that replacing the ancient Chinese lunar calendar with the more accurate Gregorian calendar was central to China's march toward modernity.
Mao's order completed the world conquest of a calendar that takes its name from a 16th-century pope, Gregory XIII. Gregory earned his fame by revising the calendar already modified by Dennis and first launched by Roman leader Julius Caesar in 45 BC. Caesar, in turn, borrowed his calendar from the Egyptians, who invented their calendar some 4,000 years before that. On the long road to the Gregorian calendar, fragments of many other time-measuring schemes were incorporated-from India, Sumer, Babylon, Palestine, Arabia, and pagan Europe.
Despite persistent human efforts to track the passage of time, nearly every calendar ever created has been inaccurate. One reason is that the solar year (the precise amount of time it takes the Earth to revolve once around the Sun) runs an awkward 365.242199 days-hardly an easy number to calculate without modern instruments. Another complication is the tendency of the Earth to wobble and wiggle ever so slightly in its orbit, yanked this way and that by the Moon's elliptical orbit and by the gravitational tug of the Sun. As a result, each year varies in length by a few seconds, making the exact length of any given year extraordinarily difficult to pin down.
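The practical consequence of that awkward number is drift: a calendar's average year differs slightly from the solar year, and the difference accumulates. Here is a small sketch comparing the Julian average year (365.25 days, a leap day every four years) and the Gregorian average year (365.2425 days, with three century leap days skipped every 400 years) against the solar-year figure quoted above.

```python
# How fast the Julian and Gregorian calendars drift against the solar year.
SOLAR_YEAR = 365.242199       # days, as quoted in the text
JULIAN_YEAR = 365.25          # average Julian year: a leap day every 4 years
GREGORIAN_YEAR = 365.2425     # average Gregorian year: 97 leap days every 400 years

for name, length in [("Julian", JULIAN_YEAR), ("Gregorian", GREGORIAN_YEAR)]:
    drift = (length - SOLAR_YEAR) * 1000   # days gained per 1,000 years
    print(f"{name} calendar drifts about {drift:.1f} days per 1,000 years")
# Julian: about 7.8 days per 1,000 years; Gregorian: about 0.3 days per 1,000 years.
```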
If this sounds like splitting hairs, it is. Yet it also highlights some of the difficulties faced by astronomers, kings, priests, and other calendar makers, who tracked the seasons to know when to plant crops, collect taxes, or follow religious rituals.
The first efforts to keep a record of time probably occurred tens of thousands of years ago, when ancient humans in Europe and Africa peered up at the Moon and realized that its phases recurred in a steady, predictable fashion. A few people scratched what they saw onto rocks and bones, creating what may have been the world's first calendars. Heady stuff for skin-clad hominids, these calendars enabled them to predict when the silvery light would be available to hunt or to raid rival clans and to know how many full Moons would pass before the chill of winter gave way to spring.
The keepers of the world's atomic clocks added a leap second to UTC. Millennium watchers everywhere began wondering whether they should add a second to the countless clocks on buildings, in shops, and in homes that were counting down to the third millennium to the very second. Most, though not all, made the change, adding another second of uncertainty to the question of when the new millennium begins.
Always the calendar invented by Caesar and Dennis the Little moves forward, rushing toward the next millennium 1,000 years from now: the progression of days, weeks, months, and years that appears to be here to stay, despite its flaws. Other calendars have been proposed to eliminate small errors in the Gregorian calendar. Some reformers, for example, support making the unequal months uniform by updating the ancient Egyptian scheme of 12 months of 30 days each, with 5 days remaining as holidays.
During the French Revolution, the government of France adopted the Egyptian calendar and decreed 1792 the year 1, a system that lasted until Napoleon restored the Gregorian calendar in 1806. More recently the United Nations (UN) and the Congress of the United States have reconsidered this historic alternative, calling it the World Calendar. To date, however, people seem content to use an ancient calendar designed by a Roman conqueror and an obscure abbot rather than fixing it or making it more accurate. Perhaps most of us prefer the illusion of a fixed time-line over admitting that time has meaning only because we say it does.
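As a back-of-the-envelope illustration of the reform idea mentioned above, the sketch below maps an ordinal day of the year onto twelve uniform 30-day months with five leftover holidays. The naming of months and the treatment of a leap day are assumptions made purely for illustration, not features of any adopted proposal.

    def uniform_date(day_of_year):
        """Map day 1..365 onto 12 months of 30 days plus 5 year-end holidays."""
        if not 1 <= day_of_year <= 365:
            raise ValueError("expected a day number between 1 and 365")
        if day_of_year > 360:                    # the five days "remaining as holidays"
            return "Holiday %d of 5" % (day_of_year - 360)
        month, day = divmod(day_of_year - 1, 30)
        return "Month %d, day %d" % (month + 1, day + 1)

    print(uniform_date(59))      # Month 2, day 29
    print(uniform_date(365))     # Holiday 5 of 5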
EVOLVING PRINCIPLES OF THOUGHT
BOOK THREE
METAPHYSICAL THINKING
In any case, we should take nothing for granted: no thoughtful conclusion reached in the study of the phenomenon of consciousness should be lightly dismissed as fallacious. This is all the more true when, exercising the caution that honest inquiry demands, we try to move toward a positive conclusion on the topic.
Many writers, along with a few well-known new-age gurus, have played fast and loose with interpretations that ground the mental in some vague sense of cosmic consciousness. However, if work of this kind is shelved in the new-age section of a commercial bookstore and purchased by readers interested in new-age literature, those readers will be quite disappointed.
What makes our species unique is the ability to construct a virtual world in which the real world can be modelled and manipulated in abstract forms and ideas. Evolution has produced hundreds of thousands of species with brains, among them tens of thousands of species with complex behavioural and learning abilities. There are also many species in which sophisticated forms of group communication have evolved. For example, birds, primates, and social carnivores use extensive vocal and gestural repertoires to structure behaviour in large social groups. Although we share roughly 98 percent of our genes with our primate cousins, the course of human evolution widened the cognitive gap between us and all other species, including our cousins, into a yawning chasm.
Research in neuroscience has shown that language processing is a staggeringly complex phenomenon that places incredible demands on memory and learning. Language functions extend, for example, into all major lobes of the neocortex: auditory comprehension is associated with the temporal area; tactile information is associated with the parietal area; and attention, working memory, and planning are associated with the frontal cortex of the left (or dominant) hemisphere. The left prefrontal region is associated with verb and noun production tasks and with the retrieval of words representing action. Broca's area, next to the mouth-tongue region of the motor cortex, is associated with vocalization in word formation, and Wernicke's area, by the auditory cortex, is associated with sound analysis in the sequencing of words.
Lower brain regions, like the cerebellum, have also evolved in our species to help in language processing. Until recently, the cerebellum was thought to be involved exclusively with automatic or preprogrammed movements, such as throwing a ball, jumping over a high hurdle, or playing a musical instrument. Imaging studies in neuroscience suggest, however, that the cerebellum is also active during speech, particularly when a speaker is making difficult word associations. The cerebellum appears to contribute by providing access to automatic word sequences and by supporting rapid shifts in attention.
The midbrain and brain stem, situated on top of the spinal cord, coordinate the many input and output systems that play a crucial role in the distributed, dynamic functions of communication. Vocalization has a special association with the midbrain, which coordinates the interaction of the oral and respiratory tracts necessary to make speech sounds. Since vocalization requires synchronous activity among oral, vocal, and respiratory muscles, these functions probably connect to a central site: the central gray area of the midbrain. This area links the reticular nuclei and brain-stem motor nuclei into a distributed network for sound production. While human speech depends on structures in the cerebral cortex and on rapid movements of the oral and vocal muscles, this is not true of vocalization in other mammals.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules, nor did it evolve by the addition of separate modules that were eventually wired together on some neural circuit board.
Similarly, individual linguistic symbols are committed to clusters of distributed brain areas, not to any single area. The specific sound patterns of words may be produced in dedicated regions; all the same, the symbolic and referential relationships between words are generated through a convergence of neural codes from different and independent brain regions. Word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions that require input from other regions. The symbolic meaning of words, like the grammar that is essential for constructing meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for its evolution cannot be explained in these terms alone. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. It can also explain why selective pressure in this new niche favoured pre-adaptive changes required for symbolic communication. Nevertheless, as this communication gave rise to increasingly complex social behaviour, behavioural evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.