Traction (mechanics)
Traction, traction force or tractive force is a force used to generate motion between a body and a tangential surface, through the use of either dry friction or shear force.
It has important applications in vehicles, as in tractive effort.
Traction can also refer to the maximum tractive force between a body and a surface, as limited by available friction; when this is the case, traction is often expressed as the ratio of the maximum tractive force to the normal force and is termed the coefficient of traction (similar to coefficient of friction). It is the force that makes an object move over a surface by overcoming resisting forces such as friction, normal loads (the load acting on the tires along the negative 'Z' axis), air resistance, and rolling resistance.
Definitions
Traction can be defined as:
In vehicle dynamics, tractive force is closely related to the terms tractive effort and drawbar pull, though all three terms have different definitions.
Coefficient of traction
The coefficient of traction is defined as the usable force for traction divided by the weight on the running gear (wheels, tracks etc.) i.e.:
usable traction = coefficient of traction × normal force.
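This relation can be made concrete with a short sketch (added here for illustration; the function name and the numeric values are assumptions, not figures from the article):

```python
def usable_traction(coefficient_of_traction: float, normal_force_n: float) -> float:
    """Maximum tractive force (N) available before slip:
    usable traction = coefficient of traction x normal force."""
    return coefficient_of_traction * normal_force_n

# Illustrative values only: a passenger-car tire on dry asphalt might have a
# coefficient of traction around 0.8, and 4000 N is roughly the load on one wheel.
print(usable_traction(0.8, 4000.0))  # -> 3200.0 N of usable traction
```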
Factors affecting coefficient of traction
Traction between two surfaces depends on several factors:
Material composition of each surface.
Macroscopic and microscopic shape (texture; macrotexture and microtexture)
Normal force pressing contact surfaces together.
Contaminants at the material boundary including lubricants and adhesives.
Relative motion of tractive surfaces - a sliding object (one in kinetic friction) has less traction than a non-sliding object (one in static friction).
Direction of traction relative to some coordinate system - e.g., the available traction of a tire often differs between cornering, accelerating, and braking.
For low-friction surfaces, such as off-road or ice, traction can be increased by using traction devices that partially penetrate the surface; these devices use the shear strength of the underlying surface rather than relying solely on dry friction (e.g., aggressive off-road tread or snow chains).
Traction coefficient in engineering design
In the design of wheeled or tracked vehicles, high traction between wheel and ground is more desirable than low traction, as it allows for higher acceleration (including cornering and braking) without wheel slippage. One notable exception is in the motorsport technique of drifting, in which rear-wheel traction is purposely lost during high speed cornering.
Other designs dramatically increase surface area to provide more traction than wheels can, for example in continuous track and half-track vehicles. A tank or similar tracked vehicle uses tracks to reduce the pressure on the areas of contact. A 70-ton M1A2 would sink to the point of high centering if it used round tires. The tracks spread the 70 tons over a much larger area of contact than tires would and allow the tank to travel over much softer land.
In some applications, there is a complicated set of trade-offs in choosing materials. For example, soft rubbers often provide better traction but also wear faster and have higher losses when flexed—thus reducing efficiency. Choices in material selection may have a dramatic effect. For example: tires used for track racing cars may have a life of 200 km, while those used on heavy trucks may have a life approaching 100,000 km. The truck tires have less traction and also thicker rubber.
Traction also varies with contaminants. A layer of water in the contact patch can cause a substantial loss of traction. This is one reason for grooves and siping of automotive tires.
The traction of trucks, agricultural tractors, wheeled military vehicles, etc. when driving on soft and/or slippery ground has been found to improve significantly by use of Tire Pressure Control Systems (TPCS). A TPCS makes it possible to reduce and later restore the tire pressure during continuous vehicle operation. Increasing traction by use of a TPCS also reduces tire wear and ride vibration.
See also
Anti-lock braking system
Equilibrium tide
Friction
Force (physics)
Karl A. Grosch
Rail adhesion
Road slipperiness
Sandbox (locomotive)
Tribology
Weight transfer
References
Force
Vehicle technology
Mechanics
Compton scattering
Compton scattering (or the Compton effect) is the scattering of a high-frequency photon following an interaction with a charged particle, usually an electron. Specifically, when the photon hits loosely bound electrons in the outer valence shells of atoms or molecules, it can release them.
The effect was discovered in 1923 by Arthur Holly Compton while researching the scattering of X-rays by light elements, and earned him the Nobel Prize for Physics in 1927. The Compton effect deviated significantly from the then-dominant classical theories, and its explanation required both special relativity and quantum mechanics to describe the interaction between high-frequency photons and charged particles.
Photons can interact with matter at the atomic level (e.g. photoelectric effect and Rayleigh scattering), at the nucleus, or with just an electron. Pair production and the Compton effect occur at the level of the electron. When a high frequency photon scatters due to an interaction with a charged particle, there is a decrease in the energy of the photon and thus, an increase in its wavelength. This tradeoff between wavelength and energy in response to the collision is the Compton effect. Because of conservation of energy, the lost energy from the photon is transferred to the recoiling particle (such an electron would be called a "Compton Recoil electron").
This implies that if the recoiling particle initially carried more energy than the photon, the reverse would occur. This is known as inverse Compton scattering, in which the scattered photon increases in energy.
Introduction
In Compton's original experiment (see Fig. 1), the energy of the X-ray photon (≈ 17 keV) was significantly larger than the binding energy of the atomic electron, so the electrons could be treated as being free after scattering. The amount by which the light's wavelength changes is called the Compton shift. Although nuclear Compton scattering exists, Compton scattering usually refers to the interaction involving only the electrons of an atom. The Compton effect was observed by Arthur Holly Compton in 1923 at Washington University in St. Louis and further verified by his graduate student Y. H. Woo in the years following. Compton was awarded the 1927 Nobel Prize in Physics for the discovery.
The effect is significant because it demonstrates that light cannot be explained purely as a wave phenomenon. Thomson scattering, the classical theory of an electromagnetic wave scattered by charged particles, cannot explain shifts in wavelength at low intensity: classically, light of sufficient intensity for the electric field to accelerate a charged particle to a relativistic speed will cause radiation-pressure recoil and an associated Doppler shift of the scattered light, but the effect would become arbitrarily small at sufficiently low light intensities regardless of wavelength. Thus, if we are to explain low-intensity Compton scattering, light must behave as if it consists of particles. Alternatively, the assumption that the electron can be treated as free must fail, in which case its effective mass is essentially that of the entire atom, effectively infinite by comparison (see, e.g., the comment below on the elastic scattering of X-rays arising from that effect). Compton's experiment convinced physicists that light can be treated as a stream of particle-like objects (quanta called photons), whose energy is proportional to the light wave's frequency.
As shown in Fig. 2, the interaction between an electron and a photon results in the electron being given part of the energy (making it recoil), and a photon of the remaining energy being emitted in a different direction from the original, so that the overall momentum of the system is also conserved. If the scattered photon still has enough energy, the process may be repeated. In this scenario, the electron is treated as free or loosely bound. Experimental verification of momentum conservation in individual Compton scattering processes by Bothe and Geiger as well as by Compton and Simon has been important in disproving the BKS theory.
Compton scattering is commonly described as inelastic scattering. This is because, unlike the more common Thomson scattering that happens at the low-energy limit, the energy in the scattered photon in Compton scattering is less than the energy of the incident photon. As the electron is typically weakly bound to the atom, the scattering can be viewed either from the perspective of an electron in a potential well, or as an atom with a small ionization energy. In the former perspective, the energy of the incident photon is transferred to the recoil particle, but only as kinetic energy. The electron gains no internal energy, respective masses remain the same, the mark of an elastic collision. From this perspective, Compton scattering could be considered elastic because the internal state of the electron does not change during the scattering process. In the latter perspective, the atom's state is changed, constituting an inelastic collision. Whether Compton scattering is considered elastic or inelastic depends on which perspective is being used, as well as the context.
Compton scattering is one of four competing processes when photons interact with matter. At energies of a few eV to a few keV, corresponding to visible light through soft X-rays, a photon can be completely absorbed and its energy can eject an electron from its host atom, a process known as the photoelectric effect. High-energy photons of 1.022 MeV and above may bombard the nucleus and cause an electron and a positron to be formed, a process called pair production; even-higher-energy photons (beyond a threshold energy that depends on the nuclei involved) can eject a nucleon or alpha particle from the nucleus in a process called photodisintegration. Compton scattering is the most important interaction in the intervening energy region, at photon energies greater than those typical of the photoelectric effect but less than the pair-production threshold.
Description of the phenomenon
By the early 20th century, research into the interaction of X-rays with matter was well under way. It was observed that when X-rays of a known wavelength interact with atoms, the X-rays are scattered through an angle θ and emerge at a different wavelength related to θ. Although classical electromagnetism predicted that the wavelength of scattered rays should be equal to the initial wavelength, multiple experiments had found that the wavelength of the scattered rays was longer (corresponding to lower energy) than the initial wavelength.
In 1923, Compton published a paper in the Physical Review that explained the X-ray shift by attributing particle-like momentum to light quanta (Albert Einstein had proposed light quanta in 1905 in explaining the photo-electric effect, but Compton did not build on Einstein's work). The energy of light quanta depends only on the frequency of the light. In his paper, Compton derived the mathematical relationship between the shift in wavelength and the scattering angle of the X-rays by assuming that each scattered X-ray photon interacted with only one electron. His paper concludes by reporting on experiments which verified his derived relation:

λ′ − λ = (h / (m_e c)) (1 − cos θ),

where
λ is the initial wavelength,
λ′ is the wavelength after scattering,
h is the Planck constant,
m_e is the electron rest mass,
c is the speed of light, and
θ is the scattering angle.
The quantity h / (m_e c) is known as the Compton wavelength of the electron; it is equal to 2.43 × 10⁻¹² m. The wavelength shift λ′ − λ is at least zero (for θ = 0°) and at most twice the Compton wavelength of the electron (for θ = 180°).
Compton found that some X-rays experienced no wavelength shift despite being scattered through large angles; in each of these cases the photon failed to eject an electron. Thus the magnitude of the shift is related not to the Compton wavelength of the electron, but to the Compton wavelength of the entire atom, which can be upwards of 10000 times smaller. This is known as "coherent" scattering off the entire atom since the atom remains intact, gaining no internal excitation.
In Compton's original experiments the wavelength shift given above was the directly measurable observable. In modern experiments it is conventional to measure the energies, not the wavelengths, of the scattered photons. For a given incident energy E_γ, the outgoing final-state photon energy E_γ′ is given by

E_γ′ = E_γ / (1 + (E_γ / (m_e c²)) (1 − cos θ)).
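As an illustrative sketch (not part of the original article), the following Python snippet evaluates both the Compton shift and the scattered photon energy from the formulas above; the 300 keV incident energy and 60° angle are arbitrary example inputs, and the constants are standard values.

```python
import math

H = 6.62607015e-34        # Planck constant, J*s
M_E = 9.1093837015e-31    # electron rest mass, kg
C = 2.99792458e8          # speed of light, m/s
J_PER_KEV = 1.602176634e-16

def compton_shift(theta_rad: float) -> float:
    """Wavelength shift lambda' - lambda in metres for scattering angle theta."""
    return (H / (M_E * C)) * (1.0 - math.cos(theta_rad))

def scattered_energy_kev(e_gamma_kev: float, theta_rad: float) -> float:
    """Final photon energy E_gamma' (keV) for incident energy E_gamma (keV) and angle theta."""
    e_rest_kev = M_E * C**2 / J_PER_KEV   # electron rest energy, about 511 keV
    return e_gamma_kev / (1.0 + (e_gamma_kev / e_rest_kev) * (1.0 - math.cos(theta_rad)))

theta = math.radians(60.0)                 # example angle
print(compton_shift(theta))                # about 1.21e-12 m (half the Compton wavelength)
print(scattered_energy_kev(300.0, theta))  # about 232 keV for a 300 keV incident photon
```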
Derivation of the scattering formula
A photon γ with wavelength λ collides with an electron e in an atom, which is treated as being at rest. The collision causes the electron to recoil, and a new photon γ′ with wavelength λ′ emerges at angle θ from the photon's incoming path. Let e′ denote the electron after the collision. Compton allowed for the possibility that the interaction would sometimes accelerate the electron to speeds sufficiently close to the velocity of light as to require the application of Einstein's special relativity theory to properly describe its energy and momentum.
At the conclusion of Compton's 1923 paper, he reported results of experiments confirming the predictions of his scattering formula, thus supporting the assumption that photons carry momentum as well as quantized energy. At the start of his derivation, he had postulated an expression for the momentum of a photon from equating Einstein's already established mass–energy relationship E = mc² to the quantized photon energy E = hf, which Einstein had separately postulated. If mc² = hf, the equivalent photon mass must be hf / c². The photon's momentum is then simply this effective mass times the photon's frame-invariant velocity c. For a photon, its momentum is p = hf / c, and thus hf can be substituted for pc for all photon momentum terms which arise in the course of the derivation below. The derivation which appears in Compton's paper is more terse, but follows the same logic in the same sequence as the following derivation.
The conservation of energy merely equates the sum of energies before and after scattering:

E_γ + E_e = E_γ′ + E_e′.

Compton postulated that photons carry momentum; thus from the conservation of momentum, the momenta of the particles should be similarly related by

p_γ = p_γ′ + p_e′,

in which p_e (the initial momentum of the electron) is omitted on the assumption it is effectively zero.
The photon energies are related to the frequencies by

E_γ = hf and E_γ′ = hf′,
where h is the Planck constant.
Before the scattering event, the electron is treated as sufficiently close to being at rest that its total energy consists entirely of the mass-energy equivalence of its (rest) mass m_e:

E_e = m_e c².

After scattering, the possibility that the electron might be accelerated to a significant fraction of the speed of light requires that its total energy be represented using the relativistic energy–momentum relation

E_e′ = √((p_e′ c)² + (m_e c²)²).

Substituting these quantities into the expression for the conservation of energy gives

hf + m_e c² = hf′ + √((p_e′ c)² + (m_e c²)²).

This expression can be used to find the magnitude of the momentum of the scattered electron,

(p_e′ c)² = (hf − hf′ + m_e c²)² − (m_e c²)².     (1)

Note that this magnitude of the momentum gained by the electron (formerly zero) exceeds the energy/c lost by the photon,

(1/c) √((hf − hf′ + m_e c²)² − (m_e c²)²) > (hf − hf′)/c.
Equation (1) relates the various energies associated with the collision. The electron's momentum change involves a relativistic change in the energy of the electron, so it is not simply related to the change in energy occurring in classical physics. The change of the magnitude of the momentum of the photon is not just related to the change of its energy; it also involves a change in direction.
Solving the conservation of momentum expression for the scattered electron's momentum gives

p_e′ = p_γ − p_γ′.

Making use of the scalar product yields the square of its magnitude,

p_e′² = (p_γ − p_γ′) · (p_γ − p_γ′) = p_γ² + p_γ′² − 2 p_γ p_γ′ cos θ.

In anticipation of p_γ c being replaced with hf, multiply both sides by c²:

p_e′² c² = p_γ² c² + p_γ′² c² − 2 c² p_γ p_γ′ cos θ.

After replacing the photon momentum terms with hf and hf′, we get a second expression for the magnitude of the momentum of the scattered electron,

(p_e′ c)² = (hf)² + (hf′)² − 2 (hf)(hf′) cos θ.

Equating the alternate expressions for this momentum gives

(hf − hf′ + m_e c²)² − (m_e c²)² = (hf)² + (hf′)² − 2 (hf)(hf′) cos θ,

which, after evaluating the square and canceling and rearranging terms, further yields

2 hf m_e c² − 2 hf′ m_e c² = 2 (hf)(hf′) (1 − cos θ).

Dividing both sides by 2hff′m_e c yields

c/f′ − c/f = (h / (m_e c)) (1 − cos θ).

Finally, since fλ = f′λ′ = c,

λ′ − λ = (h / (m_e c)) (1 − cos θ).

It can further be seen that the angle φ of the outgoing electron with the direction of the incoming photon is specified by

cot φ = (1 + hf / (m_e c²)) tan(θ/2).
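As a cross-check on the algebra above, the following sketch (an illustrative addition, assuming the sympy library is available) eliminates the scattered frequency f′ from the two expressions for the electron's momentum and recovers the shift formula symbolically.

```python
import sympy as sp

h, c, m_e, f, f_prime, theta = sp.symbols("h c m_e f f_prime theta", positive=True)

# (p_e' c)^2 from conservation of energy (Equation (1) above)
p2_from_energy = (h*f - h*f_prime + m_e*c**2)**2 - (m_e*c**2)**2

# (p_e' c)^2 from conservation of momentum with photon momenta hf/c and hf'/c
p2_from_momentum = (h*f)**2 + (h*f_prime)**2 - 2*(h*f)*(h*f_prime)*sp.cos(theta)

# Equate the two, solve for the scattered frequency f', and form the wavelength shift
f_prime_sol = sp.solve(sp.Eq(p2_from_energy, p2_from_momentum), f_prime)[0]
shift = sp.simplify(c/f_prime_sol - c/f)   # lambda' - lambda, using lambda = c/f

print(shift)  # expected to simplify to h*(1 - cos(theta))/(c*m_e)
```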
Applications
Compton scattering
Compton scattering is of prime importance to radiobiology, as it is the most probable interaction of gamma rays and high energy X-rays with atoms in living beings and is applied in radiation therapy.
Compton scattering is an important effect in gamma spectroscopy which gives rise to the Compton edge, as it is possible for the gamma rays to scatter out of the detectors used. Compton suppression is used to detect stray scattered gamma rays to counteract this effect.
Magnetic Compton scattering
Magnetic Compton scattering is an extension of the previously mentioned technique which involves the magnetisation of a crystal sample hit with high energy, circularly polarised photons. By measuring the scattered photons' energy and reversing the magnetisation of the sample, two different Compton profiles are generated (one for spin-up momenta and one for spin-down momenta). Taking the difference between these two profiles gives the magnetic Compton profile (MCP), J_mag(p_z), a one-dimensional projection of the electron spin density:

J_mag(p_z) = (1/μ) ∬ (n↑(p) − n↓(p)) dp_x dp_y,

where μ is the number of spin-unpaired electrons in the system, and n↑(p) and n↓(p) are the three-dimensional electron momentum distributions for the majority spin and minority spin electrons respectively.
Since this scattering process is incoherent (there is no phase relationship between the scattered photons), the MCP is representative of the bulk properties of the sample and is a probe of the ground state. This means that the MCP is ideal for comparison with theoretical techniques such as density functional theory.
The area under the MCP is directly proportional to the spin moment of the system and so, when combined with total moment measurements methods (such as SQUID magnetometry), can be used to isolate both the spin and orbital contributions to the total moment of a system.
The shape of the MCP also yields insight into the origin of the magnetism in the system.
Inverse Compton scattering
Inverse Compton scattering is important in astrophysics. In X-ray astronomy, the accretion disk surrounding a black hole is presumed to produce a thermal spectrum. The lower energy photons produced from this spectrum are scattered to higher energies by relativistic electrons in the surrounding corona. This is surmised to cause the power law component in the X-ray spectra (0.2–10 keV) of accreting black holes.
The effect is also observed when photons from the cosmic microwave background (CMB) move through the hot gas surrounding a galaxy cluster. The CMB photons are scattered to higher energies by the electrons in this gas, resulting in the Sunyaev–Zel'dovich effect. Observations of the Sunyaev–Zel'dovich effect provide a nearly redshift-independent means of detecting galaxy clusters.
Some synchrotron radiation facilities scatter laser light off the stored electron beam.
This Compton backscattering produces high energy photons in the MeV to GeV range subsequently used for nuclear physics experiments.
Non-linear inverse Compton scattering
Non-linear inverse Compton scattering (NICS) is the scattering of multiple low-energy photons, supplied by an intense electromagnetic field, into a single high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, such as an electron. It is also called non-linear Compton scattering and multiphoton Compton scattering. It is the non-linear version of inverse Compton scattering in which the conditions for multiphoton absorption by the charged particle are reached due to a very intense electromagnetic field, for example the one produced by a laser.
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to the charged particle rest energy and higher. As a consequence NICS photons can be used to trigger other phenomena such as pair production, Compton scattering, nuclear reactions, and can be used to probe non-linear quantum effects and non-linear QED.
See also
References
Further reading
(the original 1923 paper on the APS website)
Stuewer, Roger H. (1975), The Compton Effect: Turning Point in Physics (New York: Science History Publications)
External links
Compton Scattering – Georgia State University
Compton Scattering Data – Georgia State University
Derivation of Compton shift equation
Astrophysics
Observational astronomy
Atomic physics
Foundational quantum physics
Quantum electrodynamics
X-ray scattering
John William Strutt, 3rd Baron Rayleigh
John William Strutt, 3rd Baron Rayleigh (12 November 1842 – 30 June 1919) was an English mathematician and physicist who made extensive contributions to science. He spent all of his academic career at the University of Cambridge. Among many honours, he received the 1904 Nobel Prize in Physics "for his investigations of the densities of the most important gases and for his discovery of argon in connection with these studies." He served as president of the Royal Society from 1905 to 1908 and as chancellor of the University of Cambridge from 1908 to 1919.
Rayleigh provided the first theoretical treatment of the elastic scattering of light by particles much smaller than the light's wavelength, a phenomenon now known as "Rayleigh scattering", which notably explains why the sky is blue. He studied and described transverse surface waves in solids, now known as "Rayleigh waves". He contributed extensively to fluid dynamics, with concepts such as the Rayleigh number (a dimensionless number associated with natural convection), Rayleigh flow, the Rayleigh–Taylor instability, and Rayleigh's criterion for the stability of Taylor–Couette flow. He also formulated the circulation theory of aerodynamic lift. In optics, Rayleigh proposed a well-known criterion for angular resolution. His derivation of the Rayleigh–Jeans law for classical black-body radiation later played an important role in the birth of quantum mechanics (see ultraviolet catastrophe). Rayleigh's textbook The Theory of Sound (1877) is still used today by acousticians and engineers. He introduced the Rayleigh test for circular non-uniformity, which the Rayleigh plot visualizes.
Early life and education
Strutt was born on 12 November 1842 at Langford Grove, Maypole Road in Maldon, Essex. In his early years he suffered from frailty and poor health. He attended Eton College and Harrow School (each for only a short period), before going on to the University of Cambridge in 1861 where he studied mathematics at Trinity College, Cambridge. He obtained a Bachelor of Arts degree (Senior Wrangler and 1st Smith's Prize) in 1865, and a Master of Arts in 1868. He was subsequently elected to a fellowship of Trinity. He held the post until his marriage to Evelyn Balfour, daughter of James Maitland Balfour, in 1871. He had three sons with her. In 1873, on the death of his father, John Strutt, 2nd Baron Rayleigh, he inherited the Barony of Rayleigh. Rayleigh was elected fellow of the Royal Society on 12 June 1873.
Career
Strutt was the second Cavendish Professor of Physics at the University of Cambridge (following James Clerk Maxwell), from 1879 to 1884. He first described dynamic soaring by seabirds in 1883, in the British journal Nature. From 1887 to 1905 he was professor of Natural Philosophy at the Royal Institution.
Around 1900 Rayleigh developed the duplex (combination of two) theory of human sound localisation using two binaural cues, interaural phase difference (IPD) and interaural level difference (ILD) (based on analysis of a spherical head with no external pinnae). The theory posits that we use two primary cues for sound lateralisation, using the difference in the phases of sinusoidal components of the sound and the difference in amplitude (level) between the two ears.
He received the degree of Doctor mathematicae (honoris causa) from the Royal Frederick University on 6 September 1902, when they celebrated the centennial of the birth of mathematician Niels Henrik Abel.
In 1904 he was awarded the Nobel Prize for Physics "for his investigations of the densities of the most important gases and for his discovery of argon in connection with these studies".
During the First World War, he was president of the government's Advisory Committee for Aeronautics, which was located at the National Physical Laboratory, and chaired by Richard Glazebrook.
In 1919, Rayleigh served as president of the Society for Psychical Research. An advocate of simplicity and theory as part of the scientific method, Rayleigh argued for the principle of similitude.
Rayleigh served as president of the Royal Society from 1905 to 1908. From time to time he participated in the House of Lords; however, he spoke up only if politics attempted to become involved in science.
Personal life and death
Rayleigh married Evelyn Georgiana Mary (née Balfour). He died on 30 June 1919, at his home in Witham, Essex. He was succeeded, as the 4th Lord Rayleigh, by his son Robert John Strutt, another well-known physicist. Lord Rayleigh was buried in the graveyard of All Saints' Church in Terling in Essex.
Religious views
Rayleigh was an Anglican. Though he did not write about the relationship of science and religion, he retained a personal interest in spiritual matters. When his scientific papers were to be published in a collection by the Cambridge University Press, Strutt wanted to include a quotation from the Bible, but he was discouraged from doing so, as he later reported:
Still, he had his wish and the quotation was printed in the five-volume collection of scientific papers. In a letter to a family member, he wrote about his rejection of materialism and spoke of Jesus Christ as a moral teacher:
He held an interest in parapsychology and was an early member of the Society for Psychical Research (SPR). He was not convinced of spiritualism but remained open to the possibility of supernatural phenomena. Rayleigh was the president of the SPR in 1919. He gave a presidential address in the year of his death but did not come to any definite conclusions.
Honours and awards
The lunar crater Rayleigh as well as the Martian crater Rayleigh were named in his honour. The asteroid 22740 Rayleigh was named after him on 1 June 2007. A type of surface wave is known as the Rayleigh wave, and the elastic scattering of electromagnetic waves is called Rayleigh scattering. The rayl, a unit of specific acoustic impedance, is also named for him. Rayleigh also received the following awards, in chronological order:
Smith's Prize (1864)
Royal Medal (1882)
Member of the American Philosophical Society (1886)
Matteucci Medal (1894)
Member of the Royal Swedish Academy of Sciences (1897)
Copley Medal (1899)
Nobel Prize in Physics (1904)
Elliott Cresson Medal (1913)
Rumford Medal (1914)
Lord Rayleigh was among the original recipients of the Order of Merit (OM) in the 1902 Coronation Honours list published on 26 June 1902, and received the order from King Edward VII at Buckingham Palace on 8 August 1902.
Sir William Ramsay, his co-worker in the investigation that discovered argon, described Rayleigh as "the greatest man alive" while speaking to Lady Ramsay during his last illness.
H. M. Hyndman said of Rayleigh that "no man ever showed less consciousness of great genius".
In honour of Lord Rayleigh, the Institute of Acoustics sponsors the Rayleigh Medal (established in 1970) and the Institute of Physics sponsors the John William Strutt, Lord Rayleigh Medal and Prize (established in 2008).
Many of the papers that he wrote on lubrication are now recognized as early classical contributions to the field of tribology. For these contributions, he was named as one of the 23 "Men of Tribology" by Duncan Dowson.
There is a memorial to him by Derwent Wood in St Andrew's Chapel at Westminster Abbey.
Bibliography
The Theory of Sound vol. I (London : Macmillan, 1877, 1894) (alternative link: Bibliothèque Nationale de France OR (Cambridge: University Press, reissued 2011, )
The Theory of Sound vol.II (London : Macmillan, 1878, 1896) (alternative link: Bibliothèque Nationale de France) OR (Cambridge: University Press, reissued 2011, )
Scientific papers (Vol. 1: 1869–1881) (Cambridge : University Press, 1899–1920, reissued by the publisher 2011, )
Scientific papers (Vol. 2: 1881–1887) (Cambridge : University Press, 1899–1920, reissued by the publisher 2011, )
Scientific papers (Vol. 3: 1887–1892) (Cambridge : University Press, 1899–1920, reissued by the publisher 2011, )
Scientific papers (Vol. 4: 1892–1901) (Cambridge : University Press, 1899–1920, reissued by the publisher 2011, )
Scientific papers (Vol. 5: 1902–1910) (Cambridge : University Press, 1899–1920, reissued by the publisher 2011, )
Scientific papers (Vol. 6: 1911–1919) (Cambridge : University Press, 1899–1920, reissued by the publisher 2011, )
See also
References
Further reading
Life of John William Strutt: Third Baron Rayleigh, O.M., F.R.S., (1924) Longmans, Green & Co.
A biography written by his son, Robert Strutt, 4th Baron Rayleigh
External links
About John William Strutt
Lord Rayleigh – the Last of the Great Victorian Polymaths, GEC Review, Volume 7, No. 3, 1992
1842 births
1919 deaths
20th-century British physicists
Acousticians
Alumni of Trinity College, Cambridge
Barons in the Peerage of the United Kingdom
British Nobel laureates
Chancellors of the University of Cambridge
De Morgan Medallists
Discoverers of chemical elements
English Anglicans
Experimental physicists
Optical physicists
Fluid dynamicists
Lord-lieutenants of Essex
Members of the Order of Merit
Nobel laureates in Physics
Fellows of the Royal Society
Fellows of the American Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Bavarian Academy of Sciences
Members of the Prussian Academy of Sciences
Members of the Hungarian Academy of Sciences
Members of the French Academy of Sciences
British parapsychologists
People educated at Eton College
People educated at Harrow School
People from Maldon, Essex
Presidents of the Physical Society
Presidents of the Royal Society
Recipients of the Copley Medal
Recipients of the Pour le Mérite (civil class)
Royal Medal winners
Senior Wranglers
Members of the Privy Council of the United Kingdom
Burials in Essex
Linear algebraists
Tribologists
Recipients of the Matteucci Medal
Members of the American Philosophical Society
Cavendish Professors of Physics
Members of the Royal Society of Sciences in Uppsala
Scientists of the National Physical Laboratory (United Kingdom)
Corpuscularianism
Corpuscularianism, also known as corpuscularism, is a set of theories that explain natural transformations as a result of the interaction of particles (minima naturalia, partes exiles, partes parvae, particulae, and semina). It differs from atomism in that corpuscles are usually endowed with a property of their own and are further divisible, while atoms are neither. Although often associated with the emergence of early modern mechanical philosophy, and especially with the names of Thomas Hobbes, René Descartes, Pierre Gassendi, Robert Boyle, Isaac Newton, and John Locke, corpuscularian theories can be found throughout the history of Western philosophy.
Overview
Corpuscles vs. atoms
Corpuscularianism is similar to the theory of atomism, except that where atoms were supposed to be indivisible, corpuscles could in principle be divided. In this manner, for example, it was theorized that mercury could penetrate into metals and modify their inner structure, a step on the way towards the production of gold by transmutation.
Perceived vs. real properties
Corpuscularianism was associated by its leading proponents with the idea that some of the apparent properties of objects are artifacts of the perceiving mind, that is, "secondary" qualities as distinguished from "primary" qualities. Corpuscles were thought to be unobservable and to have a very limited number of basic properties, such as size, shape, and motion.
Thomas Hobbes
The philosopher Thomas Hobbes used corpuscularianism to justify his political theories in Leviathan. It was used by Newton in his development of the corpuscular theory of light, while Boyle used it to develop his mechanical corpuscular philosophy, which laid the foundations for the Chemical Revolution.
Robert Boyle
Corpuscularianism remained a dominant theory for centuries and was blended with alchemy by early scientists such as Robert Boyle and Isaac Newton in the 17th century. In his work The Sceptical Chymist (1661), Boyle abandoned the Aristotelian ideas of the classical elements—earth, water, air, and fire—in favor of corpuscularianism. In his later work, The Origin of Forms and Qualities (1666), Boyle used corpuscularianism to explain all of the major Aristotelian concepts, marking a departure from traditional Aristotelianism.
Light corpuscles
Alchemical corpuscularianism
William R. Newman traces the origins of alchemical corpuscularianism to the fourth book of Aristotle's Meteorology. The "dry" and "moist" exhalations of Aristotle became the alchemical 'sulfur' and 'mercury' of the eighth-century Islamic alchemist, Jābir ibn Hayyān (died c. 806–816). Pseudo-Geber's Summa perfectionis contains an alchemical theory in which unified sulfur and mercury corpuscles, differing in purity, size, and relative proportions, form the basis of a much more complicated process.
Importance to the development of modern scientific theory
Several of the principles which corpuscularianism proposed became tenets of modern chemistry.
The idea that compounds can have secondary properties that differ from the properties of the elements which are combined to make them became the basis of molecular chemistry.
The idea that the same elements can be predictably combined in different ratios using different methods to create compounds with radically different properties became the basis of stoichiometry, crystallography, and established studies of chemical synthesis.
The ability of chemical processes to alter the composition of an object without significantly altering its form is the basis of fossil theory via mineralization and the understanding of numerous metallurgical, biological, and geological processes.
See also
Atomic theory
Atomism
Classical element
History of chemistry
References
Bibliography
Further reading
Atomism
History of chemistry
13th century in science
Metaphysical theories
Particles
OpenAI o1
OpenAI o1 is a generative pre-trained transformer released by OpenAI in September 2024. o1 spends time "thinking" before it answers, making it more effective for complex reasoning tasks, science and programming.
History
Background
According to leaked information, o1 was formerly known within OpenAI as "Q*", and later as "Strawberry". The codename "Q*" first surfaced in November 2023, around the time of Sam Altman's ousting and subsequent reinstatement, with rumors suggesting that this experimental model had shown promising results on mathematical benchmarks. In July 2024, Reuters reported that OpenAI was developing a generative pre-trained transformer known as "Strawberry".
Release
"o1-preview" and "o1-mini" were released on September 12, 2024, for ChatGPT Plus and Team users. GitHub started testing the integration of o1-preview in its Copilot service the same day.
OpenAI noted that o1 is the first of a series of "reasoning" models, and that it was planning to add access to o1-mini to all ChatGPT free users. o1-preview's API is several times more expensive than GPT-4o.
Capabilities
According to OpenAI, o1 has been trained using a new optimization algorithm and a dataset specifically tailored to it. The training leverages reinforcement learning. OpenAI described o1 as a complement to GPT-4o rather than a successor.
o1 spends additional time thinking (generating a chain of thought) before generating an answer, which makes it more effective for complex reasoning tasks, particularly in science and mathematics. Compared to previous models, o1 has been trained to generate long "chains of thought" before returning a final answer. According to Mira Murati, this ability to think before responding represents a new, additional paradigm, which is improving model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power. OpenAI's test results suggest a correlation between accuracy and the logarithm of the amount of compute spent thinking before answering.
o1-preview performed approximately at a PhD level on benchmark tests related to physics, chemistry, and biology. On the American Invitational Mathematics Examination, it solved 83% (12.5/15) of the problems, compared to 13% (1.8/15) for GPT-4o. It also ranked in the 89th percentile in Codeforces coding competitions. o1-mini is faster and 80% cheaper than o1-preview. It is particularly suitable for programming and STEM-related tasks, but does not have the same "broad world knowledge" as o1-preview.
OpenAI noted that o1's reasoning capabilities make it better at adhering to safety rules provided in the prompt's context window. OpenAI reported that during a test, one instance of o1-preview exploited a misconfiguration to succeed at a task that should have been infeasible due to a bug. OpenAI also granted early access to the UK and US AI Safety Institutes for research, evaluation, and testing. Dan Hendrycks wrote that "The model already outperforms PhD scientists most of the time on answering questions related to bioweapons." He suggested that these concerning capabilities will continue to increase.
Limitations
o1 usually requires more computing time and power than other GPT models by OpenAI, because it generates long chains of thought before making the final response.
According to OpenAI, o1 may "fake alignment", that is, generate a response that is contrary to accuracy and its own chain of thought, in about 0.38% of cases.
OpenAI forbids users from trying to reveal o1's chain of thought, which is hidden by design and not trained to comply with the company's policies. Prompts are monitored, and users who intentionally or accidentally violate this are warned and may lose their access to o1. OpenAI cites AI safety and competitive advantage as reasons for the restriction, which has been described as a loss of transparency by developers who work with large language models (LLMs).
In October 2024, researchers at Apple submitted a preprint reporting that LLMs such as o1 may be replicating reasoning steps from their training data. By changing the numbers and names used in a math problem or simply running the same problem again, LLMs would perform somewhat worse than their best benchmark results. Adding extraneous but logically inconsequential information to the problems caused a much greater drop in performance, ranging from −17.5% for o1-preview and −29.1% for o1-mini to −65.7% for the worst model tested.
References
OpenAI
ChatGPT
Artificial intelligence
Spinor
In geometry and physics, spinors (pronounced "spinner") are elements of a complex number-based vector space that can be associated with Euclidean space. A spinor transforms linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation, but unlike geometric vectors and tensors, a spinor transforms to its negative when the
space rotates through 360° (see picture). It takes a rotation of 720° for a spinor to go back to its original state. This property characterizes spinors: spinors can be viewed as the "square roots" of vectors (although this is inaccurate and may be misleading; they are better viewed as "square roots" of sections of vector bundles – in the case of the exterior algebra bundle of the cotangent bundle, they thus become "square roots" of differential forms).
It is also possible to associate a substantially similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles.
Spinors are characterized by the specific way in which they behave under rotations. They change in different ways depending not just on the overall final rotation, but the details of how that rotation was achieved (by a continuous path in the rotation group). There are two topologically distinguishable classes (homotopy classes) of paths through rotations that result in the same overall rotation, as illustrated by the belt trick puzzle. These two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class. It doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The space of spinors by definition is equipped with a (complex) linear representation of the spin group, meaning that elements of the spin group act as linear transformations on the space of spinors, in a way that genuinely depends on the homotopy class. In mathematical terms, spinors are described by a double-valued projective representation of the rotation group SO(3).
Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. A Clifford space operates on a spinor space, and the elements of a spinor space are spinors. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.
Introduction
What characterizes spinors and distinguishes them from geometric vectors and other tensors is subtle. Consider applying a rotation to the coordinates of a system. No object in the system itself has moved, only the coordinates have, so there will always be a compensating change in those coordinate values when applied to any object of the system. Geometrical vectors, for example, have components that will undergo the same rotation as the coordinates. More broadly, any tensor associated with the system (for instance, the stress of some medium) also has coordinate descriptions that adjust to compensate for changes to the coordinate system itself.
Spinors do not appear at this level of the description of a physical system, when one is concerned only with the properties of a single isolated rotation of the coordinates. Rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is gradually (continuously) rotated between some initial and final configuration. For any of the familiar and intuitive ("tensorial") quantities associated with the system, the transformation law does not depend on the precise details of how the coordinates arrived at their final configuration. Spinors, on the other hand, are constructed in such a way that makes them sensitive to how the gradual rotation of the coordinates arrived there: They exhibit path-dependence. It turns out that, for any final configuration of the coordinates, there are actually two ("topologically") inequivalent gradual (continuous) rotations of the coordinate system that result in this same configuration. This ambiguity is called the homotopy class of the gradual rotation. The belt trick (in which both ends of the rotated object are physically tethered to an external reference) demonstrates two different rotations, one through an angle of 2π and the other through an angle of 4π, having the same final configurations but different classes. Spinors actually exhibit a sign-reversal that genuinely depends on this homotopy class. This distinguishes them from vectors and other tensors, none of which can feel the class.
Spinors can be exhibited as concrete objects using a choice of Cartesian coordinates. In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves, realizing it as a group of rotations among them, but it also acts on the column vectors (that is, the spinors).
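A brief numerical sketch (added here for illustration, using numpy and scipy rather than anything from the article) makes the sign behaviour concrete: the SU(2) element exp(−iασ₃/2) rotates a two-component spinor by the angle α about the z-axis; after α = 2π the spinor returns to minus itself, while a vector, which transforms by conjugation of the Pauli matrices, cannot see the sign.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices sigma_1, sigma_2, sigma_3
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def spin_rotation(alpha: float) -> np.ndarray:
    """SU(2) element representing a rotation by angle alpha about the z-axis."""
    return expm(-1j * alpha / 2 * sigma[2])

psi = np.array([1.0, 0.0], dtype=complex)     # a spinor ("spin up" along z)

U_2pi = spin_rotation(2 * np.pi)
print(np.allclose(U_2pi @ psi, -psi))         # True: a 360 degree rotation flips the spinor's sign

U_4pi = spin_rotation(4 * np.pi)
print(np.allclose(U_4pi @ psi, psi))          # True: 720 degrees restores it

# Vectors transform by conjugation, v_k sigma_k -> U (v_k sigma_k) U^dagger,
# so after a 360 degree rotation they are unchanged and never see the sign.
print(np.allclose(U_2pi @ sigma[0] @ U_2pi.conj().T, sigma[0]))   # True
```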
More generally, a Clifford algebra can be constructed from any vector space V equipped with a (nondegenerate) quadratic form, such as Euclidean space with its standard dot product or Minkowski space with its standard Lorentz metric. The space of spinors is the space of column vectors with 2^[dim V/2] components. The orthogonal Lie algebra (i.e., the infinitesimal "rotations") and the spin group associated to the quadratic form are both (canonically) contained in the Clifford algebra, so every Clifford algebra representation also defines a representation of the Lie algebra and the spin group. Depending on the dimension and metric signature, this realization of spinors as column vectors may be irreducible or it may decompose into a pair of so-called "half-spin" or Weyl representations. When the vector space V is four-dimensional, the algebra is described by the gamma matrices.
Mathematical definition
The space of spinors is formally defined as the fundamental representation of the Clifford algebra. (This may or may not decompose into irreducible representations.) The space of spinors may also be defined as a spin representation of the orthogonal Lie algebra. These spin representations are also characterized as the finite-dimensional projective representations of the special orthogonal group that do not factor through linear representations. Equivalently, a spinor is an element of a finite-dimensional group representation of the spin group on which the center acts non-trivially.
Overview
There are essentially two frameworks for viewing the notion of a spinor: the representation theoretic point of view and the geometric point of view.
Representation theoretic point of view
From a representation theoretic point of view, one knows beforehand that there are some representations of the Lie algebra of the orthogonal group that cannot be formed by the usual tensor constructions. These missing representations are then labeled the spin representations, and their constituents spinors. From this view, a spinor must belong to a representation of the double cover of the rotation group SO(n, ℝ), or more generally of a double cover of the generalized special orthogonal group SO+(p, q, ℝ) on spaces with a metric signature of (p, q). These double covers are Lie groups, called the spin groups Spin(n) or Spin(p, q). All the properties of spinors, and their applications and derived objects, are manifested first in the spin group. Representations of the double covers of these groups yield double-valued projective representations of the groups themselves. (This means that the action of a particular rotation on vectors in the quantum Hilbert space is only defined up to a sign.)
In summary, given a representation specified by the data (V, Spin(p, q), ρ), where V is a vector space over K = ℝ or ℂ and ρ is a homomorphism ρ : Spin(p, q) → GL(V), a spinor is an element of the vector space V.
Geometric point of view
From a geometrical point of view, one can explicitly construct the spinors and then examine how they behave under the action of the relevant Lie groups. This latter approach has the advantage of providing a concrete and elementary description of what a spinor is. However, such a description becomes unwieldy when complicated properties of the spinors, such as Fierz identities, are needed.
Clifford algebras
The language of Clifford algebras (sometimes called geometric algebras) provides a complete picture of the spin representations of all the spin groups, and the various relationships between those representations, via the classification of Clifford algebras. It largely removes the need for ad hoc constructions.
In detail, let V be a finite-dimensional complex vector space with nondegenerate symmetric bilinear form g. The Clifford algebra Cℓ(V, g) is the algebra generated by V along with the anticommutation relation xy + yx = 2g(x, y). It is an abstract version of the algebra generated by the gamma or Pauli matrices. If V = ℂⁿ, with the standard form g(x, y) = x₁y₁ + ... + xₙyₙ, we denote the Clifford algebra by Cℓn(ℂ). Since by the choice of an orthonormal basis every complex vector space with non-degenerate form is isomorphic to this standard example, this notation is abused more generally whenever dim(V) = n. If n = 2k is even, Cℓn(ℂ) is isomorphic as an algebra (in a non-unique way) to the algebra Mat(2^k, ℂ) of 2^k × 2^k complex matrices (by the Artin–Wedderburn theorem and the easy to prove fact that the Clifford algebra is central simple). If n = 2k + 1 is odd, Cℓn(ℂ) is isomorphic to the algebra Mat(2^k, ℂ) ⊕ Mat(2^k, ℂ) of two copies of the 2^k × 2^k complex matrices. Therefore, in either case Cℓn(ℂ) has a unique (up to isomorphism) irreducible representation (also called simple Clifford module), commonly denoted by Δ, of dimension 2^[n/2]. Since the Lie algebra so(n, ℂ) is embedded as a Lie subalgebra in Cℓn(ℂ) equipped with the Clifford algebra commutator as Lie bracket, the space Δ is also a Lie algebra representation of so(n, ℂ) called a spin representation. If n is odd, this Lie algebra representation is irreducible. If n is even, it splits further into two irreducible representations called the Weyl or half-spin representations.
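To make these statements concrete, the sketch below (an illustration added here; the particular Kronecker-product construction is one convenient choice, not taken from the article) builds four generators of Cℓ4(ℂ) as 4 × 4 matrices and verifies the defining anticommutation relations, so that the spinors in this realization are column vectors with 2^[4/2] = 4 components.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Generators of the complex Clifford algebra Cl_4 acting on C^4,
# built from Kronecker products of Pauli matrices (one convenient choice of basis).
e = [np.kron(s1, I2), np.kron(s2, I2), np.kron(s3, s1), np.kron(s3, s2)]

# Check the defining anticommutation relations e_i e_j + e_j e_i = 2 delta_ij I.
for i in range(4):
    for j in range(4):
        anticomm = e[i] @ e[j] + e[j] @ e[i]
        expected = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anticomm, expected)

print("All anticommutation relations hold; spinors here are column vectors in C^4.")
```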
Irreducible representations over the reals in the case when V is a real vector space are much more intricate, and the reader is referred to the Clifford algebra article for more details.
Spin groups
Spinors form a vector space, usually over the complex numbers, equipped with a linear group representation of the spin group that does not factor through a representation of the group of rotations (see diagram). The spin group is the group of rotations keeping track of the homotopy class. Spinors are needed to encode basic information about the topology of the group of rotations because that group is not simply connected, but the simply connected spin group is its double cover. So for every rotation there are two elements of the spin group that represent it. Geometric vectors and other tensors cannot feel the difference between these two elements, but they produce opposite signs when they affect any spinor under the representation. Thinking of the elements of the spin group as homotopy classes of one-parameter families of rotations, each rotation is represented by two distinct homotopy classes of paths to the identity. If a one-parameter family of rotations is visualized as a ribbon in space, with the arc length parameter of that ribbon being the parameter (its tangent, normal, binormal frame actually gives the rotation), then these two distinct homotopy classes are visualized in the two states of the belt trick puzzle (above). The space of spinors is an auxiliary vector space that can be constructed explicitly in coordinates, but ultimately only exists up to isomorphism in that there is no "natural" construction of them that does not rely on arbitrary choices such as coordinate systems. A notion of spinors can be associated, as such an auxiliary mathematical object, with any vector space equipped with a quadratic form such as Euclidean space with its standard dot product, or Minkowski space with its Lorentz metric. In the latter case, the "rotations" include the Lorentz boosts, but otherwise the theory is substantially similar.
Spinor fields in physics
The constructions given above, in terms of Clifford algebra or representation theory, can be thought of as defining spinors as geometric objects in zero-dimensional space-time. To obtain the spinors of physics, such as the Dirac spinor, one extends the construction to obtain a spin structure on 4-dimensional space-time (Minkowski space). Effectively, one starts with the tangent manifold of space-time, each point of which is a 4-dimensional vector space with SO(3,1) symmetry, and then builds the spin group at each point. The neighborhoods of points are endowed with concepts of smoothness and differentiability: the standard construction is one of a fiber bundle, the fibers of which are affine spaces transforming under the spin group. After constructing the fiber bundle, one may then consider differential equations, such as the Dirac equation, or the Weyl equation on the fiber bundle. These equations (Dirac or Weyl) have solutions that are plane waves, having symmetries characteristic of the fibers, i.e. having the symmetries of spinors, as obtained from the (zero-dimensional) Clifford algebra/spin representation theory described above. Such plane-wave solutions (or other solutions) of the differential equations can then properly be called fermions; fermions have the algebraic qualities of spinors. By general convention, the terms "fermion" and "spinor" are often used interchangeably in physics, as synonyms of one-another.
It appears that all fundamental particles in nature that are spin-1/2 are described by the Dirac equation, with the possible exception of the neutrino. There does not seem to be any a priori reason why this would be the case. A perfectly valid choice for spinors would be the non-complexified, real version of the construction, the Majorana spinor. There also does not seem to be any particular prohibition to having Weyl spinors appear in nature as fundamental particles.
The Dirac, Weyl, and Majorana spinors are interrelated, and their relation can be elucidated on the basis of real geometric algebra. Dirac and Weyl spinors are complex representations while Majorana spinors are real representations.
Weyl spinors are insufficient to describe massive particles, such as electrons, since the Weyl plane-wave solutions necessarily travel at the speed of light; for massive particles, the Dirac equation is needed. The initial construction of the Standard Model of particle physics starts with both the electron and the neutrino as massless Weyl spinors; the Higgs mechanism gives electrons a mass; the classical neutrino remained massless, and was thus an example of a Weyl spinor. However, because of observed neutrino oscillation, it is now believed that they are not Weyl spinors, but perhaps instead Majorana spinors. It is not known whether Weyl spinor fundamental particles exist in nature.
The situation for condensed matter physics is different: one can construct two and three-dimensional "spacetimes" in a large variety of different physical materials, ranging from semiconductors to far more exotic materials. In 2015, an international team led by Princeton University scientists announced that they had found a quasiparticle that behaves as a Weyl fermion.
Spinors in representation theory
One major mathematical application of the construction of spinors is to make possible the explicit construction of linear representations of the Lie algebras of the special orthogonal groups, and consequently spinor representations of the groups themselves. At a more profound level, spinors have been found to be at the heart of approaches to the Atiyah–Singer index theorem, and to provide constructions in particular for discrete series representations of semisimple groups.
The spin representations of the special orthogonal Lie algebras are distinguished from the tensor representations given by Weyl's construction by the weights. Whereas the weights of the tensor representations are integer linear combinations of the roots of the Lie algebra, those of the spin representations are half-integer linear combinations thereof. Explicit details can be found in the spin representation article.
Attempts at intuitive understanding
The spinor can be described, in simple terms, as "vectors of a space the transformations of which are related in a particular way to rotations in physical space".
Several ways of illustrating everyday analogies have been formulated in terms of the plate trick, tangloids and other examples of orientation entanglement.
Nonetheless, the concept is generally considered notoriously difficult to understand, as illustrated by a statement of Michael Atiyah recounted by Dirac's biographer Graham Farmelo.
History
The most general mathematical form of spinors was discovered by Élie Cartan in 1913. The word "spinor" was coined by Paul Ehrenfest in his work on quantum physics.
Spinors were first applied to mathematical physics by Wolfgang Pauli in 1927, when he introduced his spin matrices. The following year, Paul Dirac discovered the fully relativistic theory of electron spin by showing the connection between spinors and the Lorentz group. By the 1930s, Dirac, Piet Hein and others at the Niels Bohr Institute (then known as the Institute for Theoretical Physics of the University of Copenhagen) created toys such as Tangloids to teach and model the calculus of spinors.
Spinor spaces were represented as left ideals of a matrix algebra in 1930, by Gustave Juvet and by Fritz Sauter. More specifically, instead of representing spinors as complex-valued 2D column vectors as Pauli had done, they represented them as complex-valued 2 × 2 matrices in which only the elements of the left column are non-zero. In this manner the spinor space became a minimal left ideal in the matrix algebra Mat(2, ℂ).
In 1947 Marcel Riesz constructed spinor spaces as elements of a minimal left ideal of Clifford algebras. In 1966/1967, David Hestenes replaced spinor spaces by the even subalgebra Cℓ01,3(ℝ) of the spacetime algebra Cℓ1,3(ℝ). Since the 1980s, the theoretical physics group at Birkbeck College around David Bohm and Basil Hiley has been developing algebraic approaches to quantum theory that build on Sauter's and Riesz's identification of spinors with minimal left ideals.
Examples
Some simple examples of spinors in low dimensions arise from considering the even-graded subalgebras of the Clifford algebra Cℓp,q(ℝ). This is an algebra built up from an orthonormal basis of n = p + q mutually orthogonal vectors under addition and multiplication, p of which have norm +1 and q of which have norm −1, with the product rule for the basis vectors
eiei = +1 if i is one of the p basis vectors of norm +1,
eiei = −1 if i is one of the q basis vectors of norm −1,
eiej = −ejei if i ≠ j.
Two dimensions
The Clifford algebra Cℓ2,0(ℝ) is built up from a basis of one unit scalar, 1, two orthogonal unit vectors, σ1 and σ2, and one unit pseudoscalar i = σ1σ2. From the definitions above, it is evident that (σ1)^2 = (σ2)^2 = 1, and (σ1σ2)^2 = σ1σ2σ1σ2 = −σ1σ1σ2σ2 = −1.
The even subalgebra Cℓ02,0(ℝ), spanned by even-graded basis elements of Cℓ2,0(ℝ), determines the space of spinors via its representations. It is made up of real linear combinations of 1 and σ1σ2. As a real algebra, Cℓ02,0(ℝ) is isomorphic to the field of complex numbers ℂ. As a result, it admits a conjugation operation (analogous to complex conjugation), sometimes called the reverse of a Clifford element, defined by (a + bσ1σ2)∗ = a + bσ2σ1, which, by the Clifford relations, can be written (a + bσ1σ2)∗ = a − bσ1σ2.
The action of an even Clifford element γ on vectors, regarded as 1-graded elements of Cℓ2,0(ℝ), is determined by mapping a general vector u = a1σ1 + a2σ2 to the vector γ(u) = γuγ∗, where γ∗ is the conjugate of γ, and the product is Clifford multiplication. In this situation, a spinor is an ordinary complex number. The action of γ on a spinor φ is given by ordinary complex multiplication: γ(φ) = γφ.
An important feature of this definition is the distinction between ordinary vectors and spinors, manifested in how the even-graded elements act on each of them in different ways. In general, a quick check of the Clifford relations reveals that even-graded elements conjugate-commute with ordinary vectors: γ(u) = γuγ∗ = γ^2u. On the other hand, in comparison with its action on spinors γ(φ) = γφ, the action of γ on ordinary vectors appears as the square of its action on spinors.
Consider, for example, the implication this has for plane rotations. Rotating a vector through an angle of θ corresponds to γ^2 = exp(θσ1σ2), so that the corresponding action on spinors is via γ = ±exp(θσ1σ2/2). In general, because of logarithmic branching, it is impossible to choose a sign in a consistent way. Thus the representation of plane rotations on spinors is two-valued.
In applications of spinors in two dimensions, it is common to exploit the fact that the algebra of even-graded elements (that is just the ring of complex numbers) is identical to the space of spinors. So, by abuse of language, the two are often conflated. One may then talk about "the action of a spinor on a vector". In a general setting, such statements are meaningless. But in dimensions 2 and 3 (as applied, for example, to computer graphics) they make sense.
Examples
The even-graded element γ = (1 − σ1σ2)/√2 corresponds to a vector rotation of 90° from σ1 around towards σ2, which can be checked by confirming that γ(a1σ1 + a2σ2)γ∗ = a1σ2 − a2σ1. It corresponds to a spinor rotation of only 45°, however: it sends the spinor 1 to (1 − σ1σ2)/√2, i.e. the complex number 1 to (1 − i)/√2.
Similarly the even-graded element γ^2 = −σ1σ2 corresponds to a vector rotation of 180°: it sends a1σ1 + a2σ2 to −a1σ1 − a2σ2; but it is a spinor rotation of only 90°: it multiplies spinors by −σ1σ2, i.e. by the complex number −i.
Continuing on further, the even-graded element −1 corresponds to a vector rotation of 360°: it sends every vector u to (−1)u(−1) = u; but it is a spinor rotation of 180°, multiplying every spinor by −1.
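These double-angle statements can be checked numerically. The following is a minimal sketch in Python with NumPy, assuming a representation of Cℓ2,0(ℝ) by real 2 × 2 matrices (σ1 and σ2 taken as real Pauli-type matrices); the variable names are purely illustrative.

    import numpy as np

    s1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # sigma_1, squares to +1
    s2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_2, squares to +1
    I2 = np.eye(2)
    s12 = s1 @ s2                              # unit pseudoscalar sigma_1 sigma_2, squares to -1

    gamma = (I2 - s12) / np.sqrt(2)            # even-graded element of the 90-degree vector rotation
    gamma_rev = (I2 + s12) / np.sqrt(2)        # its reverse (conjugate)

    # Acting on the vector sigma_1 by gamma u gamma* gives sigma_2: a 90-degree vector rotation.
    print(np.allclose(gamma @ s1 @ gamma_rev, s2))

    # Acting on the spinor 1 by left multiplication gives (1 - sigma_1 sigma_2)/sqrt(2),
    # i.e. the complex number (1 - i)/sqrt(2): a rotation through only 45 degrees.
    a, b = gamma[0, 0], gamma[0, 1]            # coefficients of 1 and sigma_1 sigma_2
    print(np.degrees(np.arctan2(b, a)))        # phase of the spinor, 45 degrees in magnitude

The printed values make the point of the section concrete: the vector turns through twice the angle that the spinor does.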
Three dimensions
The Clifford algebra Cℓ3,0(ℝ) is built up from a basis of one unit scalar, 1, three orthogonal unit vectors, σ1, σ2 and σ3, the three unit bivectors σ1σ2, σ2σ3, σ3σ1 and the pseudoscalar i = σ1σ2σ3. It is straightforward to show that (σ1)^2 = (σ2)^2 = (σ3)^2 = 1, and (σ1σ2)^2 = (σ2σ3)^2 = (σ3σ1)^2 = (σ1σ2σ3)^2 = −1.
The sub-algebra of even-graded elements is made up of scalar dilations,
u′ = ρ^(1/2)uρ^(1/2) = ρu,
and vector rotations
u′ = γuγ∗,
where
(1)    γ = cos(θ/2) − (aσ2σ3 + bσ3σ1 + cσ1σ2)sin(θ/2)
corresponds to a vector rotation through an angle θ about an axis defined by a unit vector v = aσ1 + bσ2 + cσ3.
As a special case, it is easy to see that, if v = σ3, this reproduces the σ1σ2 rotation considered in the previous section; and that such a rotation leaves the coefficients of vectors in the σ3 direction invariant, since γσ3γ∗ = (cos(θ/2) − σ1σ2 sin(θ/2)) σ3 (cos(θ/2) + σ1σ2 sin(θ/2)) = σ3.
The bivectors σ2σ3, σ3σ1 and σ1σ2 are in fact Hamilton's quaternions i, j, and k, discovered in 1843:
i = σ3σ2 = −σ2σ3,
j = σ1σ3 = −σ3σ1,
k = σ2σ1 = −σ1σ2.
With this identification, ij = k, jk = i, ki = j and i^2 = j^2 = k^2 = −1.
With the identification of the even-graded elements with the algebra of quaternions, as in the case of two dimensions the only representation of the algebra of even-graded elements is on itself. Thus the (real) spinors in three dimensions are quaternions, and the action of an even-graded element on a spinor is given by ordinary quaternionic multiplication.
Note that in the expression (1) for a vector rotation through an angle θ, the angle appearing in γ was halved. Thus the spinor rotation (ordinary quaternionic multiplication) will rotate the spinor through an angle one-half the measure of the angle of the corresponding vector rotation. Once again, the problem of lifting a vector rotation to a spinor rotation is two-valued: the expression (1) with 180° + θ/2 in place of θ/2 will produce the same vector rotation, but the negative of the spinor rotation.
The spinor/quaternion representation of rotations in 3D is becoming increasingly prevalent in computer geometry and other applications, because of the notable brevity of the corresponding spin matrix, and the simplicity with which they can be multiplied together to calculate the combined effect of successive rotations about different axes.
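As a concrete illustration of the computer-geometry use just mentioned, here is a minimal sketch of quaternionic rotation in Python with NumPy; the helper names (qmul, qconj, rotor, rotate) are illustrative and not taken from any particular library.

    import numpy as np

    def qmul(a, b):
        # Hamilton product of quaternions given as (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def qconj(a):
        return np.array([a[0], -a[1], -a[2], -a[3]])

    def rotor(axis, theta):
        # unit quaternion cos(theta/2) + sin(theta/2) * (unit axis): note the half angle
        axis = np.asarray(axis, dtype=float)
        axis = axis / np.linalg.norm(axis)
        return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

    def rotate(q, v):
        # sandwich product q v q*, with the vector embedded as a pure quaternion
        p = np.concatenate(([0.0], np.asarray(v, dtype=float)))
        return qmul(qmul(q, p), qconj(q))[1:]

    qz = rotor([0, 0, 1], np.pi / 2)            # 90-degree rotation about the z-axis
    print(np.round(rotate(qz, [1, 0, 0]), 6))   # sends (1, 0, 0) to (0, 1, 0)

    qx = rotor([1, 0, 0], np.pi / 2)            # 90-degree rotation about the x-axis
    combined = qmul(qx, qz)                     # compose: apply qz first, then qx
    print(np.round(rotate(combined, [1, 0, 0]), 6))

Composing two rotations is a single quaternion multiplication of the rotors, which is the brevity referred to above, and the half angle inside rotor is exactly the two-valued spinor behaviour discussed in the previous paragraph.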
Explicit constructions
A space of spinors can be constructed explicitly with concrete and abstract constructions. The equivalence of these constructions is a consequence of the uniqueness of the spinor representation of the complex Clifford algebra. For a complete example in dimension 3, see spinors in three dimensions.
Component spinors
Given a vector space V and a quadratic form g, an explicit matrix representation of the Clifford algebra Cℓ(V, g) can be defined as follows. Choose an orthonormal basis e1 … en for V, i.e. g(eμ, eν) = ημν where ημμ = ±1 and ημν = 0 for μ ≠ ν. Let k = ⌊n/2⌋. Fix a set of 2^k × 2^k matrices γ1 … γn such that γμγν + γνγμ = 2ημν1 (i.e. fix a convention for the gamma matrices). Then the assignment eμ → γμ extends uniquely to an algebra homomorphism by sending the monomial eμ1⋯eμk in the Clifford algebra to the product γμ1⋯γμk of matrices and extending linearly. The space of column vectors on which the gamma matrices act is now a space of spinors. One needs to construct such matrices explicitly, however. In dimension 3, defining the gamma matrices to be the Pauli sigma matrices gives rise to the familiar two-component spinors used in non-relativistic quantum mechanics. Likewise, using the Dirac gamma matrices gives rise to the four-component Dirac spinors used in 3+1-dimensional relativistic quantum field theory. In general, in order to define gamma matrices of the required kind, one can use the Weyl–Brauer matrices.
In this construction the representation of the Clifford algebra Cℓ(V, g), the Lie algebra so(V, g), and the Spin group Spin(V, g) all depend on the choice of the orthonormal basis and the choice of the gamma matrices. This can cause confusion over conventions, but invariants like traces are independent of choices. In particular, all physically observable quantities must be independent of such choices. In this construction a spinor can be represented as a vector of 2^k complex numbers and is denoted with spinor indices (usually α, β, γ). In the physics literature, such indices are often used to denote spinors even when an abstract spinor construction is used.
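The dimension-3 case can be checked directly. The following is a minimal sketch in Python with NumPy, assuming the standard Pauli matrices as the gamma matrices; it verifies the Clifford relation and shows the half-angle action of a rotation on a two-component spinor.

    import numpy as np

    # Pauli matrices as gamma matrices for Cl(3,0): sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I
    sigma = [
        np.array([[0, 1], [1, 0]], dtype=complex),
        np.array([[0, -1j], [1j, 0]], dtype=complex),
        np.array([[1, 0], [0, -1]], dtype=complex),
    ]
    I2 = np.eye(2, dtype=complex)

    for i in range(3):
        for j in range(3):
            anticommutator = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
            assert np.allclose(anticommutator, 2 * (i == j) * I2)

    # A rotation by angle theta about the z-axis acts on a two-component spinor through the half angle:
    theta = np.pi / 2
    U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma[2]
    print(U @ np.array([1.0, 0.0], dtype=complex))   # picks up a phase of theta/2 = 45 degrees

Traces of products of these matrices are examples of the basis-independent invariants mentioned above.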
Abstract spinors
There are at least two different, but essentially equivalent, ways to define spinors abstractly. One approach seeks to identify the minimal ideals for the left action of Cℓ(V, g) on itself. These are subspaces of the Clifford algebra of the form Cℓ(V, g)ω, admitting the evident action of Cℓ(V, g) by left-multiplication: c : xω → cxω. There are two variations on this theme: one can either find a primitive element ω that is a nilpotent element of the Clifford algebra, or one that is an idempotent. The construction via nilpotent elements is more fundamental in the sense that an idempotent may then be produced from it. In this way, the spinor representations are identified with certain subspaces of the Clifford algebra itself. The second approach is to construct a vector space using a distinguished subspace of V, and then specify the action of the Clifford algebra externally to that vector space.
In either approach, the fundamental notion is that of an isotropic subspace W. Each construction depends on an initial freedom in choosing this subspace. In physical terms, this corresponds to the fact that there is no measurement protocol that can specify a basis of the spin space, even if a preferred basis of V is given.
As above, we let (V, g) be an n-dimensional complex vector space equipped with a nondegenerate bilinear form. If V is a real vector space, then we replace V by its complexification and let g denote the induced bilinear form on the complexification. Let W be a maximal isotropic subspace, i.e. a maximal subspace of V such that g restricted to W vanishes. If n = 2k is even, then let W′ be an isotropic subspace complementary to W. If n = 2k + 1 is odd, let W′ be a maximal isotropic subspace with W ∩ W′ = 0, and let U be the orthogonal complement of W ⊕ W′. In both the even- and odd-dimensional cases W and W′ have dimension k. In the odd-dimensional case, U is one-dimensional, spanned by a unit vector u.
Minimal ideals
Since W is isotropic, multiplication of elements of W inside Cℓ(V, g) is skew. Hence vectors in W anti-commute, and Cℓ(W, g|W) = Cℓ(W, 0) is just the exterior algebra Λ∗W. Consequently, the k-fold product of W with itself, W^k, is one-dimensional. Let ω be a generator of W^k. In terms of a basis w1, …, wk of W, one possibility is to set ω = w1w2⋯wk.
Note that ω^2 = 0 (i.e., ω is nilpotent of order 2), and moreover, wω = 0 for all w ∈ W. The following facts can be proven easily:
If n = 2k, then the left ideal Δ = Cℓ(V, g)ω is a minimal left ideal. Furthermore, this splits into the two spin spaces Δ+ and Δ− on restriction to the action of the even Clifford algebra.
If n = 2k + 1, then the action of the unit vector u on the left ideal Cℓ(V, g)ω decomposes the space into a pair of isomorphic irreducible eigenspaces (both denoted by Δ), corresponding to the respective eigenvalues +1 and −1.
In detail, suppose for instance that n is even. Suppose that I is a non-zero left ideal contained in Cℓ(V, g)ω. We shall show that I must be equal to Cℓ(V, g)ω by proving that it contains a nonzero scalar multiple of ω.
Fix a basis wi of W and a complementary basis wi′ of W′ so that wiwj′ + wj′wi = δij and (wi)^2 = 0, (wi′)^2 = 0.
Note that any element of I must have the form αω, by virtue of our assumption that I ⊆ Cℓ(V, g)ω. Let αω be any such element. Using the chosen basis, we may write
α = Σ(i1 < ⋯ < ip) ai1…ip wi1′⋯wip′ + Σj Bjwj,
where the ai1...ip are scalars, and the Bj are auxiliary elements of the Clifford algebra. Observe now that in the product αω only the first group of terms survives, since wjω = 0 annihilates every term involving Bjwj.
Pick any nonzero monomial a in the expansion of α with maximal homogeneous degree in the elements wi′:
a = ai1…ip wi1′⋯wip′ (no summation implied),
then
wip⋯wi1 αω = ai1…ip ω
is a nonzero scalar multiple of ω, as required.
Note that for n even, this computation also shows that
Δ = Cℓ(V, g)ω = (Λ∗W′)ω
as a vector space. In the last equality we again used that W is isotropic. In physics terms, this shows that Δ is built up like a Fock space by creating spinors using anti-commuting creation operators in W′ acting on a vacuum ω.
Exterior algebra construction
The computations with the minimal ideal construction suggest that a spinor representation can also be defined directly using the exterior algebra of the isotropic subspace W.
Let Δ = Λ∗W denote the exterior algebra of W considered as a vector space only. This will be the spin representation, and its elements will be referred to as spinors.
The action of the Clifford algebra on Δ is defined first by giving the action of an element of V on Δ, and then showing that this action respects the Clifford relation and so extends to a homomorphism of the full Clifford algebra into the endomorphism ring End(Δ) by the universal property of Clifford algebras. The details differ slightly according to whether the dimension of V is even or odd.
When dim(V) is even, V = W ⊕ W′, where W′ is the chosen isotropic complement. Hence any v ∈ V decomposes uniquely as v = w + w′ with w ∈ W and w′ ∈ W′. The action of v on a spinor s is given by
c(v)s = ε(w)s + i(w′)s,
where i(w′) is the interior product with w′ using the nondegenerate quadratic form to identify V with V∗, and ε(w) denotes the exterior product. This action is sometimes called the Clifford product. It may be verified that
c(u)c(v) + c(v)c(u) = 2g(u, v),
and so c respects the Clifford relations and extends to a homomorphism from the Clifford algebra to End(Δ).
The spin representation Δ further decomposes into a pair of irreducible complex representations of the Spin group (the half-spin representations, or Weyl spinors) via Δ+ = ΛevenW and Δ− = ΛoddW.
When dim(V) is odd, V = W ⊕ U ⊕ W′, where U is spanned by a unit vector u orthogonal to W. The Clifford action c is defined as before on W ⊕ W′, while the Clifford action of (multiples of) u is defined by
c(u)α = α if α ∈ ΛevenW, and c(u)α = −α if α ∈ ΛoddW.
As before, one verifies that c respects the Clifford relations, and so induces a homomorphism.
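For concreteness, here is a minimal numerical sketch of the exterior algebra construction in the smallest even-dimensional case, dim(V) = 2, in Python with NumPy. The particular isotropic basis (w = (e1 + ie2)/2 and w′ = e1 − ie2, so that g(w, w′) = 1) and the normalization of the interior product are assumptions made for this illustration; the check is that ε(w) and i(w′) together satisfy the Clifford relations.

    import numpy as np

    # Delta = Lambda^* W has basis {1, w}; spinors are column vectors (coefficient of 1, coefficient of w).
    create = np.array([[0, 0], [1, 0]], dtype=complex)       # epsilon(w): exterior multiplication by w
    annihilate = np.array([[0, 1], [0, 0]], dtype=complex)   # i(w'): interior product with w', with g(w, w') = 1

    def clifford(lam, mu):
        # Clifford action of the vector v = lam*w + mu*w' on Delta
        # (the factor 2 is the normalization needed here so that c(v)^2 = g(v, v))
        return lam * create + 2 * mu * annihilate

    # In terms of the isotropic basis: e1 = w + w'/2 and e2 = -i*w + (i/2)*w'
    c1 = clifford(1.0, 0.5)
    c2 = clifford(-1j, 0.5j)

    I2 = np.eye(2)
    assert np.allclose(c1 @ c1, I2)                  # c(e1)^2 = g(e1, e1) = 1
    assert np.allclose(c2 @ c2, I2)                  # c(e2)^2 = g(e2, e2) = 1
    assert np.allclose(c1 @ c2 + c2 @ c1, 0 * I2)    # c(e1)c(e2) + c(e2)c(e1) = 2 g(e1, e2) = 0
    print("Clifford relations hold on the exterior algebra of W")

Larger even-dimensional cases work the same way, with one creation operator per basis vector of W, which is the Fock-space picture noted in the minimal ideal discussion.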
Hermitian vector spaces and spinors
If the vector space V has extra structure that provides a decomposition of its complexification into two maximal isotropic subspaces, then the definition of spinors (by either method) becomes natural.
The main example is the case that the real vector space V is a hermitian vector space (V, h), i.e., V is equipped with a complex structure J that is an orthogonal transformation with respect to the inner product g on V. Then the complexification of V splits into the ±i eigenspaces of J. These eigenspaces are isotropic for the complexification of g and can be identified with the complex vector space (V, J) and its complex conjugate (V, −J). Therefore, for a hermitian vector space (V, h) the exterior algebra of one of these eigenspaces (as well as that of its complex conjugate) is a spinor space for the underlying real euclidean vector space.
With the Clifford action as above but with contraction using the hermitian form, this construction gives a spinor space at every point of an almost Hermitian manifold and is the reason why every almost complex manifold (in particular every symplectic manifold) has a Spinc structure. Likewise, every complex vector bundle on a manifold carries a Spinc structure.
Clebsch–Gordan decomposition
A number of Clebsch–Gordan decompositions are possible on the tensor product of one spin representation with another. These decompositions express the tensor product in terms of the alternating representations of the orthogonal group.
For the real or complex case, the alternating representations are Γr = ΛrV, the representation of the orthogonal group on skew tensors of rank r.
In addition, for the real orthogonal groups, there are three characters (one-dimensional representations)
σ+ : O(p, q) → {−1, +1} given by σ+(R) = −1 if R reverses the spatial orientation of V, and +1 if R preserves the spatial orientation of V. (The spatial character.)
σ− : O(p, q) → {−1, +1} given by σ−(R) = −1 if R reverses the temporal orientation of V, and +1 if R preserves the temporal orientation of V. (The temporal character.)
σ = σ+σ− . (The orientation character.)
The Clebsch–Gordan decomposition allows one to define, among other things:
An action of spinors on vectors.
A Hermitian metric on the complex representations of the real spin groups.
A Dirac operator on each spin representation.
Even dimensions
If n = 2k is even, then the tensor product of Δ with the contragredient representation decomposes into a direct sum of the alternating representations, which can be seen explicitly by considering (in the Explicit construction) the action of the Clifford algebra on decomposable elements. The rightmost formulation follows from the transformation properties of the Hodge star operator. Note that on restriction to the even Clifford algebra, the paired summands are isomorphic, but under the full Clifford algebra they are not.
There is a natural identification of Δ with its contragredient representation via the conjugation in the Clifford algebra, so Δ ⊗ Δ also decomposes in the above manner. Furthermore, under the even Clifford algebra, the half-spin representations decompose further.
For the complex representations of the real Clifford algebras, the associated reality structure on the complex Clifford algebra descends to the space of spinors (via the explicit construction in terms of minimal ideals, for instance). In this way, we obtain the complex conjugate of the representation Δ, and the following isomorphism is seen to hold:
In particular, note that the representation Δ of the orthochronous spin group is a unitary representation. In general, there are Clebsch–Gordan decompositions
In metric signature (p, q), the following isomorphisms hold for the conjugate half-spin representations
If q is even, then and
If q is odd, then and
Using these isomorphisms, one can deduce analogous decompositions for the tensor products of the half-spin representations .
Odd dimensions
If is odd, then
In the real case, once again the isomorphism holds
Hence there is a Clebsch–Gordan decomposition (again using the Hodge star to dualize) given by
Consequences
There are many far-reaching consequences of the Clebsch–Gordan decompositions of the spinor spaces. The most fundamental of these pertain to Dirac's theory of the electron, among whose basic requirements are
A manner of regarding the product of two spinors as a scalar. In physical terms, a spinor should determine a probability amplitude for the quantum state.
A manner of regarding the product of a spinor and a conjugate spinor as a vector. This is an essential feature of Dirac's theory, which ties the spinor formalism to the geometry of physical space.
A manner of regarding a spinor as acting upon a vector, by an expression such as ψv. In physical terms, this represents an electric current of Maxwell's electromagnetic theory, or more generally a probability current.
Summary in low dimensions
In 1 dimension (a trivial example), the single spinor representation is formally Majorana, a real 1-dimensional representation that does not transform.
In 2 Euclidean dimensions, the left-handed and the right-handed Weyl spinor are 1-component complex representations, i.e. complex numbers that get multiplied by e±iφ/2 under a rotation by angle φ.
In 3 Euclidean dimensions, the single spinor representation is 2-dimensional and quaternionic. The existence of spinors in 3 dimensions follows from the isomorphism of the groups SU(2) ≅ Spin(3) that allows us to define the action of Spin(3) on a complex 2-component column (a spinor); the generators of SU(2) can be written as Pauli matrices.
In 4 Euclidean dimensions, the corresponding isomorphism is Spin(4) ≅ SU(2) × SU(2). There are two inequivalent quaternionic 2-component Weyl spinors and each of them transforms under one of the SU(2) factors only.
In 5 Euclidean dimensions, the relevant isomorphism is Spin(5) ≅ Sp(2), which implies that the single spinor representation is 4-dimensional and quaternionic.
In 6 Euclidean dimensions, the isomorphism Spin(6) ≅ SU(4) guarantees that there are two 4-dimensional complex Weyl representations that are complex conjugates of one another.
In 7 Euclidean dimensions, the single spinor representation is 8-dimensional and real; no isomorphisms to a Lie algebra from another series (A or C) exist from this dimension on.
In 8 Euclidean dimensions, there are two Weyl–Majorana real 8-dimensional representations that are related to the 8-dimensional real vector representation by a special property of Spin(8) called triality.
In d + 8 dimensions, the number of distinct irreducible spinor representations and their reality (whether they are real, pseudoreal, or complex) mimics the structure in d dimensions, but their dimensions are 16 times larger; this allows one to understand all remaining cases. See Bott periodicity.
In spacetimes with p spatial and q time-like directions, the dimensions viewed as dimensions over the complex numbers coincide with the case of the (p + q)-dimensional Euclidean space, but the reality projections mimic the structure in |p − q| Euclidean dimensions. For example, in 3 + 1 dimensions there are two non-equivalent Weyl complex (like in 2 dimensions) 2-component (like in 4 dimensions) spinors, which follows from the isomorphism SL(2, ℂ) ≅ Spin(3,1).
See also
Anyon
Dirac equation in the algebra of physical space
Eigenspinor
Einstein–Cartan theory
Projective representation
Pure spinor
Spin-1/2
Spinor bundle
Supercharge
Twistor theory
Spacetime algebra
Central force
In classical mechanics, a central force on an object is a force that is directed towards or away from a point called the center of force.
F = F(r) = F(||r||)r̂, where F is the force, F(r) is the vector-valued force function, F(||r||) is the scalar-valued force function, r is the position vector, ||r|| is its length, and r̂ = r/||r|| is the corresponding unit vector.
Not all central force fields are conservative or spherically symmetric. However, a central force is conservative if and only if it is spherically symmetric or rotationally invariant.
Properties
Central forces that are conservative can always be expressed as the negative gradient of a potential energy:
F(r) = −∇U(r), where U(r) = ∫ from s = ||r|| to s = rmax of F(s) ds
(the upper bound of integration is arbitrary, as the potential is defined up to an additive constant).
In a conservative field, the total mechanical energy (kinetic and potential) is conserved:
E = (1/2)m||ṙ||^2 + (1/2)Iω^2 + U(r) = constant
(where 'ṙ' denotes the derivative of 'r' with respect to time, that is the velocity, 'I' denotes the moment of inertia of that body and 'ω' denotes its angular velocity), and in a central force field, so is the angular momentum:
L = r × mv = constant,
because the torque exerted by the force is zero. As a consequence, the body moves on the plane perpendicular to the angular momentum vector and containing the origin, and obeys Kepler's second law. (If the angular momentum is zero, the body moves along the line joining it with the origin.)
It can also be shown that an object that moves under the influence of any central force obeys Kepler's second law. However, the first and third laws depend on the inverse-square nature of Newton's law of universal gravitation and do not hold in general for other central forces.
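As a numerical illustration of the conserved angular momentum (and hence of the constant areal velocity behind Kepler's second law), here is a minimal sketch in Python with NumPy for an attractive inverse-square central force with unit constants; the integrator, step size and initial conditions are arbitrary choices for the example.

    import numpy as np

    def acceleration(r, k=1.0):
        # attractive inverse-square central force per unit mass: a = -k r / |r|^3
        return -k * r / np.linalg.norm(r) ** 3

    dt, steps = 1e-3, 20000
    m = 1.0
    r = np.array([1.0, 0.0])
    v = np.array([0.0, 0.8])          # below circular speed, so the orbit is a bound ellipse

    L0 = m * np.cross(r, v)           # angular momentum (z-component) at the start
    for _ in range(steps):            # leapfrog / velocity-Verlet integration
        v_half = v + 0.5 * dt * acceleration(r)
        r = r + dt * v_half
        v = v_half + 0.5 * dt * acceleration(r)

    # The angular momentum is unchanged to integrator accuracy, so the areal velocity L/(2m)
    # swept out by the position vector is constant: Kepler's second law.
    print(L0, m * np.cross(r, v))

A leapfrog step is used because it respects the conservation laws far better than naive Euler integration.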
As a consequence of being conservative, these specific central force fields are irrotational, that is, their curl is zero, except at the origin: ∇ × F(r) = 0.
Examples
Gravitational force and Coulomb force are two familiar examples, with F(r) being proportional to 1/r^2 only. An object in such a force field with negative F (corresponding to an attractive force) obeys Kepler's laws of planetary motion.
The force field of a spatial harmonic oscillator is central, with F(r) proportional to r only and negative.
By Bertrand's theorem, these two, F(r) = −k/r^2 and F(r) = −kr, are the only possible central force fields where all bounded orbits are stable closed orbits. However, there exist other force fields, which have some closed orbits.
See also
Classical central-force problem
Particle in a spherically symmetric potential
Energy flux
Energy flux is the rate of transfer of energy through a surface. The quantity is defined in two different ways, depending on the context:
Total rate of energy transfer (not per unit area); SI units: W = J⋅s−1.
Specific rate of energy transfer (total normalized per unit area); SI units: W⋅m−2 = J⋅m−2⋅s−1:
This is a vector quantity, its components being determined in terms of the normal (perpendicular) direction to the surface of measurement.
This is sometimes called energy flux density, to distinguish it from the first definition.
Radiative flux, heat flux, and sound energy flux are specific cases of this meaning.
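The two definitions are related by integrating the flux density over the surface; for a flat surface in a uniform flux this is just a product. Here is a minimal numerical sketch in Python with NumPy, taking the approximate solar constant (about 1361 W⋅m−2) as an example value:

    import numpy as np

    flux_density = 1361.0                     # W/m^2, approximate solar constant (energy flux density)
    area = 2.0                                # m^2, flat surface
    normal_angle = np.radians(30.0)           # angle between the flux direction and the surface normal

    # total rate of energy transfer = (normal component of flux density) * area
    power = flux_density * np.cos(normal_angle) * area
    print(power, "W")

For a curved surface or a non-uniform flux, the product becomes a surface integral of the normal component of the flux density.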
See also
Energy flow (ecology)
Flux
Irradiance
Poynting vector
Stress–energy tensor
Energy current
Engineering mathematics
Mathematical engineering (or engineering mathematics) is a branch of applied mathematics, concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary education typically consists of mathematical methods and models courses.
See also
Industrial mathematics
Control theory, a mathematical discipline concerned with engineering
Further mathematics and additional mathematics, A-level mathematics courses with similar content
Mathematical methods in electronics, signal processing and radio engineering
The Structure of Scientific Revolutions
The Structure of Scientific Revolutions is a book about the history of science by the philosopher Thomas S. Kuhn. Its publication was a landmark event in the history, philosophy, and sociology of science. Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity and cumulative progress, referred to as periods of "normal science", were interrupted by periods of revolutionary science. The discovery of "anomalies" accumulating and precipitating revolutions in science leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, alter the rules of the game and change the "map" directing new research.
For example, Kuhn's analysis of the Copernican Revolution emphasized that, in its beginning, it did not offer more accurate predictions of celestial events, such as planetary positions, than the Ptolemaic system, but instead appealed to some practitioners based on a promise of better, simpler solutions that might be developed at some point in the future. Kuhn called the core concepts of an ascendant revolution its "paradigms" and thereby launched this word into widespread analogical use in the second half of the 20th century. Kuhn's insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure, caused an uproar in reaction to his work. Kuhn addressed concerns in the 1969 postscript to the second edition. For some commentators The Structure of Scientific Revolutions introduced a realistic humanism into the core of science, while for others the nobility of science was tarnished by Kuhn's introduction of an irrational element into the heart of its greatest achievements.
History
The Structure of Scientific Revolutions was first published as a monograph in the International Encyclopedia of Unified Science, then as a book by University of Chicago Press in 1962. In 1969, Kuhn added a postscript to the book in which he replied to critical responses to the first edition. A 50th Anniversary Edition (with an introductory essay by Ian Hacking) was published by the University of Chicago Press in April 2012.
Kuhn dated the genesis of his book to 1947, when he was a graduate student at Harvard University and had been asked to teach a science class for humanities undergraduates with a focus on historical case studies. Kuhn later commented that until then, "I'd never read an old document in science." Aristotle's Physics was astonishingly unlike Isaac Newton's work in its concepts of matter and motion. Kuhn wrote: "as I was reading him, Aristotle appeared not only ignorant of mechanics, but a dreadfully bad physical scientist as well. About motion, in particular, his writings seemed to me full of egregious errors, both of logic and of observation." This was in an apparent contradiction with the fact that Aristotle was a brilliant mind. While perusing Aristotle's Physics, Kuhn formed the view that in order to properly appreciate Aristotle's reasoning, one must be aware of the scientific conventions of the time. Kuhn concluded that Aristotle's concepts were not "bad Newton," just different. This insight was the foundation of The Structure of Scientific Revolutions.
Central ideas regarding the process of scientific investigation and discovery had been anticipated by Ludwik Fleck in Genesis and Development of a Scientific Fact (1935). Fleck had developed the first system of the sociology of scientific knowledge. He claimed that the exchange of ideas led to the establishment of a thought collective, which, when developed sufficiently, separated the field into esoteric (professional) and exoteric (laymen) circles. Kuhn wrote the foreword to the 1979 edition of Fleck's book, noting that he read it in 1950 and was reassured that someone "saw in the history of science what I myself was finding there."
Kuhn was not confident about how his book would be received. Harvard University had denied his tenure a few years prior. By the mid-1980s, however, his book had achieved blockbuster status. When Kuhn's book came out in the early 1960s, "structure" was an intellectually popular word in many fields in the humanities and social sciences, including linguistics and anthropology, appealing in its idea that complex phenomena could reveal or be studied through basic, simpler structures. Kuhn's book contributed to that idea.
One theory to which Kuhn replies directly is Karl Popper's "falsificationism," which stresses falsifiability as the most important criterion for distinguishing between that which is scientific and that which is unscientific. Kuhn also addresses verificationism, a philosophical movement that emerged in the 1920s among logical positivists. The verifiability principle claims that meaningful statements must be supported by empirical evidence or logical requirements.
Synopsis
Basic approach
Kuhn's approach to the history and philosophy of science focuses on conceptual issues like the practice of normal science, influence of historical events, emergence of scientific discoveries, nature of scientific revolutions and progress through scientific revolutions. What sorts of intellectual options and strategies were available to people during a given period? What types of lexicons and terminology were known and employed during certain epochs? Stressing the importance of not attributing traditional thought to earlier investigators, Kuhn's book argues that the evolution of scientific theory does not emerge from the straightforward accumulation of facts, but rather from a set of changing intellectual circumstances and possibilities.
Kuhn did not see scientific theory as proceeding linearly from an objective, unbiased accumulation of all available data, but rather as paradigm-driven.
Historical examples of chemistry
Kuhn explains his ideas using examples taken from the history of science. For instance, eighteenth-century scientists believed that homogenous solutions were chemical compounds. Therefore, a combination of water and alcohol was generally classified as a compound. Nowadays it is considered to be a solution, but there was no reason then to suspect that it was not a compound. Water and alcohol would not separate spontaneously, nor will they separate completely upon distillation (they form an azeotrope). Water and alcohol can be combined in any proportion.
Under this paradigm, scientists believed that chemical reactions (such as the combination of water and alcohol) did not necessarily occur in fixed proportion. This belief was ultimately overturned by Dalton's atomic theory, which asserted that atoms can only combine in simple, whole-number ratios. Under this new paradigm, any reaction which did not occur in fixed proportion could not be a chemical process. This type of world-view transition among the scientific community exemplifies Kuhn's paradigm shift.
Copernican Revolution
A famous example of a revolution in scientific thought is the Copernican Revolution. In Ptolemy's school of thought, cycles and epicycles (with some additional concepts) were used for modeling the movements of the planets in a cosmos that had a stationary Earth at its center. As accuracy of celestial observations increased, complexity of the Ptolemaic cyclical and epicyclical mechanisms had to increase to maintain the calculated planetary positions close to the observed positions. Copernicus proposed a cosmology in which the Sun was at the center and the Earth was one of the planets revolving around it. For modeling the planetary motions, Copernicus used the tools he was familiar with, namely the cycles and epicycles of the Ptolemaic toolbox. Yet Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, his model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility.
Kuhn illustrates how a paradigm shift later became possible when Galileo Galilei introduced his new ideas concerning motion. Intuitively, when an object is set in motion, it soon comes to a halt. A well-made cart may travel a long distance before it stops, but unless something keeps pushing it, it will eventually stop moving. Aristotle had argued that this was presumably a fundamental property of nature: for the motion of an object to be sustained, it must continue to be pushed. Given the knowledge available at the time, this represented sensible, reasonable thinking.
Galileo put forward a bold alternative conjecture: suppose, he said, that we always observe objects coming to a halt simply because some friction is always occurring. Galileo had no equipment with which to objectively confirm his conjecture, but he suggested that without any friction to slow down an object in motion, its inherent tendency is to maintain its speed without the application of any additional force.
The Ptolemaic approach of using cycles and epicycles was becoming strained: there seemed to be no end to the mushrooming growth in complexity required to account for the observable phenomena. Johannes Kepler was the first person to abandon the tools of the Ptolemaic paradigm. He started to explore the possibility that the planet Mars might have an elliptical orbit rather than a circular one. Clearly, the angular velocity could not be constant, but it proved very difficult to find the formula describing the rate of change of the planet's angular velocity. After many years of calculations, Kepler arrived at what we now know as the law of equal areas.
Galileo's conjecture was merely that – a conjecture. So was Kepler's cosmology. But each conjecture increased the credibility of the other, and together, they changed the prevailing perceptions of the scientific community. Later, Newton showed that Kepler's three laws could all be derived from a single theory of motion and planetary motion. Newton solidified and unified the paradigm shift that Galileo and Kepler had initiated.
Coherence
One of the aims of science is to find models that will account for as many observations as possible within a coherent framework. Together, Galileo's rethinking of the nature of motion and Keplerian cosmology represented a coherent framework that was capable of rivaling the Aristotelian/Ptolemaic framework.
Once a paradigm shift has taken place, the textbooks are rewritten. Often the history of science too is rewritten, being presented as an inevitable process leading up to the current, established framework of thought. There is a prevalent belief that all hitherto-unexplained phenomena will in due course be accounted for in terms of this established framework. Kuhn states that scientists spend most (if not all) of their careers in a process of puzzle-solving. Their puzzle-solving is pursued with great tenacity, because the previous successes of the established paradigm tend to generate great confidence that the approach being taken guarantees that a solution to the puzzle exists, even though it may be very hard to find. Kuhn calls this process normal science.
As a paradigm is stretched to its limits, anomalies – failures of the current paradigm to take into account observed phenomena – accumulate. Their significance is judged by the practitioners of the discipline. Some anomalies may be dismissed as errors in observation, others as merely requiring small adjustments to the current paradigm that will be clarified in due course. Some anomalies resolve themselves spontaneously, having increased the available depth of insight along the way. But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing scientists will not lose faith in the established paradigm until a credible alternative is available; to lose faith in the solvability of the problems would in effect mean ceasing to be a scientist.
In any community of scientists, Kuhn states, there are some individuals who are bolder than most. These scientists, judging that a crisis exists, embark on what Kuhn calls revolutionary science, exploring alternatives to long-held, obvious-seeming assumptions. Occasionally this generates a rival to the established framework of thought. The new candidate paradigm will appear to be accompanied by numerous anomalies, partly because it is still so new and incomplete. The majority of the scientific community will oppose any conceptual change, and, Kuhn emphasizes, so they should. To fulfill its potential, a scientific community needs to contain both individuals who are bold and individuals who are conservative. There are many examples in the history of science in which confidence in the established frame of thought was eventually vindicated. Kuhn cites, as an example, that Alexis Clairaut, in 1750, was able to account accurately for the precession of the Moon's orbit using Newtonian theory, after sixty years of failed attempts. It is almost impossible to predict whether the anomalies in a candidate for a new paradigm will eventually be resolved. Those scientists who possess an exceptional ability to recognize a theory's potential will be the first whose preference is likely to shift in favour of the challenging paradigm. There typically follows a period in which there are adherents of both paradigms. In time, if the challenging paradigm is solidified and unified, it will replace the old paradigm, and a paradigm shift will have occurred.
Phases
Kuhn explains the process of scientific change as the result of various phases of paradigm change.
Phase 1 – It exists only once and is the pre-paradigm phase, in which there is no consensus on any particular theory. This phase is characterized by several incompatible and incomplete theories. Consequently, most scientific inquiry takes the form of lengthy books, as there is no common body of facts that may be taken for granted. When the actors in the pre-paradigm community eventually gravitate to one of these conceptual frameworks and ultimately to a widespread consensus on the appropriate choice of methods, terminology and on the kinds of experiment that are likely to contribute to increased insights, the old schools of thought disappear. The new paradigm leads to a more rigid definition of the research field, and those who are reluctant or unable to adapt are isolated or have to join rival groups.
Phase 2 – Normal science begins, in which puzzles are solved within the context of the dominant paradigm. As long as there is consensus within the discipline, normal science continues. Over time, progress in normal science may reveal anomalies, facts that are difficult to explain within the context of the existing paradigm. While usually these anomalies are resolved, in some cases they may accumulate to the point where normal science becomes difficult and where weaknesses in the old paradigm are revealed.
Phase 3 – If the paradigm proves chronically unable to account for anomalies, the community enters a crisis period. Crises are often resolved within the context of normal science. However, after significant efforts of normal science within a paradigm fail, science may enter the next phase.
Phase 4 – Paradigm shift, or scientific revolution, is the phase in which the underlying assumptions of the field are reexamined and a new paradigm is established.
Phase 5 – Post-revolution, the new paradigm's dominance is established and so scientists return to normal science, solving puzzles within the new paradigm.
A science may go through these cycles repeatedly, though Kuhn notes that it is a good thing for science that such shifts do not occur often or easily.
Incommensurability
According to Kuhn, the scientific paradigms preceding and succeeding a paradigm shift are so different that their theories are incommensurable—the new paradigm cannot be proven or disproven by the rules of the old paradigm, and vice versa. (A later interpretation by Kuhn of "commensurable" versus "incommensurable" was as a distinction between "languages", namely, that statements in commensurable languages were translatable fully from one to the other, while in incommensurable languages, strict translation is not possible.) The paradigm shift does not merely involve the revision or transformation of an individual theory, it changes the way terminology is defined, how the scientists in that field view their subject, and, perhaps most significantly, what questions are regarded as valid, and what rules are used to determine the truth of a particular theory. The new theories were not, as the scientists had previously thought, just extensions of old theories, but were instead completely new world views.
Such incommensurability exists not just before and after a paradigm shift, but in the periods in between conflicting paradigms. It is simply not possible, according to Kuhn, to construct an impartial language that can be used to perform a neutral comparison between conflicting paradigms, because the very terms used are integral to the respective paradigms, and therefore have different connotations in each paradigm. The advocates of mutually exclusive paradigms are in a difficult position: "Though each may hope to convert the other to his way of seeing science and its problems, neither may hope to prove his case. The competition between paradigms is not the sort of battle that can be resolved by proofs." Scientists subscribing to different paradigms end up talking past one another.
Kuhn states that the probabilistic tools used by verificationists are inherently inadequate for the task of deciding between conflicting theories, since they belong to the very paradigms they seek to compare. Similarly, observations that are intended to falsify a statement will fall under one of the paradigms they are supposed to help compare, and will therefore also be inadequate for the task. According to Kuhn, the concept of falsifiability is unhelpful for understanding why and how science has developed as it has. In the practice of science, scientists will only consider the possibility that a theory has been falsified if an alternative theory is available that they judge credible. If there is not, scientists will continue to adhere to the established conceptual framework. If a paradigm shift has occurred, the textbooks will be rewritten to state that the previous theory has been falsified.
Kuhn further developed his ideas regarding incommensurability in the 1980s and 1990s. In his unpublished manuscript The Plurality of Worlds, Kuhn introduces the theory of kind concepts: sets of interrelated concepts that are characteristic of a time period in a science and differ in structure from the modern analogous kind concepts. These different structures imply different "taxonomies" of things and processes, and this difference in taxonomies constitutes incommensurability. This theory is strongly naturalistic and draws on developmental psychology to "found a quasi-transcendental theory of experience and of reality."
Exemplar
Kuhn introduced the concept of an exemplar in a postscript to the second edition of The Structure of Scientific Revolutions (1970). He noted that he was substituting the term "exemplars" for "paradigm", meaning the problems and solutions that students of a subject learn from the beginning of their education. For example, physicists might have as exemplars the inclined plane, Kepler's laws of planetary motion, or instruments like the calorimeter.
According to Kuhn, scientific practice alternates between periods of normal science and revolutionary science. During periods of normalcy, scientists tend to subscribe to a large body of interconnecting knowledge, methods, and assumptions which make up the reigning paradigm (see paradigm shift). Normal science presents a series of problems that are solved as scientists explore their field. The solutions to some of these problems become well known and are the exemplars of the field.
Those who study a scientific discipline are expected to know its exemplars. There is no fixed set of exemplars, but for a physicist today it would probably include the harmonic oscillator from mechanics and the hydrogen atom from quantum mechanics.
Kuhn on scientific progress
The first edition of The Structure of Scientific Revolutions ended with a chapter titled "Progress through Revolutions", in which Kuhn spelled out his views on the nature of scientific progress. Since he considered problem solving (or "puzzle solving") to be a central element of science, Kuhn set out the conditions that a new candidate paradigm must meet to be accepted by a scientific community.
In the second edition, Kuhn added a postscript in which he elaborated his ideas on the nature of scientific progress. He described a thought experiment involving an observer who has the opportunity to inspect an assortment of theories, each corresponding to a single stage in a succession of theories. What if the observer is presented with these theories without any explicit indication of their chronological order? Kuhn anticipates that it will be possible to reconstruct their chronology on the basis of the theories' scope and content, because the more recent a theory is, the better it will be as an instrument for solving the kinds of puzzle that scientists aim to solve. Kuhn remarked: "That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress."
Influence and reception
The Structure of Scientific Revolutions has been credited with producing the kind of "paradigm shift" Kuhn discussed. Since the book's publication, over one million copies have been sold, including translations into sixteen different languages. In 1987, it was reported to be the twentieth-century book most frequently cited in the period 1976–1983 in the arts and the humanities.
Philosophy
The first extensive review of The Structure of Scientific Revolutions was authored by Dudley Shapere, a philosopher who interpreted Kuhn's work as a continuation of the anti-positivist sentiment of other philosophers of science, including Paul Feyerabend and Norwood Russell Hanson. Shapere noted the book's influence on the philosophical landscape of the time, calling it "a sustained attack on the prevailing image of scientific change as a linear process of ever-increasing knowledge". According to the philosopher Michael Ruse, Kuhn discredited the ahistorical and prescriptive approach to the philosophy of science of Ernest Nagel's The Structure of Science (1961). Kuhn's book sparked a historicist "revolt against positivism" (the so-called "historical turn in philosophy of science" which looked to the history of science as a source of data for developing a philosophy of science), although this may not have been Kuhn's intention; in fact, he had already approached the prominent positivist Rudolf Carnap about having his work published in the International Encyclopedia of Unified Science. The philosopher Robert C. Solomon noted that Kuhn's views have often been suggested to have an affinity to those of Georg Wilhelm Friedrich Hegel. Kuhn's view of scientific knowledge, as expounded in The Structure of Scientific Revolutions, has been compared to the views of the philosopher Michel Foucault.
Sociology
The first field to claim descent from Kuhn's ideas was the sociology of scientific knowledge. Sociologists working within this new field, including Harry Collins and Steven Shapin, used Kuhn's emphasis on the role of non-evidential community factors in scientific development to argue against logical empiricism, which discouraged inquiry into the social aspects of scientific communities. These sociologists expanded upon Kuhn's ideas, arguing that scientific judgment is determined by social factors, such as professional interests and political ideologies.
Barry Barnes detailed the connection between the sociology of scientific knowledge and Kuhn in his book T. S. Kuhn and Social Science. In particular, Kuhn's ideas regarding science occurring within an established framework informed Barnes's own ideas regarding finitism, a theory wherein meaning is continuously changed (even during periods of normal science) by its usage within the social framework.
The Structure of Scientific Revolutions elicited a number of reactions from the broader sociological community. Following the book's publication, some sociologists expressed the belief that the field of sociology had not yet developed a unifying paradigm, and should therefore strive towards homogenization. Others argued that the field was in the midst of normal science, and speculated that a new revolution would soon emerge. Some sociologists, including John Urry, doubted that Kuhn's theory, which addressed the development of natural science, was necessarily relevant to sociological development.
Economics
Developments in the field of economics are often expressed and legitimized in Kuhnian terms. For instance, neoclassical economists have claimed "to be at the second stage [normal science], and to have been there for a very long time – since Adam Smith, according to some accounts (Hollander, 1987), or Jevons according to others (Hutchison, 1978)". In the 1970s, post-Keynesian economists denied the coherence of the neoclassical paradigm, claiming that their own paradigm would ultimately become dominant.
While perhaps less explicit, Kuhn's influence remains apparent in recent economics, for instance in the framing of Olivier Blanchard's paper "The State of Macro" (2008).
Political science
In 1974, The Structure of Scientific Revolutions was ranked as the second most frequently used book in political science courses focused on scope and methods. In particular, Kuhn's theory has been used by political scientists to critique behavioralism, which claims that accurate political statements must be both testable and falsifiable. The book also proved popular with political scientists embroiled in debates about whether a set of formulations put forth by a political scientist constituted a theory, or something else.
The changes that occur in politics, society and business are often expressed in Kuhnian terms, however poor their parallel with the practice of science may seem to scientists and historians of science. The terms "paradigm" and "paradigm shift" have become such notorious clichés and buzzwords that they are sometimes viewed as effectively devoid of content.
Criticisms
The Structure of Scientific Revolutions was soon criticized by Kuhn's colleagues in the history and philosophy of science. In 1965, a special symposium on the book was held at an International Colloquium on the Philosophy of Science that took place at Bedford College, London, and was chaired by Karl Popper. The symposium led to the publication of the symposium's presentations plus other essays, most of them critical, which eventually appeared in an influential volume of essays. Kuhn expressed the opinion that his critics' readings of his book were so inconsistent with his own understanding of it that he was "tempted to posit the existence of two Thomas Kuhns," one the author of his book, the other the individual who had been criticized in the symposium by Professors Popper, Feyerabend, Lakatos, Toulmin and Watkins.
A number of the included essays question the existence of normal science. In his essay, Feyerabend suggests that Kuhn's conception of normal science fits organized crime as well as it does science. Popper expresses distaste with the entire premise of Kuhn's book, writing, "the idea of turning for enlightenment concerning the aims of science, and its possible progress, to sociology or to psychology (or ... to the history of science) is surprising and disappointing."
Concept of paradigm
Stephen Toulmin defined paradigm as "the set of common beliefs and agreements shared between scientists about how problems should be understood and addressed". In his 1972 work, Human Understanding, he argued that a more realistic picture of science than that presented in The Structure of Scientific Revolutions would admit the fact that revisions in science take place much more frequently, and are much less dramatic than can be explained by the model of revolution/normal science. In Toulmin's view, such revisions occur quite often during periods of what Kuhn would call "normal science". For Kuhn to explain such revisions in terms of the non-paradigmatic puzzle solutions of normal science, he would need to delineate what is perhaps an implausibly sharp distinction between paradigmatic and non-paradigmatic science.
Incommensurability of paradigms
In a series of texts published in the early 1970s, Carl R. Kordig asserted a position somewhere between that of Kuhn and the older philosophy of science. His criticism of the Kuhnian position was that the incommensurability thesis was too radical, and that this made it impossible to explain the confrontation of scientific theories that actually occurs. According to Kordig, it is in fact possible to admit the existence of revolutions and paradigm shifts in science while still recognizing that theories belonging to different paradigms can be compared and confronted on the plane of observation. Those who accept the incommensurability thesis do not do so because they admit the discontinuity of paradigms, but because they attribute a radical change in meanings to such shifts.
Kordig maintains that there is a common observational plane. For example, when Kepler and Tycho Brahe are trying to explain the relative variation of the distance of the sun from the horizon at sunrise, both see the same thing (the same configuration is focused on the retina of each individual). This is just one example of the fact that "rival scientific theories share some observations, and therefore some meanings". Kordig suggests that with this approach, he is not reintroducing the distinction between observations and theory in which the former is assigned a privileged and neutral status, but that it is possible to affirm more simply the fact that, even if no sharp distinction exists between theory and observations, this does not imply that there are no comprehensible differences at the two extremes of this polarity.
At a secondary level, for Kordig there is a common plane of inter-paradigmatic standards or shared norms that permit the effective confrontation of rival theories.
In 1973, Hartry Field published an article that also sharply criticized Kuhn's idea of incommensurability. In particular, he took issue with a passage in which Kuhn argues that terms such as "mass" do not share the same referents in Newtonian and Einsteinian physics.
Field takes this idea of incommensurability between the same terms in different theories one step further. Instead of attempting to identify a persistence of the reference of terms in different theories, Field's analysis emphasizes the indeterminacy of reference within individual theories. Field takes the example of the term "mass", and asks what exactly "mass" means in modern post-relativistic physics. He finds that there are at least two different definitions:
Relativistic mass: the mass of a particle is equal to the total energy of the particle divided by the speed of light squared (that is, m = E/c², where E is the total energy). Since the total energy of a particle in relation to one system of reference differs from the total energy in relation to other systems of reference, while the speed of light remains constant in all systems, it follows that the mass of a particle has different values in different systems of reference.
"Real" mass: the mass of a particle is equal to the non-kinetic energy of a particle divided by the speed of light squared (that is, m = E₀/c², where E₀ is the rest energy). Since non-kinetic energy is the same in all systems of reference, and the same is true of the speed of light, it follows that the mass of a particle has the same value in all systems of reference.
Projecting this distinction backwards in time onto Newtonian dynamics, we can formulate the following two hypotheses:
HR: the term "mass" in Newtonian theory denotes relativistic mass.
Hp: the term "mass" in Newtonian theory denotes "real" mass.
According to Field, it is impossible to decide which of these two affirmations is true. Prior to the theory of relativity, the term "mass" was referentially indeterminate. But this does not mean that the term "mass" did not have a different meaning than it now has. The problem is not one of meaning but of reference. The reference of such terms as mass is only partially determined: we do not really know how Newton intended his use of this term to be applied. As a consequence, neither of the two terms fully denotes (refers). It follows that it is improper to maintain that a term has changed its reference during a scientific revolution; it is more appropriate to describe terms such as "mass" as "having undergone a denotational refinement".
In 1974, Donald Davidson objected that the concept of incommensurable scientific paradigms competing with each other is logically inconsistent. In his article Davidson goes well beyond the semantic version of the incommensurability thesis: to make sense of the idea of a language independent of translation requires a distinction between conceptual schemes and the content organized by such schemes. But, Davidson argues, no coherent sense can be made of the idea of a conceptual scheme, and therefore no sense may be attached to the idea of an untranslatable language.
Incommensurability and perception
The close connection between the interpretationalist hypothesis and a holistic conception of beliefs is at the root of the notion of the dependence of perception on theory, a central concept in The Structure of Scientific Revolutions. Kuhn maintained that the perception of the world depends on how the percipient conceives the world: two scientists who witness the same phenomenon and are steeped in two radically different theories will see two different things. According to this view, our interpretation of the world determines what we see.
Jerry Fodor attempts to establish that this theoretical paradigm is fallacious and misleading by demonstrating the impenetrability of perception to the background knowledge of subjects. The strongest case can be based on evidence from experimental cognitive psychology, namely the persistence of perceptual illusions. Knowing that the lines in the Müller-Lyer illusion are equal does not prevent one from continuing to see one line as being longer than the other. This impenetrability of the information elaborated by the mental modules limits the scope of interpretationalism.
In epistemology, for example, the criticism of what Fodor calls the interpretationalist hypothesis accounts for the common-sense intuition (on which naïve physics is based) of the independence of reality from the conceptual categories of the experimenter. If the processes of elaboration of the mental modules are in fact independent of the background theories, then it is possible to maintain the realist view that two scientists who embrace two radically diverse theories see the world exactly in the same manner even if they interpret it differently. The point is that it is necessary to distinguish between observations and the perceptual fixation of beliefs. While it is beyond doubt that the second process involves the holistic relationship between beliefs, the first is largely independent of the background beliefs of individuals.
Other critics, such as Israel Scheffler, Hilary Putnam and Saul Kripke, have focused on the Fregean distinction between sense and reference in order to defend scientific realism. Scheffler contends that Kuhn confuses the meanings of terms such as "mass" with their referents. While their meanings may very well differ, their referents (the objects or entities to which they correspond in the external world) remain fixed.
Subsequent commentary by Kuhn
In 1995 Kuhn argued that the Darwinian metaphor in the book should have been taken more seriously than it had been.
Awards and honors
1998 Modern Library 100 Best Nonfiction: The Board's List (69)
1999 National Review 100 Best Nonfiction Books of the Century (25)
2015 Mark Zuckerberg book club selection for March.
Publication history
Bibliography
See also
Epistemological rupture
Groupthink
Scientific Revolution
Further reading
Wray, K. Brad, ed. (2024). Kuhn's The Structure of Scientific Revolutions at 60. Cambridge University Press.
References
External links
Article on Thomas Kuhn by Alexander Bird
Text of chapter 9 and a postscript at Marxists.org
"Thomas Kuhn, 73; Devised Science Paradigm", obituary by Lawrence Van Gelder, New York Times, 19 June 1996 (archived 7 February 2012).
1962 non-fiction books
American non-fiction books
Books about the history of science
Books by Thomas Kuhn
English-language books
Philosophy of science literature
Science studies
Scientific Revolution
University of Chicago Press books
Raven's Progressive Matrices
Raven's Progressive Matrices (often referred to simply as Raven's Matrices) or RPM is a non-verbal test typically used to measure general human intelligence and abstract reasoning and is regarded as a non-verbal estimate of fluid intelligence. It is one of the most common tests administered to both groups and individuals ranging from 5-year-olds to the elderly. It comprises 60 multiple choice questions, listed in order of increasing difficulty. This format is designed to measure the test taker's reasoning ability, the eductive ("meaning-making") component of Spearman's g (g is often referred to as general intelligence).
The tests were originally developed by John C. Raven in 1936. In each test item, the subject is asked to identify the missing element that completes a pattern. Many patterns are presented in the form of a 6×6, 4×4, 3×3, or 2×2 matrix, giving the test its name.
Problem structure
The questions consist of a visual geometric design with a missing piece, together with six to eight candidate pieces, one of which completes the design.
Raven's Progressive Matrices and Vocabulary tests were originally developed for use in research into the genetic and environmental origins of cognitive ability. Raven thought that the tests commonly in use at that time were cumbersome to administer and the results difficult to interpret. Accordingly, he set about developing simple measures of the two main components of Spearman's g: the ability to think clearly and make sense of complexity (known as eductive ability) and the ability to store and reproduce information (known as reproductive ability).
Raven's tests of both were developed with the aid of what later became known as item response theory.
Raven first published his Progressive Matrices in the United Kingdom in 1938. His three sons established Scotland-based test publisher J C Raven Ltd. in 1972. In 2004, Harcourt Assessment, Inc. a division of Harcourt Education, acquired J C Raven Ltd. Harcourt was later acquired by Pearson PLC.
Versions
The Matrices are available in three different forms for participants of different ability:
Standard Progressive Matrices (RSPM): These were the original form of the matrices, first published in 1938. The booklet comprises five sets (A to E) of 12 items each (e.g., A1 through A12), with items within a set becoming increasingly complex, requiring ever greater cognitive capacity to encode and analyze information. All items are presented in black ink on a white background.
Colored Progressive Matrices (RCPM): Designed for children aged 5 through 11, the elderly, and mentally and physically impaired individuals. This test contains sets A and B from the standard matrices, with a further set of 12 items inserted between the two, as set Ab. Most items are presented on a coloured background to make the test visually stimulating for participants. However, the last few items in set B are presented as black-on-white; in this way, if a subject exceeds the tester's expectations, transition to sets C, D, and E of the standard matrices is eased.
Advanced Progressive Matrices (RAPM): The advanced form of the matrices contains 48 items, presented as one set of 12 (set I) and another of 36 (set II). Items are again presented in black ink on a white background, and become increasingly complex as progress is made through each set. These items are appropriate for adults and adolescents of above-average intelligence.
In addition, "parallel" forms of the standard and coloured progressive matrices were published in 1998. This was to address the problem of the Raven's Matrices being too well known in the general population. Items in the parallel tests have been constructed so that average solution rates to each question are identical for the classic and parallel versions. A revised version of the RSPM – the Standard Progressive Matrices Plus – was published at the same time. This was based on the "parallel" version but, although the test was the same length, it had more difficult items in order to restore the test's ability to differentiate among more able adolescents and young adults that the original RSPM had when it was first published. This new test, developed with the aid of better sampling arrangements and developments in the procedures available to implement the item response theory, has turned out to have exemplary test properties.
Uses
The tests were initially developed for research purposes. Because of their independence of language and reading and writing skills, and the simplicity of their use and interpretation, they quickly found widespread practical application. For example, all entrants to the British armed forces from 1942 onwards took a twenty-minute version of the RSPM, and potential officers took a specially adapted version as part of British War Office Selection Boards. The routine administration of what became the Standard Progressive Matrices to all entrants (conscripts) to many military services throughout the world (including the Soviet Union) continued at least until the present century. It was by bringing together these data that James R. Flynn was able to place the intergenerational increase in scores beyond reasonable doubt. Flynn's path-breaking publications on IQ gains around the world have led to the phenomenon of the gains being known as the Flynn effect. Among Robert L. Thorndike and other researchers who preceded Flynn in finding evidence of IQ score gains was John Raven, reporting on studies with the RPM.
A 2007 study provided evidence that individuals with Asperger syndrome, a high-functioning autism spectrum disorder, score higher than other individuals on Raven's tests. Another 2007 study found that individuals with classic low-functioning autism score higher on Raven's tests than on Wechsler tests. In addition, individuals with classic autism provided correct answers to the Raven's test in less time than individuals without autism, although they erred as often as the latter.
The high IQ societies Intertel and the International Society for Philosophical Enquiry (ISPE) accept the RAPM as a qualification for admission, and so does the International High IQ Society. The Triple Nine Society used to accept the Advanced Progressive Matrices as one of their admission tests. They still accept a raw score of at least 35 out of 36 on Set II of the RAPM if scored before April 2014.
See also
Naglieri Nonverbal Ability Test
Spatial ability
References
Bibliography
Raven, J., Raven, J.C., & Court, J.H. (2003, updated 2004) Manual for Raven's Progressive Matrices and Vocabulary Scales. San Antonio, TX: Harcourt Assessment.
The above Manual is only available to qualified psychologists; Uses and Abuses of Intelligence (see below) is a more generally available source. However, a summary of the contents of the Manual's seven sections, including links to Section 7 (which contains abstracts of hundreds of studies in which the RPM have been used), can be found at https://www.researchgate.net/publication/368919615_Manual_for_Raven's_Progressive_Matrices_and_Vocabulary_Scales_Summary_of_Contents_of_All_Sections.
Raven, J., & Raven, J. (eds.) (2008) Uses and Abuses of Intelligence: Studies Advancing Spearman and Raven's Quest for Non-Arbitrary Metrics. Unionville, New York: Royal Fireworks Press.
External links
Website about Dr. John Raven
Cognitive tests
Intelligence tests
REST
REST (Representational State Transfer) is a software architectural style that was created to guide the design and development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave. The REST architectural style emphasises uniform interfaces, independent deployment of components, the scalability of interactions between them, and creating a layered architecture to promote caching to reduce user-perceived latency, enforce security, and encapsulate legacy systems.
REST has been employed throughout the software industry to create stateless, reliable web-based applications. An application that adheres to the REST architectural constraints may be informally described as RESTful, although this term is more commonly associated with the design of HTTP-based APIs and what are widely considered best practices regarding the "verbs" (HTTP methods) a resource responds to while having little to do with REST as originally formulated—and is often even at odds with the concept.
Principle
The term representational state transfer was introduced and defined in 2000 by computer scientist Roy Fielding in his doctoral dissertation. It means that a server will respond with the representation of a resource (today, it will most often be an HTML, XML or JSON document) and that resource will contain hypermedia links that can be followed to make the state of the system change. Any such request will in turn receive the representation of a resource, and so on.
An important consequence is that the only identifier that needs to be known is the identifier of the first resource requested, and all other identifiers will be discovered. This means that those identifiers can change without the need to inform the client beforehand and that there can be only loose coupling between client and server.
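As an illustration (a minimal sketch rather than an example from Fielding's dissertation; the URL, JSON layout and link names are invented, and the third-party requests library stands in for any HTTP client), a client needs to know only the entry-point URI, because every further identifier arrives inside the representations the server returns:

```python
import requests  # third-party HTTP client (pip install requests); any client would do

ENTRY_POINT = "https://api.example.com/"  # the only URI the client knows in advance (invented)

# The server replies with a representation (assumed JSON here) that embeds
# hypermedia links to further resources, for example:
# {"message": "welcome",
#  "links": {"orders": "https://api.example.com/orders",
#            "customers": "https://api.example.com/customers"}}
representation = requests.get(ENTRY_POINT).json()

# The client discovers the "orders" URI from the representation instead of
# hard-coding it, so the server may change that URI without breaking clients.
orders_uri = representation["links"]["orders"]
orders = requests.get(orders_uri).json()
```

Because the client reads the "orders" URI out of the representation instead of hard-coding it, the server can relocate that resource without breaking existing clients, which is the loose coupling described above.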
History
The Web began to enter everyday use in 1993–1994, when websites for general use started to become available. At the time, there was only a fragmented description of the Web's architecture, and there was pressure in the industry to agree on some standard for the Web interface protocols. For instance, several experimental extensions had been added to the communication protocol (HTTP) to support proxies, and more extensions were being proposed, but there was a need for a formal Web architecture with which to evaluate the impact of these changes.
The W3C and IETF working groups together started work on creating formal descriptions of the Web's three primary standards: URI, HTTP, and HTML. Roy Fielding was involved in the creation of these standards (specifically HTTP 1.0 and 1.1, and URI), and during the next six years he created the REST architectural style, testing its constraints on the Web's protocol standards and using it as a means to define architectural improvements — and to identify architectural mismatches. Fielding defined REST in his 2000 PhD dissertation "Architectural Styles and the Design of Network-based Software Architectures" at UC Irvine.
To create the REST architectural style, Fielding identified the requirements that apply when creating a world-wide network-based application, such as the need for a low entry barrier to enable global adoption. He also surveyed many existing architectural styles for network-based applications, identifying which features are shared with other styles, such as caching and client–server features, and those which are unique to REST, such as the concept of resources. Fielding was trying to both categorise the existing architecture of the current implementation and identify which aspects should be considered central to the behavioural and performance requirements of the Web.
By their nature, architectural styles are independent of any specific implementation, and while REST was created as part of the development of the Web standards, the implementation of the Web does not obey every constraint in the REST architectural style. Mismatches can occur due to ignorance or oversight, but the existence of the REST architectural style means that they can be identified before they become standardised. For example, Fielding identified the embedding of session information in URIs as a violation of the constraints of REST which can negatively affect shared caching and server scalability. HTTP cookies also violated REST constraints because they can become out of sync with the browser's application state, making them unreliable; they also contain opaque data that can be a concern for privacy and security.
Architectural properties
The REST architectural style is designed for network-based applications, specifically client-server applications. But more than that, it is designed for Internet-scale usage, so the coupling between the user agent (client) and the origin server must be as loose as possible to facilitate large-scale adoption.
The strong decoupling of client and server together with the text-based transfer of information using a uniform addressing protocol provided the basis for meeting the requirements of the Web: extensibility, anarchic scalability and independent deployment of components, large-grain data transfer, and a low entry-barrier for content readers, content authors and developers.
The constraints of the REST architectural style affect the following architectural properties:
Performance in component interactions, which can be the dominant factor in user-perceived performance and network efficiency;
Scalability allowing the support of large numbers of components and interactions among components;
Simplicity of a uniform interface;
Modifiability of components to meet changing needs (even while the application is running);
Visibility of communication between components by service agents;
Portability of components by moving program code with the data;
Reliability in the resistance to failure at the system level in the presence of failures within components, connectors, or data.
Architectural constraints
The REST architectural style defines six guiding constraints. When these constraints are applied to the system architecture, it gains desirable non-functional properties, such as performance, scalability, simplicity, modifiability, visibility, portability, and reliability.
The formal REST constraints are as follows:
Client/Server – Clients are separated from servers by a well-defined interface
Stateless – A specific client does not consume server storage when the client is "at rest"
Cache – Responses indicate their own cacheability
Uniform interface
Layered system – A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way
Code on demand (optional) – Servers are able to temporarily extend or customize the functionality of a client by transferring logic to the client that can be executed within a standard virtual machine
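Two of these constraints, statelessness and cacheability, can be observed directly in ordinary HTTP exchanges. The sketch below is illustrative only (invented URL and token, third-party requests library):

```python
import requests  # third-party HTTP client (pip install requests)

# Statelessness: the request carries all context the server needs (here an
# Authorization header); the server stores no per-client session between requests.
response = requests.get(
    "https://api.example.com/orders/42",  # invented URL, for illustration only
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
)

# Cache constraint: the response labels its own cacheability, so the client or
# any intermediary can decide whether it may be reused.
print(response.status_code)
print(response.headers.get("Cache-Control"))  # e.g. "public, max-age=3600"
```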
Uniform interface
The uniform interface constraint is fundamental to the design of any RESTful system. It simplifies and decouples the architecture, which enables each part to evolve independently. The four constraints for this uniform interface are:
Resource identification in requests: Individual resources are identified in requests using URIs. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server could send data from its database as HTML, XML or as JSON—none of which are the server's internal representation.
Resource manipulation through representations: When a client holds a representation of a resource, including any metadata attached, it has enough information to modify or delete the resource's state.
Self-descriptive messages: Each message includes enough information to describe how to process the message. For example, which parser to invoke can be specified by a media type.
Hypermedia as the engine of application state (HATEOAS) – Having accessed an initial URI for the REST application—analogous to a human Web user accessing the home page of a website—a REST client should then be able to use server-provided links dynamically to discover all the available resources it needs. As access proceeds, the server responds with text that includes hyperlinks to other resources that are currently available. There is no need for the client to be hard-coded with information regarding the structure of the server.
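To make the HATEOAS constraint concrete, the following hedged sketch shows a client that navigates purely by the link relations the server supplies; the JSON layout, entry-point URL and relation names are assumptions made for the example rather than part of any standard:

```python
import requests  # third-party HTTP client (pip install requests)

def follow(relations, entry_point="https://api.example.com/"):
    """Walk a chain of link relations, starting from the entry-point resource.

    The client never constructs URIs itself; it only follows the 'links'
    advertised in each representation the server returns.
    """
    representation = requests.get(entry_point).json()
    for relation in relations:
        next_uri = representation["links"][relation]  # discovered, not hard-coded
        representation = requests.get(next_uri).json()
    return representation

# e.g. entry point -> list of orders -> most recent order, driven by server links:
# latest_order = follow(["orders", "latest"])
```

Only the entry point and the meaning of the relation names are agreed in advance; everything else is discovered at run time, which is what lets the server evolve its URI space freely.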
Classification models
Several models have been developed to help classify REST APIs according to their adherence to various principles of REST design, such as
the Richardson Maturity Model
the Classification of HTTP-based APIs
the W S3 maturity model
See also
References
Further reading
Cloud standards
Hypertext Transfer Protocol
Software architecture
Web 2.0 neologisms
PYTHIA
PYTHIA is a computer simulation program for particle collisions at very high energies (see event (particle physics)) in particle accelerators.
History
PYTHIA was originally written in FORTRAN 77; with the 2007 release of PYTHIA 8.1 it was rewritten in C++. Both the Fortran and C++ versions were maintained until 2012 because not all components had been merged into the 8.1 version. However, the latest version already includes new features not available in the Fortran release. PYTHIA is developed and maintained by an international collaboration of physicists, consisting of Christian Bierlich, Nishita Desai, Leif Gellersen, Ilkka Helenius, Philip Ilten, Leif Lönnblad, Stephen Mrenna, Stefan Prestel, Christian Preuss, Torbjörn Sjöstrand, Peter Skands, Marius Utheim and Rob Verheyen.
Features
The following is a list of some of the features PYTHIA is capable of simulating:
Hard and soft interactions
Parton distributions
Initial/final-state parton showers
Multiparton interactions
Fragmentation and decay
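To give a feel for how the generator is driven in practice, the sketch below uses the optional Python interface shipped with recent PYTHIA 8 releases, which mirrors the C++ API (readString, init, next). It assumes the bindings have been built and are importable as pythia8; the settings strings are ordinary PYTHIA configuration keys chosen only for illustration:

```python
import pythia8  # optional Python interface of PYTHIA 8; must be built and on PYTHONPATH

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")  # proton-proton collisions at 13 TeV centre of mass
pythia.readString("HardQCD:all = on")    # switch on hard QCD 2 -> 2 processes
pythia.init()

n_charged = 0
for _ in range(100):                      # generate 100 events
    if not pythia.next():                 # next() returns False if event generation failed
        continue
    for i in range(pythia.event.size()):  # loop over the event record
        particle = pythia.event[i]
        if particle.isFinal() and particle.isCharged():
            n_charged += 1

pythia.stat()                             # print cross-section and error statistics
print("charged final-state particles in 100 events:", n_charged)
```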
See also
Particle physics
Particle decay
References
Further reading
External links
The official PYTHIA page
Monte Carlo particle physics software
Physics software
Software that was rewritten in C++
Wind turbine design
Wind turbine design is the process of defining the form and configuration of a wind turbine to extract energy from the wind. An installation consists of the systems needed to capture the wind's energy, point the turbine into the wind, convert mechanical rotation into electrical power, and other systems to start, stop, and control the turbine.
In 1919, German physicist Albert Betz showed that for a hypothetical ideal wind-energy extraction machine, the fundamental laws of conservation of mass and energy allowed no more than 16/27 (59.3%) of the wind's kinetic energy to be captured. This Betz' law limit can be approached by modern turbine designs which reach 70 to 80% of this theoretical limit.
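In the usual notation (standard wind-energy relations rather than anything specific to a particular turbine), the power extracted from air of density ρ flowing at speed v through the rotor's swept area A is written with a power coefficient Cp, which Betz's argument bounds from above:

```latex
P = \tfrac{1}{2}\,\rho\,A\,v^{3}\,C_{p},
\qquad
C_{p} \le C_{p,\mathrm{Betz}} = \frac{16}{27} \approx 0.593
```

A turbine that reaches 70 to 80% of the Betz limit therefore operates with a power coefficient of roughly 0.42 to 0.47.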
In addition to the blades, design of a complete wind power system must also address the hub, controls, generator, supporting structure and foundation. Turbines must also be integrated into power grids.
Aerodynamics
Blade shape and dimension are determined by the aerodynamic performance required to efficiently extract energy, and by the strength required to resist forces on the blade.
The aerodynamics of a horizontal-axis wind turbine are not straightforward. The air flow at the blades is not the same as that away from the turbine. The way that energy is extracted from the air also causes air to be deflected by the turbine. Wind turbine aerodynamics at the rotor surface exhibit phenomena that are rarely seen in other aerodynamic fields.
Power control
Rotation speed must be controlled for efficient power generation and to keep the turbine components within speed and torque limits. The centrifugal force on the blades increases as the square of the rotation speed, which makes this structure sensitive to overspeed. Because power increases as the cube of the wind speed, turbines must be built to survive much higher wind loads (such as gusts of wind) than those from which they generate power.
A wind turbine must produce power over a range of wind speeds. The cut-in speed is around 3–4 m/s for most turbines, and the cut-out speed around 25 m/s. If the rated wind speed is exceeded, the power has to be limited.
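These operating regions can be summarised in an idealised power curve. The sketch below uses illustrative parameters (a 120 m rotor, Cp = 0.45, cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s); real turbines follow measured curves with smoother transitions:

```python
import math

def ideal_power_kw(v, rho=1.225, rotor_diameter=120.0, cp=0.45,
                   v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Idealised power curve of a pitch-regulated turbine (illustrative values only).

    Below cut-in and above cut-out the turbine produces nothing; between cut-in
    and rated wind speed the power grows with the cube of wind speed; between
    rated and cut-out speed the control system limits output to rated power.
    """
    area = math.pi * (rotor_diameter / 2.0) ** 2            # swept rotor area, m^2
    rated_power_kw = 0.5 * rho * area * cp * v_rated ** 3 / 1000.0

    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v <= v_rated:
        return 0.5 * rho * area * cp * v ** 3 / 1000.0      # cubic region
    return rated_power_kw                                    # power-limited region

# Example: print(ideal_power_kw(8.0), ideal_power_kw(20.0))
```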
A control system involves three basic elements: sensors to measure process variables, actuators to manipulate energy capture and component loading, and control algorithms that apply information gathered by the sensors to coordinate the actuators.
Any wind blowing above the survival speed damages the turbine. The survival speed of commercial wind turbines ranges from 40 m/s (144 km/h, 89 MPH) to 72 m/s (259 km/h, 161 MPH), and is typically around 60 m/s (216 km/h, 134 MPH); some turbines are designed to survive even higher speeds.
Stall
A stall on an airfoil occurs when air passes over it in such a way that the generation of lift rapidly decreases. Usually this is due to a high angle of attack (AOA), but can also result from dynamic effects. The blades of a fixed pitch turbine can be designed to stall in high wind speeds, slowing rotation. This is a simple fail-safe mechanism to help prevent damage. However, other than systems with dynamically controlled pitch, it cannot produce a constant power output over a large range of wind speeds, which makes it less suitable for large scale, power grid applications.
A fixed-speed HAWT (Horizontal Axis Wind Turbine) inherently increases its angle of attack at higher wind speed as the blades speed up. A natural strategy, then, is to allow the blade to stall when the wind speed increases. This technique was successfully used on many early HAWTs. However, the degree of blade pitch tended to increase noise levels.
Vortex generators may be used to control blade lift characteristics. VGs are placed on the airfoil to enhance the lift if they are placed on the lower (flatter) surface or limit the maximum lift if placed on the upper (higher camber) surface.
Furling
Furling works by decreasing the angle of attack, which reduces drag and blade cross-section. One major problem is getting the blades to stall or furl quickly enough in a wind gust. A fully furled turbine blade, when stopped, faces the edge of the blade into the wind.
Loads can be reduced by making a structural system softer or more flexible. This can be accomplished with downwind rotors or with curved blades that twist naturally to reduce angle of attack at higher wind speeds. These systems are nonlinear and couple the structure to the flow field - requiring design tools to evolve to model these nonlinearities.
Standard turbines all furl in high winds. Since furling requires acting against the torque on the blade, it requires some form of pitch angle control, which is achieved with a slewing drive. This drive precisely angles the blade while withstanding high torque loads. In addition, many turbines use hydraulic systems. These systems are usually spring-loaded, so that if hydraulic power fails, the blades automatically furl. Other turbines use an electric servomotor for every blade. They have a battery-reserve in case of grid failure. Small wind turbines (under 50 kW) with variable-pitching generally use systems operated by centrifugal force, either by flyweights or geometric design, and avoid electric or hydraulic controls.
Fundamental gaps exist in pitch control, limiting the reduction of energy costs, according to a report funded by the Atkinson Center for a Sustainable Future. Load reduction is currently focused on full-span blade pitch control, since individual pitch motors are the actuators on commercial turbines. Significant load mitigation has been demonstrated in simulations for blades, tower, and drive train. However, further research is needed to increase energy capture and mitigate fatigue loads.
One control technique for the pitch angle compares the measured power output with the power value at the rated engine speed (the power reference, Ps reference). Pitch control is done with a PI controller. In order to adjust the pitch rapidly enough, the actuator model uses a time constant Tservo, an integrator and limiters, so that the pitch angle stays within the range 0° to 30° and changes at no more than 10°/second.
The reference pitch angle is compared with the actual pitch angle b, and the difference is corrected by the actuator. The reference pitch angle, which comes from the PI controller, passes through a limiter; these restrictions are important to keep the pitch angle physically realistic. Limiting the rate of change is especially important during network faults, because the rate limit determines how quickly the controller can shed aerodynamic energy to avoid acceleration during such faults.
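A highly simplified sketch of such a loop is given below. The gains, servo time constant and time step are illustrative values rather than figures from any particular turbine, and a production pitch controller would add gain scheduling and anti-windup:

```python
class PitchController:
    """PI pitch controller with output and rate limiting and a first-order servo model."""

    def __init__(self, kp=0.5, ki=0.1, t_servo=0.2, dt=0.01):
        self.kp, self.ki = kp, ki      # illustrative PI gains, not values from the text
        self.t_servo = t_servo         # actuator time constant Tservo, in seconds
        self.dt = dt                   # controller time step, in seconds
        self.integral = 0.0            # integrator state of the PI controller
        self.beta = 0.0                # actual pitch angle b, in degrees

    def step(self, power, power_reference):
        """Advance the loop one time step and return the new pitch angle in degrees."""
        error = power - power_reference              # positive when above the power reference
        self.integral += error * self.dt
        beta_ref = self.kp * error + self.ki * self.integral

        # The reference pitch angle from the PI controller passes through a 0-30 degree limiter.
        beta_ref = min(max(beta_ref, 0.0), 30.0)

        # First-order servo response toward the reference, then a 10 deg/s rate limiter.
        d_beta = (beta_ref - self.beta) * self.dt / self.t_servo
        max_step = 10.0 * self.dt
        d_beta = min(max(d_beta, -max_step), max_step)
        self.beta += d_beta
        return self.beta
```

Calling step() once per time step with the measured and reference power drives the pitch angle toward the value that limits power above rated wind speed.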
Other controls
Generator torque
Modern large wind turbines operate at variable speeds. When wind speed falls below the turbine's rated speed, generator torque is used to control the rotor speed to capture as much power as possible. The most power is captured when the tip speed ratio is held constant at its optimum value (typically between 6 and 7). This means that rotor speed increases proportional to wind speed. The difference between the aerodynamic torque captured by the blades and the applied generator torque controls the rotor speed. If the generator torque is lower, the rotor accelerates, and if the generator torque is higher, the rotor slows. Below rated wind speed, the generator torque control is active while the blade pitch is typically held at the constant angle that captures the most power, fairly flat to the wind. Above rated wind speed, the generator torque is typically held constant while the blade pitch is adjusted accordingly.
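A standard textbook formulation of this below-rated strategy (not attributed to any particular manufacturer) commands a generator torque proportional to the square of the rotor speed Ω, with the constant chosen so that the turbine settles at the optimal tip speed ratio λ*:

```latex
\lambda = \frac{\Omega R}{v}, \qquad
P = \tfrac{1}{2}\,\rho\,\pi R^{2}\,v^{3}\,C_{p}(\lambda,\beta), \qquad
T_{\mathrm{gen}} = k\,\Omega^{2}, \quad
k = \frac{\rho\,\pi R^{5}\,C_{p,\max}}{2\,\lambda_{*}^{3}}
```

With this law the rotor accelerates whenever the aerodynamic torque exceeds kΩ² and slows when it falls below it, which pulls the tip speed ratio back toward λ* as the wind varies.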
One technique to control a permanent magnet synchronous motor is field-oriented control. Field-oriented control is a closed loop strategy composed of two current controllers (an inner loop and cascading outer loop) necessary for controlling the torque, and one speed controller.
Constant torque angle control
In this control strategy the d-axis current is kept at zero, while the current vector is aligned with the q axis in order to maintain the torque angle at 90°. This is a common control strategy because only the Iqs current must be controlled. The torque equation of the generator is then a linear equation dependent only on the Iqs current.
So, with Ids = 0 (which the d-axis controller enforces), the electromagnetic torque reduces to the standard form Te = (3/2)·p·λpm·Iqs, where p is the number of pole pairs and λpm is the permanent-magnet flux linkage; the torque is therefore linear in Iqs.
Thus, the complete system consists of the machine-side converter and the cascaded PI controller loops, whose control inputs are the duty ratios mds and mqs of the PWM-regulated converter; the scheme controls the machine side of the wind turbine while driving Ids to zero, so that the torque equation remains linear.
Yawing
Large turbines are typically actively controlled to face the wind direction measured by a wind vane situated on the back of the nacelle. By minimizing the yaw angle (the misalignment between the wind and the turbine pointing direction), power output is maximized and non-symmetrical loads minimized. However, since the wind direction varies, the turbine does not strictly follow the wind and experiences a small yaw angle on average. The power output losses can be approximated as falling with the cube of the cosine of the yaw angle; a sustained 30° misalignment, for example, would reduce output to roughly cos³(30°) ≈ 65% of the aligned value. Particularly at low-to-medium wind speeds, yawing can significantly reduce output, with common wind direction variations reaching 30°. At high wind speeds, the wind direction is less variable.
Electrical braking
Braking a small turbine can be done by dumping energy from the generator into a resistor bank, converting kinetic energy into heat. This method is useful if the kinetic load on the generator is suddenly reduced or is too small to keep the turbine speed within its allowed limit.
Cyclic braking slows the blades, which increases the stalling effect and reduces efficiency. Rotation can thus be kept at a safe speed in faster winds while maintaining (nominal) power output. This method is usually not applied on large, grid-connected wind turbines.
Mechanical braking
A mechanical drum brake or disc brake stops rotation in emergency situations such as extreme gust events. The brake is a secondary means to hold the turbine at rest for maintenance, with a rotor lock system as primary means. Such brakes are usually applied only after blade furling and electromagnetic braking have reduced the turbine speed because mechanical brakes can ignite a fire inside the nacelle if used at full speed. Turbine load increases if the brake is applied at rated RPM.
Turbine size
Turbines come in size classes. The smallest, with power less than 10 kW, are used in homes, farms and remote applications, whereas intermediate wind turbines (10–250 kW) are useful for village power, hybrid systems and distributed power. The world's largest wind turbine as of 2021 was Vestas' V236-15.0 MW turbine. The new design's blades offer the largest swept area in the world, with three blades giving a rotor diameter of 236 m. Ming Yang in China have announced a larger 16 MW design.
For a given wind speed, turbine mass is approximately proportional to the cube of its blade length, while the wind power intercepted is proportional to the square of blade length; doubling the blade length therefore roughly quadruples the power captured but increases mass about eightfold. The maximum blade length of a turbine is limited by strength, stiffness, and transport considerations.
Labor and maintenance costs increase slower than turbine size, so to minimize costs, wind farm turbines are basically limited by the strength of materials, and siting requirements.
Low temperature
Utility-scale wind turbine generators have minimum temperature operating limits that apply in areas with very low temperatures. Turbines must be protected from ice accumulation that can make anemometer readings inaccurate and which, in certain turbine control designs, can cause high structure loads and damage. Some turbine manufacturers offer low-temperature packages at extra cost, which include internal heaters, different lubricants, and different alloys for structural elements. If low temperatures are combined with a low-wind condition, the turbine requires an external supply of power, equivalent to a few percent of its rated output, for internal heating. For example, the St. Leon Wind Farm in Manitoba, Canada, has a total rating of 99 MW and is estimated to need up to 3 MW (around 3% of capacity) of station service power a few days a year during the coldest periods.
Nacelle
The nacelle houses the gearbox and generator connecting the tower and rotor. Sensors detect the wind speed and direction, and motors turn the nacelle into the wind to maximize output.
Gearbox
In conventional wind turbines, the blades spin a shaft that is connected through a gearbox to the generator. The gearbox converts the turning speed of the blades (15 to 20 RPM for a one-megawatt turbine) into the 1,800 (750-3600) RPM that the generator needs to generate electricity. Gearboxes are one of the more expensive components for installing and maintaining wind turbines. Analysts from GlobalData estimate that the gearbox market grew from $3.2bn in 2006 to $6.9bn in 2011. The market leader for Gearbox production was Winergy in 2011. The use of magnetic gearboxes has been explored as a way of reducing maintenance costs.
Generator
For large horizontal-axis wind turbines (HAWT), the generator is mounted in a nacelle at the top of a tower, behind the rotor hub. Older wind turbines generate electricity through asynchronous machines directly connected to the grid. The gearbox reduces generator cost and weight. Commercial generators have a rotor carrying a winding so that a rotating magnetic field is produced inside a set of windings called the stator. While the rotating winding consumes a fraction of a percent of the generator output, adjustment of the field current allows good control over the output voltage.
The rotor's varying output frequency and voltage can be matched to the fixed values of the grid using multiple technologies, such as doubly fed induction generators or full-effect converters, which convert the variable-frequency current to DC and then back to AC using inverters. Although such alternatives require costly equipment and incur power losses, the turbine can capture a significantly larger fraction of the wind energy. Most turbines operate at low voltage (around 660 V), but some offshore turbines (several MW) use 3.3 kV medium voltage.
In some cases, especially when offshore, a large collector transformer converts the wind farm's medium-voltage AC grid to DC and transmits the energy through a power cable to an onshore HVDC converter station.
Hydraulic
Hydraulic wind turbines perform the frequency and torque adjustments of gearboxes via a pressurized hydraulic fluid. Typically, the action of the turbine pressurizes the fluid with a hydraulic pump at the nacelle. Meanwhile, components on the ground can transform this pressure into energy, and recirculate the working fluid. Typically, the working fluid used in this kind of hydrostatic transmission is oil, which serves as a lubricant, reducing losses due to friction in the hydraulic units and allowing for a broad range of operating temperatures. However, other concepts are currently under study, which involve using water as the working fluid because it is abundant and eco-friendly.
Hydraulic turbines provide benefits to both operation and capital costs. They can use hydraulic units with variable displacement to provide a continuously variable transmission that adapts in real time. This decouples generator speed from rotor speed, avoiding stalling and allowing the turbine to operate at an optimum speed and torque. This built-in transmission is how these hydraulic systems avoid the need for a conventional gearbox. Furthermore, hydraulic instead of mechanical power conversion introduces a damping effect on rotation fluctuations, reducing fatigue of the drivetrain and improving turbine structural integrity. Additionally, using a pressurized fluid instead of mechanical components allows the electrical conversion to occur on the ground instead of in the nacelle: this reduces maintenance difficulty and reduces the weight and center of gravity of the turbine. Studies estimate that these benefits may yield a 3.9–18.9% reduction in the levelized cost of power for offshore wind turbines.
Some years ago, Mitsubishi, through its branch Artemis, deployed the Sea Angel, a unique hydraulic wind turbine at the utility scale. The Digital Displacement technology underwent trials on the Sea Angel, a wind turbine rated at 7 MW. This design is capable of adjusting the displacement of the central unit in response to erratic wind velocities, thereby maintaining the optimal efficiency of the system. Still, these systems are newer and in earlier stages of commercialization compared to conventional gearboxes.
Gearless
Gearless wind turbines (also called direct drive) eliminate the gearbox. Instead, the rotor shaft is attached directly to the generator, which spins at the same speed as the blades.
Advantages of permanent magnet direct drive generators (PMDD) over geared generators include increased efficiency, reduced noise, longer lifetime, high torque at low RPM, faster and precise positioning, and drive stiffness. PMDD generators "eliminate the gear-speed increaser, which is susceptible to significant accumulated fatigue torque loading, related reliability issues, and maintenance costs".
To make up for a direct-drive generator's slower rotation rate, the diameter of the generator's rotor is increased so that it can contain more magnets to create the required frequency and power. Gearless wind turbines are often heavier than geared wind turbines. An EU study showed that gearbox reliability is not the main problem in wind turbines. The reliability of direct drive turbines offshore is still not known, given the small sample size.
Experts from Technical University of Denmark estimate that a geared generator with permanent magnets may require 25 kg/MW of the rare-earth element neodymium, while a gearless may use 250 kg/MW.
In December 2011, the US Department of Energy announced a critical shortage of rare-earth elements such as neodymium. China produces more than 95% of rare-earth elements, while Hitachi holds more than 600 patents covering neodymium magnets. Direct-drive turbines require 600 kg of permanent magnet material per megawatt, which translates to several hundred kilograms of rare-earth content per megawatt, as neodymium content is estimated to be 31% of magnet weight. Hybrid drivetrains (intermediate between direct drive and traditional geared) use significantly less rare-earth materials. While permanent magnet wind turbines only account for about 5% of the market outside of China, their market share inside of China is estimated at 25% or higher. In 2011, demand for neodymium in wind turbines was estimated to be 1/5 of that in electric vehicles.
Blades
Blade design
The ratio between the blade tip speed and the wind speed is called the tip-speed ratio. High-efficiency three-blade turbines have tip-speed ratios of 6 to 7. Wind turbines spin at varying speeds (a consequence of their generator design). Use of aluminum and composite materials has contributed to low rotational inertia, which means that newer wind turbines can accelerate quickly if the winds pick up, keeping the tip-speed ratio more nearly constant. Operating closer to their optimal tip-speed ratio during energetic gusts of wind allows wind turbines to improve energy capture from sudden gusts.
Noise increases with tip speed. To increase tip speed without increasing noise would reduce torque into the gearbox and generator, reducing structural loads, thereby reducing cost. The noise reduction is linked to the detailed blade aerodynamics, especially factors that reduce abrupt stalling. The inability to predict stall restricts the use of aggressive aerodynamics. Some blades (mostly on Enercon) have a winglet to increase performance and reduce noise.
A blade can have a lift-to-drag ratio of 120, compared to 70 for a sailplane and 15 for an airliner.
The hub
In simple designs, the blades are directly bolted to the hub and are unable to pitch, which leads to aerodynamic stall above certain windspeeds. In more sophisticated designs, they are bolted to the pitch bearing, which adjusts their angle of attack with the help of a pitch system according to the wind speed. Pitch control is performed by hydraulic or electric systems (battery or ultracapacitor). The pitch bearing is bolted to the hub. The hub is fixed to the rotor shaft, which drives the generator directly or through a gearbox.
Blade count
The number of blades is selected for aerodynamic efficiency, component costs, and system reliability. Noise emissions are affected by the location of the blades upwind or downwind of the tower and the rotor speed. Given that the noise emissions from the blades' trailing edges and tips vary by the 5th power of blade speed, a small increase in tip speed dramatically increases noise.
Wind turbines almost universally use either two or three blades. However, patents present designs with additional blades, such as Chan Shin's multi-unit rotor blade system. Aerodynamic efficiency increases with number of blades but with diminishing return. Increasing from one to two yields a six percent increase, while going from two to three yields an additional three percent. Further increasing the blade count yields minimal improvements and sacrifices too much in blade stiffness as the blades become thinner.
Theoretically, an infinite number of blades of zero width is the most efficient, operating at a high value of the tip speed ratio, but this is not practical.
Component costs affected by blade count are primarily for materials and manufacturing of the turbine rotor and drive train. Generally, the lower the number of blades, the lower the material and manufacturing costs. In addition, fewer blades allow higher rotational speed. Blade stiffness requirements to avoid tower interference limit blade thickness, but only when the blades are upwind of the tower; deflection in a downwind machine increases tower clearance. Fewer blades with higher rotational speeds reduce peak torque in the drive train, resulting in lower gearbox and generator costs.
System reliability is affected by blade count primarily through the dynamic loading of the rotor into the drive train and tower systems. While aligning the wind turbine to changes in wind direction (yawing), each blade experiences a cyclic load at its root end depending on blade position. However, these cyclic loads when combined at the drive train shaft are symmetrically balanced for three blades, yielding smoother operation during yaw. One or two blade turbines can use a pivoting teetered hub to nearly eliminate the cyclic loads into the drive shaft and system during yawing. In 2012, a Chinese 3.6 MW two-blade turbine was tested in Denmark.
Aesthetics are also a factor: the three-bladed rotor is generally rated as more pleasing to look at than a one- or two-bladed rotor.
Blade materials
In general, materials should meet the following criteria:
wide availability and easy processing to reduce cost and maintenance
low weight or density to reduce gravitational forces
high strength to withstand wind and gravitational loading
high fatigue resistance to withstand cyclic loading
high stiffness to ensure stability of the optimal shape and orientation of the blade and clearance with the tower
high fracture toughness
the ability to withstand environmental impacts such as lightning strikes, humidity, and temperature
Metals are undesirable because of their vulnerability to fatigue. Ceramics have low fracture toughness, resulting in early blade failure. Traditional polymers are not stiff enough to be useful, and wood has problems with repeatability, especially considering the blade length. That leaves fiber-reinforced composites, which have high strength and stiffness and low density.
Wood and canvas sails were used on early windmills due to their low price, availability, and ease of manufacture. Smaller blades can be made from light metals such as aluminium. These materials, however, require frequent maintenance. Wood-and-canvas construction limits the airfoil shape to a flat plate, which has a relatively high ratio of drag to force captured (low aerodynamic efficiency) compared with solid airfoils. Construction of solid airfoil designs requires inflexible materials such as metals or composites. Some blades incorporate lightning conductors.
Increasing blade length pushed power generation from the single megawatt range to upwards of 10 megawatts. A larger area effectively increases tip-speed ratio at a given wind speed, thus increasing its energy extraction. Software such as HyperSizer (originally developed for spacecraft design) can be used to improve blade design.
As of 2015 the rotor diameters of onshore wind turbine blades reached 130 meters, while the diameter of offshore turbines reached 170 meters. In 2001, an estimated 50 million kilograms of fiberglass laminate were used in wind turbine blades.
An important goal is to control blade weight. Since blade mass scales as the cube of the turbine radius, gravity loading constrains systems with larger blades. Gravitational loads include axial and tensile/compressive loads (top/bottom of rotation) as well as bending (lateral positions). The magnitude of these loads fluctuates cyclically, and the edgewise moments (see below) are reversed every 180° of rotation. Typical rotor speeds and design lifetimes are around 10 rpm and 20 years, respectively, giving a number of lifetime revolutions on the order of 10^8. Considering the wind as well, turbine blades are expected to go through around 10^9 loading cycles.
Wind is another source of rotor blade loading. Lift causes bending in the flapwise direction (out of the rotor plane), while airflow around the blade causes edgewise bending (in the rotor plane). Flapwise bending involves tension on the pressure (upwind) side and compression on the suction (downwind) side. Edgewise bending involves tension on the leading edge and compression on the trailing edge.
Wind loads are cyclical because of natural variability in wind speed and wind shear (higher speeds at top of rotation).
Failure in ultimate loading of wind-turbine rotor blades exposed to wind and gravity loading is a failure mode that needs to be considered when the rotor blades are designed. The wind speed that causes bending of the rotor blades exhibits a natural variability, and so does the stress response in the rotor blades. Also, the resistance of the rotor blades, in terms of their tensile strengths, exhibits a natural variability. Given the increasing size of production wind turbines, blade failures are increasingly relevant when assessing public safety risks from wind turbines. The most common failure is the loss of a blade or part thereof. This has to be considered in the design.
In light of these failure modes and increasingly larger blade systems, researchers seek cost-effective materials with higher strength-to-mass ratios.
Polymer
The majority of commercialized wind turbine blades are made from fiber-reinforced polymers (FRPs), which are composites consisting of a polymer matrix and fibers. The long fibers provide longitudinal stiffness and strength, and the matrix provides fracture toughness, delamination strength, out-of-plane strength, and stiffness. Material indices based on maximizing power efficiency, high fracture toughness, fatigue resistance, and thermal stability are highest for glass and carbon fiber reinforced plastics (GFRPs and CFRPs).
In turbine blades, matrices such as thermosets or thermoplastics are used; as of 2017, thermosets are more common. These allow for the fibers to be bound together and add toughness. Thermosets make up 80% of the market, as they have lower viscosity, and also allow for low temperature cure, both features contributing to ease of processing during manufacture. Thermoplastics offer recyclability that the thermosets do not, however their processing temperature and viscosity are much higher, limiting the product size and consistency, which are both important for large blades. Fracture toughness is higher for thermoplastics, but the fatigue behavior is worse.
Manufacturing blades in the 40 to 50-metre range involves proven fiberglass composite fabrication techniques. Manufacturers such as Nordex SE and GE Wind use an infusion process. Other manufacturers vary this technique, some including carbon and wood with fiberglass in an epoxy matrix. Other options include pre-impregnated ("prepreg") fiberglass and vacuum-assisted resin transfer moulding. Each of these options uses a glass-fiber reinforced polymer composite constructed with differing complexity. Perhaps the largest issue with open-mould, wet systems is the emissions associated with the volatile organic compounds ("VOCs") released. Preimpregnated materials and resin infusion techniques contain all VOCs, however these contained processes have their challenges, because the production of thick laminates necessary for structural components becomes more difficult. In particular, the preform resin permeability dictates the maximum laminate thickness; also, bleeding is required to eliminate voids and ensure proper resin distribution. One solution to resin distribution is to use partially impregnated fiberglass. During evacuation, the dry fabric provides a path for airflow and, once heat and pressure are applied, the resin may flow into the dry region, resulting in an evenly impregnated laminate structure.
Epoxy
Epoxy-based composites have environmental, production, and cost advantages over other resin systems. Epoxies also allow shorter cure cycles, increased durability, and improved surface finish. Prepreg operations further reduce processing time over wet lay-up systems. As turbine blades passed 60 metres, infusion techniques became more prevalent, because traditional resin transfer moulding injection times are too long compared to resin set-up time, limiting laminate thickness. Injection forces resin through a thicker ply stack, thus depositing the resin in the laminate structure before gelation occurs. Specialized epoxy resins have been developed to customize lifetimes and viscosity.
Carbon fiber-reinforced load-bearing spars can reduce weight and increase stiffness. Using carbon fibers in 60-metre turbine blades is estimated to reduce total blade mass by 38% and decrease cost by 14% compared to 100% fiberglass. Carbon fibers have the added benefit of reducing the thickness of fiberglass laminate sections, further addressing the problems associated with resin wetting of thick lay-up sections. Wind turbines benefit from the trend of decreasing carbon fiber costs.
Although glass and carbon fibers have many optimal qualities, their downsides include the fact that high filler fraction (10-70 wt%) causes increased density as well as microscopic defects and voids that can lead to premature failure.
Carbon nanotubes
Carbon nanotubes (CNTs) can reinforce polymer-based nanocomposites. CNTs can be grown or deposited on the fibers or added into polymer resins as a matrix for FRP structures. Using nanoscale CNTs as filler instead of traditional microscale filler (such as glass or carbon fibers) results in CNT/polymer nanocomposites, for which the properties can be changed significantly at low filler contents (typically < 5 wt%). They have low density and improve the elastic modulus, strength, and fracture toughness of the polymer matrix. The addition of CNTs to the matrix also reduces the propagation of interlaminar cracks.
Research on a low-cost carbon fiber (LCCF) at Oak Ridge National Laboratory gained attention in 2020, because it can mitigate the structural damage from lightning strikes. On glass fiber wind turbines, lightning strike protection (LSP) is usually added on top, but this is effectively deadweight in terms of structural contribution. Using conductive carbon fiber can avoid adding this extra weight.
Research
Some polymer composites feature self-healing properties. Since the blades of the turbine form cracks from fatigue due to repetitive cyclic stresses, self-healing polymers are attractive for this application, because they can improve reliability and buffer various defects such as delamination. Embedding paraffin wax-coated copper wires in a fiber reinforced polymer creates a network of tubes. Using a catalyst, these tubes and dicyclopentadiene (DCPD) then react to form a thermosetting polymer, which repairs the cracks as they form in the material. As of 2019, this approach is not yet commercial.
Further improvement is possible through the use of carbon nanofibers (CNFs) in the blade coatings. A major problem in desert environments is erosion of the leading edges of blades by sand-laden wind, which increases roughness and decreases aerodynamic performance. The particle erosion resistance of fiber-reinforced polymers is poor when compared to metallic materials and elastomers. Replacing glass fiber with CNF on the composite surface greatly improves erosion resistance. CNFs provide good electrical conductivity (important for lightning strikes), high damping ratio, and good impact-friction resistance.
For wind turbines, especially those offshore, or in wet environments, base surface erosion also occurs. For example, in cold climates, ice can build up on the blades and increase roughness. At high speeds, this same erosion impact can occur from rainwater. A useful coating must have good adhesion, temperature tolerance, weather tolerance (to resist erosion from salt, rain, sand, etc.), mechanical strength, ultraviolet light tolerance, and have anti-icing and flame retardant properties. Along with this, the coating should be cheap and environmentally friendly.
Superhydrophobic surfaces (SHS) cause water droplets to bead and roll off the blades. SHS prevents ice formation down to about −25 °C, as it changes the ice formation process; specifically, small ice islands form on SHS, as opposed to a large ice front. Further, due to the lowered contact area of the hydrophobic surface, aerodynamic forces on the blade allow these islands to glide off the blade, maintaining proper aerodynamics. SHS can be combined with heating elements to further prevent ice formation.
Lightning
Lightning damage over the course of a 25-year lifetime ranges from surface-level scorching and cracking of the laminate material to ruptures in the blade or full separation of the adhesives that hold the blade together. It is most common to observe lightning strikes on the tips of the blades, especially in rainy weather, due to the embedded copper wiring there. The most common countermeasure, especially in non-conducting blade materials like GFRPs and CFRPs, is to add lightning "arresters": metallic wires that conduct the strike to ground, bypassing the blade material and gearbox entirely.
Blade repair
Wind turbine blades typically require repair after 2–5 years. Notable causes of blade damage include manufacturing defects, transportation, assembly, installation, lightning strikes, environmental wear, thermal cycling, leading edge erosion, and fatigue. Because of the blades' composite materials and function, repair techniques from aerospace applications often apply directly or provide a basis for basic repairs.
Depending on the nature of the damage, the approach of blade repairs can vary. Erosion repair and protection includes coatings, tapes, or shields. Structural repairs require bonding or fastening new material to the damaged area. Nonstructural matrix cracks and delaminations require fills and seals or resin injections. If ignored, minor cracks or delaminations can propagate and create structural damage.
Four zones have been identified with their respective repair needs:
Zone 1- the blade's leading edge. Requires erosion or crack repair.
Zone 2- close to the tip but behind the leading edge. Requires aeroelastic semi-structural repair.
Zone 3- Middle area behind the leading edge. Requires erosion repair.
Zone 4- Root and near root of the blade. Requires semi-structural or structural repairs.
After the past few decades of rapid wind expansion across the globe, wind turbines are aging. This aging brings operation and maintenance (O&M) costs with it, which increase as turbines approach their end of life. If damage to blades is not caught in time, power production and blade lifespan are reduced. Estimates project that 20–25% of the total levelized cost per kWh produced stems from blade O&M alone.
Blade recycling
The Global Wind Energy Council (GWEC) predicted that wind energy will supply 28.5% of global energy by 2030. This requires a newer and larger fleet of more efficient turbines and the corresponding decommissioning of older ones. Based on a European Wind Energy Association study, in 2010 between 110 and 140 kilotonnes of composites were consumed to manufacture blades. The majority of the blade material ends up as waste, and requires recycling. As of 2020, most end-of-use blades are stored or sent to landfills rather than recycled. Typically, glass-fiber-reinforced polymers (GFRPs) comprise around 70% of the laminate material in the blade. GFRPs are not combustible, and so hinder the incineration of combustible materials. Therefore, conventional recycling methods are inappropriate. Depending on whether individual fibers are to be recovered, GFRP recycling may involve:
Mechanical recycling: This method doesn't recover individual fibers. Initial processes involve shredding, crushing, or milling. The crushed pieces are then separated into fiber-rich and resin-rich fractions. These fractions are ultimately incorporated into new composites either as fillers or reinforcements.
Chemical processing/pyrolysis: Thermal decomposition of the composites recovers individual fibers. For pyrolysis, the material is heated up to 500 °C in an environment without oxygen, causing it to break down into lower-weight organic substances and gaseous products. The glass fibers generally lose 50% of their strength and can be downcycled for fiber reinforcement applications in paints or concrete. This can recover up to approximately 19 MJ/kg at relatively high cost. It requires mechanical pre-processing, similar to that involved in purely mechanical recycling.
Direct structural recycling of composites: The general idea is to reuse the composite as is, without altering its chemical properties, which can be achieved especially for larger composite material parts by partitioning them into pieces that can be used directly in other applications.
Start-up company Global Fiberglass Solutions claimed in 2020 that it had a method to process blades into pellets and fiber boards for use in flooring and walls. The company started producing samples at a plant in Sweetwater, Texas.
Tower
Height
Wind velocities increase at higher altitudes due to surface aerodynamic drag (by land or water surfaces) and air viscosity. The variation in velocity with altitude, called wind shear, is most dramatic near the surface. Typically, the variation follows the wind profile power law, which predicts that wind speed rises proportionally to the seventh root of altitude. Doubling the altitude of a turbine, then, increases the expected wind speeds by 10% and the expected power by 34%. To avoid buckling, doubling the tower height generally requires doubling the tower diameter, increasing the amount of material by a factor of at least four.
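The scaling claims in the preceding paragraph can be checked numerically with a minimal sketch; the reference height and wind speed below are arbitrary assumptions for illustration, while the 1/7 exponent is the one named in the text.

```python
# Illustrative check of the 1/7 wind-profile power law described above.
# The reference height and speed are arbitrary assumptions for the example.

def wind_speed_at(height_m, ref_height_m=50.0, ref_speed_ms=7.0, alpha=1.0 / 7.0):
    """Estimate wind speed at a given height from v ~ (h / h_ref) ** alpha."""
    return ref_speed_ms * (height_m / ref_height_m) ** alpha

v_50 = wind_speed_at(50.0)
v_100 = wind_speed_at(100.0)

speed_gain = v_100 / v_50 - 1.0          # about +10%
power_gain = (v_100 / v_50) ** 3 - 1.0   # power scales with speed cubed: about +34%

print(f"speed gain on doubling height: {speed_gain:.1%}")
print(f"power gain on doubling height: {power_gain:.1%}")
```

Running this reproduces the roughly 10% speed and 34% power increases quoted above for a doubling of hub height.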
During the night, or when the atmosphere becomes stable, wind speed close to the ground usually subsides, whereas at turbine hub altitude it does not decrease that much or may even increase. As a result, the wind speed is higher and a turbine will produce more power than expected from the 1/7 power law: doubling the altitude may increase wind speed by 20% to 60%. A stable atmosphere is caused by radiative cooling of the surface and is common in a temperate climate: it usually occurs when there is a (partly) clear sky at night. When the (high altitude) wind is strong (a 10-meter wind speed higher than approximately 6 to 7 m/s) the stable atmosphere is disrupted because of friction turbulence and the atmosphere turns neutral. A daytime atmosphere is either neutral (no net radiation; usually with strong winds and heavy clouding) or unstable (rising air because of ground heating by the sun). In these conditions the 1/7 power law is a good approximation of the wind profile. Indiana was rated as having a wind capacity of 30,000 MW, but raising the expected turbine height from 50 m to 70 m raised the estimated wind capacity to 40,000 MW, and it could be double that at 100 m.
For HAWTs, tower heights approximately two to three times the blade length balance material costs of the tower against better utilisation of the more expensive active components.
Road restrictions make transport of tower sections with a diameter of more than 4.3 m difficult. Swedish analyses showed that the bottom blade tip must be at least 30 m above the tree tops. A 3 MW turbine may increase output from 5,000 MWh to 7,700 MWh per year by raising hub height from 80 to 125 meters. A tower profile made of connected shells rather than cylinders can have a larger diameter and still be transportable. A 100 m prototype tower with TC bolted 18 mm 'plank' shells at the wind turbine test center Høvsøre in Denmark was certified by Det Norske Veritas, with a Siemens nacelle. Shell elements can be shipped in standard 12 m shipping containers.
As of 2003, tower height for typical modern wind turbine installations was limited mainly by the availability of cranes. This led to proposals for "partially self-erecting wind turbines" that, for a given available crane, allow taller towers that locate a turbine in stronger and steadier winds, and "self-erecting wind turbines" that could be installed without cranes.
Materials
Currently, the majority of wind turbines are supported by conical tubular steel towers. These towers represent 30%–65% of the turbine weight and therefore account for a large percentage of transport costs. The use of lighter tower materials could reduce the overall transport and construction cost, as long as stability is maintained. Higher-grade S500 steel costs 20%–25% more than S355 steel (standard structural steel), but it requires 30% less material because of its improved strength. Therefore, building wind turbine towers from S500 steel offers savings in weight and cost.
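A rough back-of-envelope calculation illustrates the trade-off just described; the price premium and material saving are taken from the paragraph above, while the baseline tower mass and unit price are invented purely for illustration.

```python
# Back-of-envelope comparison of tower material cost for the two steel grades
# discussed above. The 25% price premium and 30% material saving come from the
# text; the baseline tower mass and unit price are arbitrary illustrative values.

baseline_mass_t = 200.0                          # assumed tower mass, tonnes
baseline_price_per_t = 1.0                       # normalized price per tonne

s500_price_per_t = baseline_price_per_t * 1.25   # upper end of the quoted premium
s500_mass_t = baseline_mass_t * (1.0 - 0.30)     # 30% less material

baseline_cost = baseline_mass_t * baseline_price_per_t
s500_cost = s500_mass_t * s500_price_per_t

print(f"relative material cost with S500: {s500_cost / baseline_cost:.2f}")  # ~0.88
```

Under these assumptions the higher-grade tower would cost roughly 12% less in material despite the higher price per tonne.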
Another disadvantage of conical steel towers is that they cannot easily meet the requirements of wind turbines taller than 90 meters. High-performance concrete may allow greater tower heights and longer lifetimes. A hybrid of prestressed concrete and steel improves performance over standard tubular steel at tower heights of 120 meters. Concrete also allows small precast sections to be assembled on site. One downside of concrete towers is the higher emissions during concrete production. However, the overall environmental impact should be positive if concrete towers can double the wind turbine lifetime.
Wood is another alternative: a 100-metre tower supporting a 1.5 MW turbine operates in Germany. The wood tower shares the transportation benefits of the segmented steel shell tower, but without the steel. A 2 MW turbine on a wooden tower started operating in Sweden in 2023.
Another approach is to form the tower on site by spiral-welding rolled sheet steel. Towers of any height and diameter can be formed this way, eliminating restrictions driven by transport requirements. A factory can be built in one month. The developer claims 80% labor savings over conventional approaches.
Grid connection
Grid-connected wind turbines, until the 1970s, were fixed-speed. As recently as 2003, nearly all grid-connected wind turbines operated at constant speed (synchronous generators) or within a few percent of constant speed (induction generators). As of 2011, many turbines used fixed-speed induction generators (FSIG). By then, most newly connected turbines were variable speed.
Early control systems were designed for peak power extraction, also called maximum power point tracking—they attempted to pull the maximum power from a given wind turbine under the current wind conditions. More recent systems deliberately pull less than maximum power in most circumstances, in order to provide other benefits, which include:
Spinning reserves to produce more power when needed—such as when some other generator drops from the grid
Variable-speed turbines can transiently produce slightly more power than wind conditions support, by storing some energy as kinetic energy (accelerating during brief gusts of faster wind) and later converting that kinetic energy to electric energy (decelerating), either when more power is needed or to compensate for variable wind speeds.
Damping (electrical) subsynchronous resonances in the grid
Damping (mechanical) tower resonances
The generator produces alternating current (AC). The most common method in large modern turbines is to use a doubly fed induction generator directly connected to the grid. Some turbines drive an AC/AC converter—which converts the AC to direct current (DC) with a rectifier and then back to AC with an inverter—in order to match grid frequency and phase.
A useful technique to connect a permanent magnet synchronous generator (PMSG) to the grid is via a back-to-back converter. Control schemes can achieve unity power factor in the connection to the grid. In that way the wind turbine does not consume reactive power, which is the most common problem with turbines that use induction machines. This leads to a more stable power system. Moreover, with different control schemes a PMSG turbine can provide or consume reactive power. So, it can work as a dynamic capacitor/inductor bank to help with grid stability.
The diagram shows the control scheme for a unity power factor:
Reactive power regulation consists of one PI controller in order to achieve operation with unity power factor (i.e. Qgrid = 0). IdN has to be regulated to reach zero at steady state (IdNref = 0).
The complete system of the grid side converter and the cascaded PI controller loops is displayed in the figure.
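The kind of regulation described above can be sketched with a minimal discrete-time PI loop. The gains, sample time, and the crude first-order plant below are assumptions chosen only to make the example run, not values from any real turbine controller.

```python
# A minimal discrete-time sketch of the regulation described above: a PI
# controller drives the d-axis grid current IdN to its reference of zero so the
# converter exchanges no reactive power with the grid (unity power factor).
# Gains, sample time, and the stand-in first-order plant are illustrative only.

kp, ki, dt = 2.0, 20.0, 1e-3     # assumed PI gains and sample time (s)
i_dn, integral = 0.5, 0.0        # start with some reactive current error
i_dn_ref = 0.0                   # unity-power-factor target (Qgrid = 0)

for _ in range(1000):            # simulate 1 s
    error = i_dn_ref - i_dn
    integral += error * dt
    u = kp * error + ki * integral          # controller output (voltage command)
    i_dn += (u - i_dn) * dt * 50.0          # stand-in first-order current response

print(f"IdN after 1 s: {i_dn:.4f}")          # approaches 0, i.e. unity power factor
```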
Construction
As wind turbine usage has increased, so have companies that assist in the planning and construction of wind turbines. Most often, turbine parts are shipped via sea or rail, and then via truck to the installation site. Due to the massive size of the components involved, companies usually need to obtain transportation permits and ensure that the chosen trucking route is free of potential obstacles such as overpasses, bridges, and narrow roads. Groups known as "reconnaissance teams" will scout the way up to a year in advance as they identify problematic roads, cut down trees, and relocate utility poles. Turbine blades continue to increase in size, sometimes necessitating brand new logistical plans, as previously used routes may not allow a larger blade. Specialized vehicles known as Schnabel trailers are custom-designed to load and transport turbine sections: tower sections can be loaded without a crane and the rear end of the trailer is steerable, allowing for easier maneuvering. Drivers must be specially trained.
Foundations
Wind turbines, by their nature, are very tall, slender structures, and this can cause a number of issues when the structural design of the foundations is considered. The foundations for a conventional engineering structure are designed mainly to transfer the vertical load (dead weight) to the ground, generally allowing a comparatively unsophisticated arrangement to be used. However, in the case of wind turbines, the force of the wind's interaction with the rotor at the top of the tower creates a strong tendency to tip the wind turbine over. This loading regime causes large moment loads to be applied to the foundations of a wind turbine. As a result, considerable attention needs to be given when designing the footings to ensure that the foundation will resist this tipping tendency.
One of the most common foundations for offshore wind turbines is the monopile, a single large-diameter (4 to 6 metres) tubular steel pile driven to a depth of 5–6 times its diameter into the seabed. The cohesion of the soil and the friction between the pile and the soil provide the necessary structural support for the wind turbine.
In onshore turbines the most common type of foundation is a gravity foundation, where a large mass of concrete spread out over a large area is used to resist the turbine loads. Wind turbine size and type, wind conditions, and soil conditions at the site are all determining factors in the design of the foundation. Prestressed piles or rock anchors are alternative foundation designs that use much less concrete and steel.
Costs
A wind turbine is a complex and integrated system. Structural elements comprise the majority of the weight and cost. All parts of the structure must be inexpensive, lightweight, durable, and manufacturable, surviving variable loading and environmental conditions. Turbine systems with fewer failures require less maintenance, are lighter and last longer, reducing costs.
The cost of the major parts of a turbine divides roughly as: tower 22%, blades 18%, gearbox 14%, generator 8%.
Specification
Turbine design specifications contain a power curve and availability guarantee. The wind resource assessment makes it possible to calculate commercial viability. A typical operating temperature range is specified; in areas with extreme climate (like Inner Mongolia or Rajasthan) climate-specific versions are required.
Wind turbines can be designed and validated according to IEC 61400 standards.
RDS-PP (Reference Designation System for Power Plants) is a standardized system used worldwide to create a structured hierarchy of wind turbine components. This facilitates turbine maintenance and operating-cost management, and is used during all stages of a turbine's creation.
See also
Brushless wound-rotor doubly fed electric machine
Floating wind turbine
Vertical-axis wind turbine
Wind-turbine aerodynamics
Copper in renewable energy, section Wind
Unconventional wind turbines
References
Further reading
Robert Gasch, Jochen Twele (eds.), Wind power plants. Fundamentals, design, construction and operation. Springer, 2012.
Erich Hau, Wind turbines: fundamentals, technologies, application, economics. Springer, 2013 (preview on Google Books).
Siegfried Heier, Grid integration of wind energy conversion systems. Wiley, 2006.
Peter Jamieson, Innovation in Wind Turbine Design. Wiley & Sons, 2011.
David Spera (ed.), Wind Turbine Technology: Fundamental Concepts in Wind Turbine Engineering, Second Edition. ASME Press, 2009.
Alois Schaffarczyk (ed.), Understanding wind power technology. Wiley & Sons, 2014.
Hermann-Josef Wagner, Jyotirmay Mathur, Introduction to wind energy systems. Basics, technology and operation. Springer, 2013.
External links
Offshore Wind Turbines - Installation and Operation of Turbines
Department of Energy- Energy Efficiency and Renewable Energy
RenewableUK - Wind Energy Reference and FAQs
How is Wind turbine made
Wind turbines
Otto cycle
An Otto cycle is an idealized thermodynamic cycle that describes the functioning of a typical spark ignition piston engine. It is the thermodynamic cycle most commonly found in automobile engines.
The Otto cycle is a description of what happens to a gas as it is subjected to changes of pressure, temperature, volume, addition of heat, and removal of heat. The gas that is subjected to those changes is called the system. The system, in this case, is defined to be the fluid (gas) within the cylinder. Conversely, by describing the changes that take place within the system it also describes the system's effect on the environment. The purpose of the Otto cycle is to study the production of net work from the system that can propel a vehicle and its occupants in the environment.
The Otto cycle is constructed from:
Top and bottom of the loop: a pair of quasi-parallel, isentropic processes (frictionless, adiabatic, reversible).
Left and right sides of the loop: a pair of parallel isochoric processes (constant volume).
The isentropic process of compression or expansion implies that there will be no inefficiency (loss of mechanical energy) and no transfer of heat into or out of the system during that process. The cylinder and piston are assumed to be impermeable to heat during that time. Work is performed on the system during the lower isentropic compression process. Heat flows into the Otto cycle through the left pressurizing process and some of it flows back out through the right depressurizing process. The summation of the work added to the system plus the heat added minus the heat removed yields the net mechanical work generated by the system.
Processes
The processes are described by:
Process 0–1 a mass of air is drawn into the piston/cylinder arrangement at constant pressure.
Process 1–2 is an adiabatic (isentropic) compression of the charge as the piston moves from bottom dead center (BDC) to top dead center (TDC).
Process 2–3 is a constant-volume heat transfer to the working gas from an external source while the piston is at top dead center. This process is intended to represent the ignition of the fuel-air mixture and the subsequent rapid burning.
Process 3–4 is an adiabatic (isentropic) expansion (power stroke).
Process 4–1 completes the cycle by a constant-volume process in which heat is rejected from the air while the piston is at bottom dead center.
Process 1–0 the mass of air is released to the atmosphere in a constant pressure process.
The Otto cycle consists of isentropic compression, heat addition at constant volume, isentropic expansion, and rejection of heat at constant volume. In the case of a four-stroke Otto cycle, technically there are two additional processes: one for the exhaust of waste heat and combustion products at constant pressure (isobaric), and one for the intake of cool oxygen-rich air also at constant pressure; however, these are often omitted in a simplified analysis. Even though those two processes are critical to the functioning of a real engine, wherein the details of heat transfer and combustion chemistry are relevant, for the simplified analysis of the thermodynamic cycle, it is more convenient to assume that all of the waste-heat is removed during a single volume change.
History
The four-stroke engine was first patented by Alphonse Beau de Rochas in 1861. Earlier, in about 1854–57, two Italians (Eugenio Barsanti and Felice Matteucci) invented an engine that was rumored to be very similar, but the patent was lost.
The first person to build a working four-stroke engine, a stationary engine using a coal gas-air mixture for fuel (a gas engine), was German engineer Nicolaus Otto. This is why the four-stroke principle today is commonly known as the Otto cycle and four-stroke engines using spark plugs often are called Otto engines.
Processes
The cycle has four parts: a mass containing a mixture of fuel and oxygen is drawn into the cylinder by the descending piston, it is compressed by the piston rising, the mass is ignited by a spark releasing energy in the form of heat, the resulting gas is allowed to expand as it pushes the piston down, and finally the mass is exhausted as the piston rises a second time. As the piston is capable of moving along the cylinder, the volume of the gas changes with its position in the cylinder. The compression and expansion processes induced on the gas by the movement of the piston are idealized as reversible, i.e., no useful work is lost through turbulence or friction and no heat is transferred to or from the gas during those two processes. After the expansion is completed in the cylinder, the remaining heat is extracted and finally the gas is exhausted to the environment. Mechanical work is produced during the expansion process and some of that is used to compress the air mass of the next cycle. The mechanical work produced minus that used for the compression process is the net work gained and that can be used for propulsion or for driving other machines. Alternatively, the net work gained is the difference between the heat produced and the heat removed.
Process 0–1 intake stroke (blue shade)
A mass of air (working fluid) is drawn into the cylinder, from 0 to 1, at atmospheric pressure (constant pressure) through the open intake valve, while the exhaust valve is closed during this process. The intake valve closes at point 1.
Process 1–2 compression stroke (B on diagrams)
Piston moves from crank end (BDC, bottom dead centre and maximum volume) to cylinder head end (TDC, top dead centre and minimum volume) as the working gas with initial state 1 is compressed isentropically to state point 2, through the compression ratio r = V1/V2. Mechanically this is the isentropic compression of the air/fuel mixture in the cylinder, also known as the compression stroke. This isentropic process assumes that no mechanical energy is lost due to friction and no heat is transferred to or from the gas, hence the process is reversible. The compression process requires that mechanical work be added to the working gas. Generally the compression ratio is around 9–10:1 for a typical engine.
Process 2–3 ignition phase (C on diagrams)
The piston is momentarily at rest at TDC. During this instant, which is known as the ignition phase, the air/fuel mixture remains in a small volume at the top of the compression stroke. Heat is added to the working fluid by the combustion of the injected fuel, with the volume essentially being held constant. The pressure rises, and the ratio P3/P2 is called the "explosion ratio".
Process 3–4 expansion stroke (D on diagrams)
The increased high pressure exerts a force on the piston and pushes it towards the BDC. Expansion of the working fluid takes place isentropically and work is done by the system on the piston. The volume ratio V4/V3 is called the "isentropic expansion ratio". (For the Otto cycle it is the same as the compression ratio r.) Mechanically this is the expansion of the hot gaseous mixture in the cylinder, known as the expansion (power) stroke.
Process 4–1 idealized heat rejection (A on diagrams)
The piston is momentarily at rest at BDC. The working gas pressure drops instantaneously from point 4 to point 1 during a constant volume process as heat is removed to an idealized external sink that is brought into contact with the cylinder head. In modern internal combustion engines, the heat-sink may be surrounding air (for low powered engines), or a circulating fluid, such as coolant. The gas has returned to state 1.
Process 1–0 exhaust stroke
The exhaust valve opens at point 1. As the piston moves from "BDC" (point 1) to "TDC" (point 0) with the exhaust valve opened, the gaseous mixture is vented to the atmosphere and the process starts anew.
Cycle analysis
In this process 1–2 the piston does work on the gas and in process 3–4 the gas does work on the piston during those isentropic compression and expansion processes, respectively. Processes 2–3 and 4–1 are isochoric processes; heat is transferred into the system from 2—3 and out of the system from 4–1 but no work is done on the system or extracted from the system during those processes. No work is done during an isochoric (constant volume) process because addition or removal of work from a system requires the movement of the boundaries of the system; hence, as the cylinder volume does not change, no shaft work is added to or removed from the system.
Four different equations are used to describe those four processes. A simplification is made by assuming changes of the kinetic and potential energy that take place in the system (mass of gas) can be neglected and then applying the first law of thermodynamics (energy conservation) to the mass of gas as it changes state as characterized by the gas's temperature, pressure, and volume.
During a complete cycle, the gas returns to its original state of temperature, pressure and volume, hence the net internal energy change of the system (gas) is zero. As a result, the energy (heat or work) added to the system must be offset by energy (heat or work) that leaves the system. In the analysis of thermodynamic systems, the convention is to account energy that enters the system as positive and energy that leaves the system is accounted as negative.
Equation 1a.
During a complete cycle, the net change of energy of the system is zero:
The above states that the system (the mass of gas) returns to the original thermodynamic state it was in at the start of the cycle.
Where is energy added to the system from 1–2–3 and is energy removed from the system from 3–4–1. In terms of work and heat added to the system
Equation 1b:
Each term of the equation can be expressed in terms of the internal energy of the gas at each point in the process:
The energy balance Equation 1b becomes
To illustrate the example we choose some values to the points in the illustration:
These values are arbitrarily but rationally selected. The work and heat terms can then be calculated.
The energy added to the system as work during the compression from 1 to 2 is
The energy added to the system as heat from point 2 to 3 is
The energy removed from the system as work during the expansion from 3 to 4 is
The energy removed from the system as heat from point 4 to 1 is
The energy balance is
Note that energy added to the system is counted as positive and energy leaving the system is counted as negative and the summation is zero as expected for a complete cycle that returns the system to its original state.
From the energy balance the work out of the system is:
The net energy out of the system as work is -1, meaning the system has produced one net unit of energy that leaves the system in the form of work.
The net heat out of the system is:
Energy added to the system as heat is counted as positive. From the above it appears as if the system gained one unit of heat. This matches the energy produced by the system as work out of the system.
Thermal efficiency is the quotient of the net work from the system to the heat added to the system.
Equation 2:
Alternatively, thermal efficiency can be derived strictly from the heat added and the heat rejected.
Supplying the fictitious values
In the Otto cycle, there is no heat transfer during the process 1–2 and 3–4 as they are isentropic processes. Heat is supplied only during the constant volume processes 2–3 and heat is rejected only during the constant volume processes 4–1.
The above values are absolute values that might, for instance, have units of joules (assuming the MKS system of units is to be used) and would be of use for a particular engine with particular dimensions. In the study of thermodynamic systems the extensive quantities such as energy, volume, or entropy (versus intensive quantities of temperature and pressure) are placed on a unit mass basis, and so too are the calculations, making those more general and therefore of more general use. Hence, each term involving an extensive quantity could be divided by the mass, giving the terms units of joules/kg (specific energy), meters³/kg (specific volume), or joules/(kelvin·kg) (specific entropy, heat capacity), etc., and would be represented using lower-case letters, u, v, s, etc.
Equation 1 can now be related to the specific heat equation for constant volume. The specific heats are particularly useful for thermodynamic calculations involving the ideal gas model.
Rearranging yields:
Inserting the specific heat equation into the thermal efficiency equation (Equation 2) yields:
Upon rearrangement:
Next, noting from the diagrams that T4/T1 = T3/T2 (see isentropic relations for an ideal gas), both of these ratios can be omitted. The equation then reduces to:
Equation 2:
Since the Otto cycle uses isentropic processes during the compression (process 1 to 2) and expansion (process 3 to 4) the isentropic equations of ideal gases and the constant pressure/volume relations can be used to yield Equations 3 & 4.
Equation 3:
Equation 4:
where γ is the specific heat ratio.
The derivations of the previous equations are found by solving these four equations respectively (where R is the specific gas constant):
Further simplifying Equation 4, where r is the compression ratio V1/V2:
Equation 5:
From inverting Equation 4 and inserting it into Equation 2 the final thermal efficiency can be expressed as:
Equation 6:
From analyzing Equation 6 it is evident that the Otto cycle efficiency depends directly upon the compression ratio r. Since γ for air is 1.4, an increase in r will produce an increase in efficiency. However, γ for the combustion products of the fuel/air mixture is often taken to be approximately 1.3.
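The dependence just described can be made concrete with the standard air-standard result, η = 1 − r^(1 − γ). The short sketch below evaluates it for the two γ values mentioned above; the compression ratios are example values.

```python
# Ideal Otto-cycle thermal efficiency, eta = 1 - r**(1 - gamma), evaluated for
# the two gamma values mentioned above. Compression ratios are example values.

def otto_efficiency(r, gamma):
    return 1.0 - r ** (1.0 - gamma)

for r in (8, 10, 12):
    print(f"r = {r:2d}: eta(gamma=1.4) = {otto_efficiency(r, 1.4):.3f}, "
          f"eta(gamma=1.3) = {otto_efficiency(r, 1.3):.3f}")
```

For example, at r = 10 the ideal efficiency is about 0.60 with γ = 1.4 but only about 0.50 with γ = 1.3, showing why the assumed specific heat ratio matters.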
The foregoing discussion implies that it is more efficient to have a high compression ratio. The standard ratio is approximately 10:1 for typical automobiles. Usually this does not increase much because of the possibility of autoignition, or "knock", which places an upper limit on the compression ratio. During the compression process 1–2 the temperature rises, therefore an increase in the compression ratio causes an increase in temperature. Autoignition occurs when the temperature of the fuel/air mixture becomes too high before it is ignited by the flame front. The compression stroke is intended to compress the products before the flame ignites the mixture. If the compression ratio is increased, the mixture may auto-ignite before the compression stroke is complete, leading to "engine knocking". This can damage engine components and will decrease the brake horsepower of the engine.
Power
The power produced by the Otto cycle is the energy developed per unit of time. Otto engines are called four-stroke engines.
The intake stroke and compression stroke require one rotation of the engine crankshaft. The power stroke and exhaust stroke require another rotation. For two rotations there is one work-generating stroke.
From the above cycle analysis, the net work produced by the system is:
(again, using the sign convention, the minus sign implies energy is leaving the system as work)
If the units used were MKS, the cycle would have produced one joule of energy in the form of work. For an engine of a particular displacement, such as one liter, the mass of gas of the system can be calculated assuming the engine is operating at standard temperature (20 °C) and pressure (1 atm). Using the universal gas law, the mass of one liter of gas at room temperature and sea-level pressure is:
V=0.001 m3, R=0.286 kJ/(kg·K), T=293 K, P=101.3 kN/m2
M=0.00121 kg
At an engine speed of 3000 RPM there are 1500 work-strokes/minute or 25 work-strokes/second.
Power is 25 times that since there are 25 work-strokes/second
If the engine uses multiple cylinders with the same displacement, the result would be multiplied by the number of cylinders. These results are the product of the values of the internal energy that were assumed for the four states of the system at the end of each of the four strokes (two rotations). They were selected only for the sake of illustration, and are obviously of low value. Substituting actual values from a real engine would produce results closer to those of the engine, but still higher than the actual engine's output, as there are many simplifying assumptions made in the analysis that overlook inefficiencies; such results would overestimate the power output.
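The arithmetic in the preceding paragraphs can be checked with a short script that reuses the same illustrative numbers: the ideal-gas mass of a one-litre charge, and the power from the assumed 1 J of net work per cycle at 3000 RPM with one power stroke per two revolutions.

```python
# Reproduces the illustrative estimate above with the same values from the text.

P_kpa, V_m3, R_kj, T_k = 101.3, 0.001, 0.286, 293.0
mass_kg = P_kpa * V_m3 / (R_kj * T_k)       # ideal gas law, ~0.00121 kg

work_per_cycle_j = 1.0                      # net work assumed in the worked example
rpm = 3000
work_strokes_per_s = rpm / 60.0 / 2.0       # four-stroke: one power stroke per 2 revs

power_w = work_per_cycle_j * work_strokes_per_s
print(f"charge mass: {mass_kg:.5f} kg, power: {power_w:.0f} W per cylinder")
```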
Increasing power and efficiency
The difference between the exhaust and intake pressures and temperatures means that some increase in efficiency can be gained by use of a turbocharger, removing from the exhaust flow some part of the remaining energy and transferring that to the intake flow to increase the intake pressure. A gas turbine can extract useful work energy from the exhaust stream and that can then be used to pressurize the intake air. The pressure and temperature of the exhausting gases would be reduced as they expand through the gas turbine and that work is then applied to the intake gas stream, increasing its pressure and temperature. The transfer of energy amounts to an efficiency improvement and the resulting power density of the engine is also improved. The intake air is typically cooled so as to reduce its volume as the work produced per stroke is a direct function of the amount of mass taken into the cylinder; denser air will produce more work per cycle. Practically speaking the intake air mass temperature must also be reduced to prevent premature ignition in a petrol fueled engine; hence, an intercooler is used to remove some energy as heat and so reduce the intake temperature. Such a scheme both increases the engine's efficiency and power.
The application of a supercharger driven by the crankshaft does increase the power output (power density) but does not increase efficiency, as it uses some of the net work produced by the engine to pressurize the intake air and fails to extract the otherwise wasted energy associated with the flow of exhaust at high temperature and pressure to the ambient.
References
Piston engines
Thermodynamic cycles
fr:Cycle de Beau de Rochas#Étude thermodynamique
Linear energy transfer
In dosimetry, linear energy transfer (LET) is the amount of energy that an ionizing particle transfers to the material traversed per unit distance. It describes the action of radiation upon matter.
It is identical to the retarding force acting on a charged ionizing particle travelling through the matter. By definition, LET is a positive quantity. LET depends on the nature of the radiation as well as on the material traversed.
A high LET will slow down the radiation more quickly, generally making shielding more effective and preventing deep penetration. On the other hand, the higher concentration of deposited energy can cause more severe damage to any microscopic structures near the particle track. If a microscopic defect can cause larger-scale failure, as is the case in biological cells and microelectronics, the LET helps explain why radiation damage is sometimes disproportionate to the absorbed dose. Dosimetry attempts to factor in this effect with radiation weighting factors.
Linear energy transfer is closely related to stopping power, since both equal the retarding force. The unrestricted linear energy transfer is identical to linear electronic stopping power, as discussed below. But the stopping power and LET concepts are different in the respect that total stopping power has the nuclear stopping power component, and this component does not cause electronic excitations. Hence nuclear stopping power is not contained in LET.
The appropriate SI unit for LET is the newton, but it is most typically expressed in units of kiloelectronvolts per micrometre (keV/μm) or megaelectronvolts per centimetre (MeV/cm). While medical physicists and radiobiologists usually speak of linear energy transfer, most non-medical physicists talk about stopping power.
Restricted and unrestricted LET
The secondary electrons produced during the process of ionization by the primary charged particle are conventionally called delta rays, if their energy is large enough so that they themselves can ionize. Many studies focus upon the energy transferred in the vicinity of the primary particle track and therefore exclude interactions that produce delta rays with energies larger than a certain value Δ. This energy limit is meant to exclude secondary electrons that carry energy far from the primary particle track, since a larger energy implies a larger range. This approximation neglects the directional distribution of secondary radiation and the non-linear path of delta rays, but simplifies analytic evaluation.
In mathematical terms, the restricted linear energy transfer is defined by L_Δ = dE_Δ / dx, where dE_Δ is the energy loss of the charged particle due to electronic collisions while traversing a distance dx, excluding all secondary electrons with kinetic energies larger than Δ. If Δ tends toward infinity, then there are no electrons with larger energy, and the linear energy transfer becomes the unrestricted linear energy transfer, which is identical to the linear electronic stopping power. Here, the use of the term "infinity" is not to be taken literally; it simply means that no energy transfers, however large, are excluded.
Application to radiation types
During his investigations of radioactivity, Ernest Rutherford coined the terms alpha rays, beta rays and gamma rays for the three types of emissions that occur during radioactive decay.
Alpha particles and other positive ions
Linear energy transfer is best defined for monoenergetic ions, i.e. protons, alpha particles, and the heavier nuclei called HZE ions found in cosmic rays or produced by particle accelerators. These particles cause frequent direct ionizations within a narrow diameter around a relatively straight track, thus approximating continuous deceleration. As they slow down, the changing particle cross section modifies their LET, generally increasing it to a Bragg peak just before achieving thermal equilibrium with the absorber, i.e., before the end of range. At equilibrium, the incident particle essentially comes to rest or is absorbed, at which point LET is undefined.
Since the LET varies over the particle track, an average value is often used to represent the spread. Averages weighted by track length or weighted by absorbed dose are present in the literature, with the latter being more common in dosimetry. These averages are not widely separated for heavy particles with high LET, but the difference becomes more important in the other type of radiations discussed below.
Often overlooked for alpha particles is the recoil nucleus of the alpha emitter, which carries significant ionization energy, roughly 5% of that of the alpha particle, but because of its high electric charge and large mass has an ultra-short range of only a few angstroms. This can skew results significantly if one examines the relative biological effectiveness of the alpha particle in the cytoplasm while ignoring the recoil-nucleus contribution, since the alpha-emitting parent, typically one of numerous heavy metals, is often adhered to chromatin material such as chromosomes.
Beta particles
Electrons produced in nuclear decay are called beta particles. Because of their low mass relative to atoms, they are strongly scattered by nuclei (Coulomb or Rutherford scattering), much more so than heavier particles. Beta particle tracks are therefore crooked. In addition to producing secondary electrons (delta rays) while ionizing atoms, they also produce bremsstrahlung photons. A maximum range of beta radiation can be defined experimentally which is smaller than the range that would be measured along the particle path.
Gamma rays
Gamma rays are photons, whose absorption cannot be described by LET. When a gamma quantum passes through matter, it may be absorbed in a single process (photoelectric effect, Compton effect or pair production), or it continues unchanged on its path. (Only in the case of the Compton effect does another gamma quantum of lower energy proceed.) Gamma ray absorption therefore obeys an exponential law (see Gamma rays); the absorption is described by the absorption coefficient or by the half-value thickness.
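The exponential law just mentioned is easy to illustrate numerically; the attenuation coefficient below is an arbitrary illustrative value, not a tabulated one for any particular material or photon energy.

```python
# Exponential attenuation of a gamma beam, as described above.
import math

mu_per_cm = 0.2                                   # assumed linear attenuation coefficient
half_value_thickness_cm = math.log(2) / mu_per_cm # thickness halving the intensity

def transmitted_fraction(x_cm):
    return math.exp(-mu_per_cm * x_cm)

print(f"half-value thickness: {half_value_thickness_cm:.2f} cm")
print(f"fraction transmitted through 10 cm: {transmitted_fraction(10.0):.3f}")
```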
LET has therefore no meaning when applied to photons. However, many authors speak of "gamma LET" anyway, where they are actually referring to the LET of the secondary electrons, i.e., mainly Compton electrons, produced by the gamma radiation. The secondary electrons will ionize far more atoms than the primary photon. This gamma LET has little relation to the attenuation rate of the beam, but it may have some correlation to the microscopic defects produced in the absorber. Even a monoenergetic gamma beam will produce a spectrum of electrons, and each secondary electron will have a variable LET as it slows down, as discussed above. The "gamma LET" is therefore an average.
The transfer of energy from an uncharged primary particle to charged secondary particles can also be described by using the mass energy-transfer coefficient.
Biological effects
Many studies have attempted to relate linear energy transfer to the relative biological effectiveness (RBE) of radiation, with inconsistent results. The relationship varies widely depending on the nature of the biological material and the choice of endpoint used to define effectiveness. Even when these are held constant, different radiation spectra that share the same LET can have significantly different RBEs.
Despite these variations, some overall trends are commonly seen. The RBE is generally independent of LET for any LET less than 10 keV/μm, so a low LET is normally chosen as the reference condition where RBE is set to unity. Above 10 keV/μm, some systems show a decline in RBE with increasing LET, while others show an initial increase to a peak before declining. Mammalian cells usually experience a peak RBE for LETs around 100 keV/μm. These are very rough numbers; for example, one set of experiments found a peak at 30 keV/μm.
The International Commission on Radiation Protection (ICRP) proposed a simplified model of RBE-LET relationships for use in dosimetry. They defined a quality factor of radiation as a function of dose-averaged unrestricted LET in water, and intended it as a highly uncertain, but generally conservative, approximation of RBE. Different iterations of their model are shown in the graph to the right. The 1966 model was integrated into their 1977 recommendations for radiation protection in ICRP 26. This model was largely replaced in the 1991 recommendations of ICRP 60 by radiation weighting factors that were tied to the particle type and independent of LET. ICRP 60 revised the quality factor function and reserved it for use with unusual radiation types that did not have radiation weighting factors assigned to them.
Application fields
When used to describe the dosimetry of ionizing radiation in the biological or biomedical setting, the LET (like linear stopping power) is usually expressed in units of keV/μm.
In space applications, electronic devices can be disturbed by the passage of energetic electrons, protons or heavier ions that may alter the state of a circuit, producing "single event effects". The effect of the radiation is described by the LET (which is here taken as synonymous with stopping power), typically expressed in units of MeV·cm²/mg of material, the units used for mass stopping power (the material in question is usually Si for MOS devices). These units arise from the energy lost by the particle to the material per unit path length (MeV/cm) divided by the density of the material (mg/cm³).
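The relation between the two unit conventions can be sketched as follows; the silicon density is the standard value, while the example LET figure is arbitrary.

```python
# Converting a mass stopping power in MeV*cm^2/mg to a linear LET in keV/um,
# here for silicon. The example LET value is arbitrary.

si_density_mg_cm3 = 2330.0                     # 2.33 g/cm^3

def mass_let_to_linear_keV_um(mass_let_MeV_cm2_mg, density_mg_cm3):
    MeV_per_cm = mass_let_MeV_cm2_mg * density_mg_cm3    # multiply back by density
    return MeV_per_cm * 1000.0 / 1.0e4                   # MeV/cm -> keV/um

print(f"1 MeV*cm^2/mg in Si is about "
      f"{mass_let_to_linear_keV_um(1.0, si_density_mg_cm3):.0f} keV/um")
```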
"Soft errors" of electronic devices due to cosmic rays on earth are, however, mostly due to neutrons which do not directly interact with the material and whose passage can therefore not be described by LET. Rather, one measures their effect in terms of neutrons per cm2 per hour, see Soft error.
References
Nuclear physics
Radiation effects
Radiobiology
Aestivation
Aestivation (from Latin aestas, summer; also spelled estivation in American English) is a state of animal dormancy, similar to hibernation, although taking place in the summer rather than the winter. Aestivation is characterized by inactivity and a lowered metabolic rate, entered in response to high temperatures and arid conditions. It takes place during times of heat and dryness, which are often the summer months.
Invertebrate and vertebrate animals are known to enter this state to avoid damage from high temperatures and the risk of desiccation. Both terrestrial and aquatic animals undergo aestivation. Fossil records suggest that aestivation may have evolved several hundred million years ago.
Physiology
Organisms that aestivate appear to be in a fairly "light" state of dormancy, as their physiological state can be rapidly reversed, and the organism can quickly return to a normal state. A study done on Otala lactea, a snail native to parts of Europe and Northern Africa, shows that they can wake from their dormant state within ten minutes of being introduced to a wetter environment.
The primary physiological and biochemical concerns for an aestivating animal are to conserve energy, retain water in the body, ration the use of stored energy, handle the nitrogenous end products, and stabilize bodily organs, cells, and macromolecules. This can be quite a task as hot temperatures and arid conditions may last for months, in some cases for years. The depression of metabolic rate during aestivation causes a reduction in macromolecule synthesis and degradation. To stabilise the macromolecules, aestivators will enhance antioxidant defenses and elevate chaperone proteins. This is a widely used strategy across all forms of hypometabolism. These physiological and biochemical concerns appear to be the core elements of hypometabolism throughout the animal kingdom. In other words, animals which aestivate appear to go through nearly the same physiological processes as animals that hibernate.
Invertebrates
Mollusca
Gastropoda: some air-breathing land snails, including species in the genera Helix, Cernuella, Theba, Helicella, Achatina and Otala, commonly aestivate during periods of heat. Some species move into shaded vegetation or rubble. Others climb up tall plants, including crop species as well as bushes and trees, and will also climb human-made structures such as posts, fences, etc.
Their habit of climbing vegetation to aestivate has caused more than one introduced snail species to be declared an agricultural nuisance.
To seal the opening to their shell to prevent water loss, pulmonate land snails secrete a membrane of dried mucus called an epiphragm. In certain species, such as Helix pomatia, this barrier is reinforced with calcium carbonate, and thus it superficially resembles an operculum, except that it has a tiny hole to allow some oxygen exchange.
There is a decrease in metabolic rate and reduced rate of water loss in aestivating snails like Rhagada tescorum, Sphincterochila boissieri and others.
Arthropoda
Insecta: Lady beetles (Coccinellidae) have been reported to aestivate. Another beetle, Blepharida rhois, also aestivates; it usually does so when the temperature is warmer and re-emerges in the late summer or early fall. Mosquitoes are also reported to undergo aestivation. False honey ants are well known for being winter-active and aestivate in temperate climates. Bogong moths aestivate over the summer to avoid the heat and lack of food sources. Adult alfalfa weevils (Hypera postica) aestivate during the summer in the southeastern United States, during which their metabolism, respiration, and nervous systems show a dampening of activity.
Crustacea: An example of a crustacean that undergoes aestivation is the Australian crab Austrothelphusa transversa, which aestivates underground during the dry season.
Vertebrates
Reptiles and amphibians
Non-mammalian animals that aestivate include North American desert tortoises, crocodiles, and salamanders. Some amphibians (e.g. the cane toad and greater siren) aestivate during the hot dry season by moving underground where it is cooler and more humid. The California red-legged frog may aestivate to conserve energy when its food and water supply is low.
The water-holding frog has an aestivation cycle. It buries itself in sandy ground in a secreted, water-tight mucus cocoon during periods of hot, dry weather. Australian Aboriginals discovered a means to take advantage of this by digging up one of these frogs and squeezing it, causing the frog to empty its bladder. This dilute urine—up to half a glassful—can be drunk. However, this will cause the death of the frog which will be unable to survive until the next rainy season without the water it had stored.
The western swamp turtle aestivates to survive hot summers in the ephemeral swamps it lives in. It buries itself in various media which change depending on location and available substrates. Because the species is critically endangered, the Perth Zoo began a conservation and breeding program for it. However, zookeepers were unaware of the importance of their aestivation cycle and during the first summer period would perform weekly checks on the animals. This repeated disturbance was detrimental to the health of the animals, with many losing significant weight and some dying. The zookeepers quickly changed their procedures and now leave their captive turtles undisturbed during their aestivation period.
Fish
African lungfish also aestivate, as can salamanderfish.
Mammals
Although relatively uncommon, a small number of mammals aestivate. Animal physiologist Kathrin Dausmann of Philipps University of Marburg, Germany, and coworkers presented evidence in a 2004 edition of Nature that the Malagasy fat-tailed dwarf lemur hibernates or aestivates in a small tree hole for seven months of the year. According to the Oakland Zoo in California, four-toed hedgehogs are thought to aestivate during the dry season.
See also
Critical thermal maximum
Hibernation induction trigger
Siesta
Torpor
Splooting
References
Further reading
External links
Abstract of an Australian paper on aestivation in snails
Some info in aestivation in the snail Theba pisana
Hibernation on demand
Basic definition
Sleep physiology
Ethology
Articles containing video clips
Four-current
In special and general relativity, the four-current (technically the four-current density) is the four-dimensional analogue of the current density, with units of charge per unit time per unit area. Also known as vector current, it is used in the geometric context of four-dimensional spacetime, rather than separating time from three-dimensional space. Mathematically it is a four-vector and is Lorentz covariant.
This article uses the summation convention for indices. See covariance and contravariance of vectors for background on raised and lowered indices, and raising and lowering indices on how to switch between them.
Definition
Using the Minkowski metric of metric signature (+, −, −, −), the four-current components are given by:
J^α = (c ρ, j¹, j², j³) = (c ρ, j)
where:
c is the speed of light;
ρ is the volume charge density;
j is the conventional current density;
The dummy index α labels the spacetime dimensions.
Motion of charges in spacetime
This can also be expressed in terms of the four-velocity u^α by the equation:
J^α = (c ρ, ρ u) = ρ₀ u^α
where:
ρ is the charge density measured by an inertial observer O who sees the electric current moving at speed u (the magnitude of the 3-velocity);
ρ₀ is "the rest charge density", i.e., the charge density for a comoving observer (an observer moving at the speed u, with respect to the inertial observer O, along with the charges).
Qualitatively, the change in charge density (charge per unit volume) is due to the Lorentz-contracted volume occupied by the charge: ρ = γ ρ₀, where γ is the Lorentz factor.
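A small numeric illustration of the relation ρ = γ ρ₀ stated above; the drift speed and rest-frame charge density used here are arbitrary example values.

```python
# Charge density seen by an inertial observer versus the rest-frame density.
import math

c = 299_792_458.0          # speed of light, m/s
v = 0.8 * c                # assumed speed of the charges relative to the observer
rho_0 = 1.0e-6             # assumed rest-frame charge density, C/m^3

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
rho = gamma * rho_0        # density measured by the inertial observer

print(f"gamma = {gamma:.3f}, observed charge density = {rho:.3e} C/m^3")
```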
Physical interpretation
Charges (free or as a distribution) at rest will appear to remain at the same spatial position for some interval of time (as long as they're stationary). When they do move, this corresponds to changes in position, therefore the charges have velocity, and the motion of charge constitutes an electric current. This means that charge density is related to time, while current density is related to space.
The four-current unifies charge density (related to electricity) and current density (related to magnetism) in one electromagnetic entity.
Continuity equation
In special relativity, the statement of charge conservation is that the Lorentz invariant divergence of J is zero:
∂_α J^α = 0,
where ∂_α is the four-gradient. This is the continuity equation.
In general relativity, the continuity equation is written as:
J^α_{;α} = 0,
where the semi-colon represents a covariant derivative.
Maxwell's equations
The four-current appears in two equivalent formulations of Maxwell's equations: in terms of the four-potential, when the Lorenz gauge condition is fulfilled,
□ A^α = μ₀ J^α,
where □ is the D'Alembert operator; or in terms of the electromagnetic field tensor:
∇_β F^{βα} = μ₀ J^α,
where μ0 is the permeability of free space and ∇_β is the covariant derivative.
General relativity
In general relativity, the four-current is defined as the divergence of the electromagnetic displacement, defined as:
then:
Quantum field theory
The four-current density of charge is an essential component of the Lagrangian density used in quantum electrodynamics. In 1956 Semyon Gershtein and Yakov Zeldovich considered the conserved vector current (CVC) hypothesis for electroweak interactions.
See also
Four-vector
Noether's theorem
Covariant formulation of classical electromagnetism
Ricci calculus
References
Electromagnetism
Four-vectors
Propulsion
Propulsion is the generation of force by any combination of pushing or pulling to modify the translational motion of an object, which is typically a rigid body (or an articulated rigid body) but may also concern a fluid. The term is derived from two Latin words: pro, meaning before or forward; and pellere, meaning to drive.
A propulsion system consists of a source of mechanical power, and a propulsor (means of converting this power into propulsive force).
Plucking a guitar string to induce a vibratory translation is technically a form of propulsion of the guitar string; this is not commonly depicted in this vocabulary, even though human muscles are considered to propel the fingertips. The motion of an object moving through a gravitational field is affected by the field, and within some frames of reference physicists speak of the gravitational field generating a force upon the object, but for deep theoretic reasons, physicists now consider the curved path of an object moving freely through space-time as shaped by gravity as a natural movement of the object, unaffected by a propulsive force (in this view, the falling apple is considered to be unpropelled, while the observer of the apple standing on the ground is considered to be propelled by the reactive force of the Earth's surface).
Biological propulsion systems use an animal's muscles as the power source, and limbs such as wings, fins or legs as the propulsors. A technological system uses an engine or motor as the power source (commonly called a powerplant), and wheels and axles, propellers, or a propulsive nozzle to generate the force. Components such as clutches or gearboxes may be needed to connect the motor to axles, wheels, or propellers. A technological/biological system may use human, or trained animal, muscular work to power a mechanical device.
Small objects, such as bullets, propelled at high speed are known as projectiles; larger objects propelled at high speed, often into ballistic flight, are known as rockets or missiles.
Influencing rotational motion is also technically a form of propulsion, but in everyday speech an automotive mechanic might prefer to describe the hot gases in an engine cylinder as propelling the piston (translational motion), which drives the crankshaft (rotational motion); the crankshaft then drives the wheels (rotational motion), and the wheels propel the car forward (translational motion). In common speech, propulsion is associated with spatial displacement more strongly than with locally contained forms of motion, such as rotation or vibration. As another example, internal stresses in a rotating baseball cause the surface of the baseball to travel along a sinusoidal or helical trajectory, which would not happen in the absence of these interior forces; these forces meet the technical definition of propulsion from Newtonian mechanics, but are not commonly described in that language.
Vehicular propulsion
Air propulsion
An aircraft propulsion system generally consists of an aircraft engine and some means to generate thrust, such as a propeller or a propulsive nozzle.
An aircraft propulsion system must achieve two things. First, the thrust from the propulsion system must balance the drag of the airplane when the airplane is cruising. And second, the thrust from the propulsion system must exceed the drag of the airplane for the airplane to accelerate. The greater the difference between the thrust and the drag, called the excess thrust, the faster the airplane will accelerate.
Some aircraft, like airliners and cargo planes, spend most of their life in a cruise condition. For these airplanes, excess thrust is not as important as high engine efficiency and low fuel usage. Since thrust depends on both the amount of gas moved and the velocity, we can generate high thrust by accelerating a large mass of gas by a small amount, or by accelerating a small mass of gas by a large amount. Because of the aerodynamic efficiency of propellers and fans, it is more fuel efficient to accelerate a large mass by a small amount, which is why high-bypass turbofans and turboprops are commonly used on cargo planes and airliners.
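The trade-off can be made concrete with a minimal numerical sketch (hypothetical figures; static thrust, ideal jet starting from rest, intake momentum and all losses neglected): for a fixed thrust F = ṁΔv, the kinetic power left in the jet is ½ṁΔv² = ½FΔv, so moving a larger mass of air more slowly costs less power.

def jet_power_for_thrust(thrust_n, mass_flow_kg_s):
    """Ideal kinetic power deposited in the jet for a given static thrust.
    thrust = mass_flow * delta_v, power = 0.5 * mass_flow * delta_v**2."""
    delta_v = thrust_n / mass_flow_kg_s       # velocity given to the air (m/s)
    return 0.5 * mass_flow_kg_s * delta_v**2  # watts

# The same 50 kN of thrust, produced two different ways:
print(jet_power_for_thrust(50e3, 500.0))  # large mass flow, small delta_v -> 2.5 MW
print(jet_power_for_thrust(50e3, 50.0))   # small mass flow, large delta_v -> 25 MW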
Some aircraft, like fighter planes or experimental high speed aircraft, require very high excess thrust to accelerate quickly and to overcome the high drag associated with high speeds. For these airplanes, engine efficiency is not as important as very high thrust. Modern combat aircraft usually have an afterburner added to a low bypass turbofan. Future hypersonic aircraft may use some type of ramjet or rocket propulsion.
Ground
Ground propulsion is any mechanism for propelling solid bodies along the ground, usually for the purposes of transportation. The propulsion system often consists of a combination of an engine or motor, a gearbox and wheel and axles in standard applications.
Maglev
Maglev (derived from magnetic levitation) is a system of transportation that uses magnetic levitation to suspend, guide and propel vehicles with magnets rather than using mechanical methods, such as wheels, axles and bearings. With maglev a vehicle is levitated a short distance away from a guide way using magnets to create both lift and thrust. Maglev vehicles are claimed to move more smoothly and quietly and to require less maintenance than wheeled mass transit systems. It is claimed that non-reliance on friction also means that acceleration and deceleration can far surpass that of existing forms of transport. The power needed for levitation is not a particularly large percentage of the overall energy consumption; most of the power used is needed to overcome air resistance (drag), as with any other high-speed form of transport.
Marine
Marine propulsion is the mechanism or system used to generate thrust to move a ship or boat across water. While paddles and sails are still used on some smaller boats, most modern ships are propelled by mechanical systems consisting of a motor or engine turning a propeller, or less frequently, in jet drives, an impeller. Marine engineering is the discipline concerned with the design of marine propulsion systems.
Steam engines were the first mechanical engines used in marine propulsion, but have mostly been replaced by two-stroke or four-stroke diesel engines, outboard motors, and gas turbine engines on faster ships. Nuclear reactors producing steam are used to propel warships and icebreakers, and there have been attempts to utilize them to power commercial vessels. Electric motors have been used on submarines and electric boats and have been proposed for energy-efficient propulsion. Recent developments in liquefied natural gas (LNG) fuelled engines are gaining recognition for their low emissions and cost advantages.
Space
Spacecraft propulsion is any method used to accelerate spacecraft and artificial satellites. There are many different methods. Each method has drawbacks and advantages, and spacecraft propulsion is an active area of research. However, most spacecraft today are propelled by forcing a gas from the back/rear of the vehicle at very high speed through a supersonic de Laval nozzle. This sort of engine is called a rocket engine.
All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north–south stationkeeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used ion thrusters and Hall-effect thrusters (two different types of electric propulsion) to great success.
Cable
A cable car is any of a variety of transportation systems relying on cables to pull vehicles along or lower them at a steady rate. The terminology also refers to the vehicles on these systems. The cable car vehicles are motorless and engineless; they are pulled by a cable that is driven by an off-board motor.
Animal
Animal locomotion, which is the act of self-propulsion by an animal, has many manifestations, including running, swimming, jumping and flying. Animals move for a variety of reasons, such as to find food, a mate, or a suitable microhabitat, and to escape predators. For many animals the ability to move is essential to survival and, as a result, selective pressures have shaped the locomotion methods and mechanisms employed by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators (such as frogs) are likely to have costly but very fast locomotion. The study of animal locomotion is typically considered to be a sub-field of biomechanics.
Locomotion requires energy to overcome friction, drag, inertia, and gravity, though in many circumstances some of these factors are negligible. In terrestrial environments gravity must be overcome, though the drag of air is much less of an issue. In aqueous environments however, friction (or drag) becomes the major challenge, with gravity being less of a concern. Although animals with natural buoyancy need not expend much energy maintaining vertical position, some will naturally sink and must expend energy to remain afloat. Drag may also present a problem in flight, and the aerodynamically efficient body shapes of birds highlight this point. Flight presents a different problem from movement in water however, as there is no way for a living organism to have lower density than air. Limbless organisms moving on land must often contend with surface friction, but do not usually need to expend significant energy to counteract gravity.
Newton's third law of motion is widely used in the study of animal locomotion: if at rest, to move forward an animal must push something backward. Terrestrial animals must push the solid ground; swimming and flying animals must push against a fluid (either water or air). The effect of forces during locomotion on the design of the skeletal system is also important, as is the interaction between locomotion and muscle physiology, in determining how the structures and effectors of locomotion enable or limit animal movement.
See also
Jetpack
Transport
References
External links
Vehicle technology
Beam-powered propulsion | Beam-powered propulsion, also known as directed energy propulsion, is a class of aircraft or spacecraft propulsion that uses energy beamed to the spacecraft from a remote power plant to provide energy. The beam is typically either a microwave or a laser beam, and it is either pulsed or continuous. A continuous beam lends itself to thermal rockets, photonic thrusters, and light sails. In contrast, a pulsed beam lends itself to ablative thrusters and pulse detonation engines.
The rule of thumb that is usually quoted is that it takes a megawatt of power beamed to a vehicle per kg of payload while it is being accelerated to permit it to reach low Earth orbit.
Other than launching to orbit, applications for moving around the world quickly have also been proposed.
Background
Rockets are momentum machines; they use mass ejected from the rocket to provide momentum to the rocket. Momentum is the product of mass and velocity, so rockets generally attempt to put as much velocity into their working mass as possible, thereby minimizing the needed working mass. To accelerate the working mass, energy is required. In a conventional rocket, the fuel is chemically combined to provide the energy, and the resulting fuel products, the ash or exhaust, are used as the working mass.
There is no particular reason why the same fuel has to be used for both energy and momentum. In the jet engine, for instance, the fuel is used only to produce energy, and the air through which the aircraft flies provides the working mass. In modern jet engines, the amount of air propelled is much greater than the amount used for energy. However, this is not a solution for rockets, as they quickly climb to altitudes where the air is too thin to be useful as a source of working mass.
Rockets can carry their working mass and use other energy sources. The problem is finding an energy source with a power-to-weight ratio that competes with chemical fuels. Small nuclear reactors can compete in this regard, and considerable work on nuclear thermal propulsion was carried out in the 1960s, but environmental concerns and rising costs led to the ending of most of these programs.
Further improvement can be made by removing the energy source from the spacecraft. If the nuclear reactor is left on the ground and its energy is transmitted to the spacecraft, the reactor's weight is removed as well. The issue then is getting the energy to the spacecraft. This is the idea behind beamed power.
With beamed propulsion, one can leave the power source stationary on the ground and directly (or via a heat exchanger) heat propellant on the spacecraft with a maser or a laser beam from a fixed installation. This permits the spacecraft to leave its power source at home, saving significant amounts of mass and greatly improving performance.
Laser propulsion
Since a laser can heat propellant to extremely high temperatures, this potentially greatly improves the efficiency of a rocket, as exhaust velocity is proportional to the square root of the temperature. Normal chemical rockets have an exhaust speed limited by the fixed amount of energy in the propellants, but beamed propulsion systems have no particular theoretical limit (although, in practice, there are temperature limits).
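As a rough illustration of this square-root scaling (an idealization: fixed propellant composition, all else equal, and a purely hypothetical reference chamber temperature), the relative gain in exhaust velocity from hotter propellant can be sketched as:

import math

def exhaust_velocity_gain(t_new_k, t_ref_k):
    """Ratio of ideal exhaust velocities for two chamber temperatures,
    using v_e proportional to sqrt(T)."""
    return math.sqrt(t_new_k / t_ref_k)

# Heating the propellant to four times a (hypothetical) 3500 K chemical
# chamber temperature roughly doubles the exhaust velocity:
print(exhaust_velocity_gain(4 * 3500.0, 3500.0))  # -> 2.0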
Microwave propulsion
In microwave thermal propulsion, an external microwave beam is used to heat a refractory heat exchanger to >1,500 K, heating a propellant such as hydrogen, methane, or ammonia. This improves the propulsion system's specific impulse and thrust/weight ratio relative to conventional rocket propulsion. For example, hydrogen can provide a specific impulse of 700–900 seconds and a thrust/weight ratio of 50-150.
A variation, developed by brothers James Benford and Gregory Benford, is to use thermal desorption of propellant trapped in the material of a massive microwave sail. This produces a very high acceleration compared to microwave-pushed sails alone.
Electric propulsion
Some proposed spacecraft propulsion mechanisms use electrically powered spacecraft propulsion, in which electrical energy is used by an electrically powered rocket engine, such as an ion thruster or plasma propulsion engine. Usually, these schemes assume either solar panels or an onboard reactor. However, both power sources are heavy.
Beamed propulsion in the form of a laser can send power to a photovoltaic panel for Laser electric propulsion. In this system, if a high intensity is incident on the solar array, careful design of the panels is necessary to avoid a fall-off in conversion efficiency due to heating effects. John Brophy has analyzed the transmission of laser power to a photovoltaic array powering a high-efficiency electric propulsion system as a means of accomplishing high delta-V missions such as an interstellar precursor mission in a NASA Innovative Advanced Concepts project.
A microwave beam could be used to send power to a rectenna for microwave electric propulsion. Microwave broadcast power has been practically demonstrated several times (e.g., in Goldstone, California, in 1974). Rectennas are potentially lightweight and can handle high power at high conversion efficiency. However, rectennas must be huge for a significant amount of power to be captured.
Direct impulse
A beam could also provide impulse by directly "pushing" on the sail.
One example is using a solar sail to reflect a laser beam. This concept, called a laser-pushed lightsail, was initially proposed by G. Marx but first analyzed in detail, and elaborated on, by physicist Robert L. Forward in 1989 as a method of interstellar travel that would avoid extremely high mass ratios by not carrying fuel. Further analysis of the concept was done by Landis, Mallove and Matloff, Andrews, Lubin, and others.
Forward proposed pushing a sail with a microwave beam in a later paper. This has the advantage that the sail need not be a continuous surface. Forward tagged his proposal for an ultralight sail "Starwisp". A later analysis by Landis suggested that the Starwisp concept as initially proposed by Forward would not work, but variations on the proposal might be implemented.
The beam has to have a large diameter so that only a small portion of the beam misses the sail due to diffraction, and the laser or microwave antenna has to have good pointing stability so that the craft can tilt its sails fast enough to follow the center of the beam. This gets more important when going from interplanetary travel to interstellar travel and when going from a fly-by mission to a landing mission to a return mission. The laser or the microwave sender would probably be a large phased array of small devices that get their energy directly from solar radiation. The size of the array negates the need for a lens or mirror.
Another beam-pushed concept would be to use a magnetic sail or MMPP sail to divert a beam of charged particles from a particle accelerator or plasma jet. Landis proposed a particle-beam-pushed sail in 1989, and analyzed it in more detail in a 2004 paper. Jordin Kare has proposed a variant to this whereby a "beam" of small laser-accelerated light sails would transfer momentum to a magsail vehicle.
Another beam-pushed concept uses pellets or projectiles of ordinary matter. A stream of pellets from a stationary mass-driver is "reflected" by the spacecraft, cf. mass driver. The spacecraft neither needs energy nor reaction mass for propulsion of its own.
Proposed systems
Lightcraft
A lightcraft is a vehicle currently under development that uses an external pulsed source of laser or maser energy to provide power for producing thrust.
The laser shines on a parabolic reflector on the vehicle's underside, concentrating the light to produce a region of extremely high temperature. The air in this region is heated and expands violently, producing thrust with each pulse of laser light. In space, a lightcraft would have to provide this gas from onboard tanks or from an ablative solid. By leaving the vehicle's power source on the ground and using the ambient atmosphere as reaction mass for much of its ascent, a lightcraft could deliver a substantial percentage of its launch mass to orbit. It could also potentially be very cheap to manufacture.
Testing
Early in the morning of 2 October 2000 at the High Energy Laser Systems Test Facility (HELSTF), Lightcraft Technologies, Inc. (LTI), with the help of Franklin B. Mead of the U.S. Air Force Research Laboratory and Leik Myrabo, set a new world altitude record of 233 feet (71 m) for its 4.8-inch (12.2 cm) diameter laser-boosted rocket in a flight lasting 12.7 seconds. Although much of the 8:35 am flight was spent hovering at 230+ feet, the Lightcraft earned a world record for the longest ever laser-powered free flight and the greatest "air time" (i.e., launch-to-landing/recovery) for a light-propelled object. This is comparable to Robert Goddard's first test flight of his rocket design. Increasing the laser power to 100 kilowatts would enable flights up to a 30-kilometer altitude. The researchers aim to accelerate a one-kilogram microsatellite into low Earth orbit using a custom-built, one-megawatt ground-based laser. Such a system would use just about 20 dollars' worth of electricity, bringing launch costs per kilogram to many times less than current launch costs (which are measured in thousands of dollars).
Myrabo's "lightcraft" design is a reflective funnel-shaped craft that channels heat from the laser toward the center, using a reflective parabolic surface, causing the laser to explode the air underneath it, generating lift. Reflective surfaces in the craft focus the beam into a ring, where it heats air to a temperature nearly five times hotter than the surface of the Sun, causing the air to expand explosively for thrust.
Laser thermal rocket
A laser thermal rocket is a thermal rocket in which the propellant is heated by energy provided by an external laser beam.
In 1992, the late Jordin Kare proposed a simpler, nearer-term concept with a rocket containing liquid hydrogen. The propellant is heated in a heat exchanger that the laser beam shines on before leaving the vehicle via a conventional nozzle. This concept can use continuous-beam lasers, and semiconductor lasers are now cost-effective for this application.
Microwave thermal rocket
In 2002, Kevin L.G. Parkin proposed a similar system using microwaves. In May 2012, the DARPA/NASA Millimeter-wave Thermal Launch System (MTLS) Project began the first steps toward implementing this idea. The MTLS Project was the first to demonstrate a millimeter-wave absorbent refractory heat exchanger, subsequently integrating it into the propulsion system of a small rocket to produce the first millimeter-wave thermal rocket. Simultaneously, it developed the first high-power cooperative target millimeter-wave beam director and used it to attempt the first millimeter-wave thermal rocket launch. Several launches were attempted, but problems with the beam director could not be resolved before funding ran out in March 2014.
Economics
The motivation to develop beam-powered propulsion systems comes from the economic advantages gained due to improved propulsion performance. In the case of beam-powered launch vehicles, better propulsion performance enables some combination of increased payload fraction, increased structural margins, and fewer stages. JASON's 1977 study of laser propulsion, authored by Freeman Dyson, succinctly articulates the promise of beam-powered launch: "Laser propulsion as an idea that may produce a revolution in space technology. A single laser facility on the ground can in theory launch single-stage vehicles into low or high earth orbit. The payload can be 20% or 30% of the vehicle take-off weight. It is far more economical in the use of mass and energy than chemical propulsion, and it is far more flexible in putting identical vehicles into a variety of orbits." This promise was quantified in a 1978 Lockheed Study conducted for NASA: "The results of the study showed that, with advanced technology, laser rocket system with either a space- or ground-based laser transmitter could reduce the national budget allocated to space transportation by 10 to 345 billion dollars over a 10-year life cycle when compared to advanced chemical propulsion systems (LO2-LH2) of equal capability."
Beam director cost
The 1970s-era studies and others since have cited beam director cost as a possible impediment to beam-powered launch systems. A recent cost-benefit analysis estimates that microwave (or laser) thermal rockets would be economical once beam director cost falls below 20 $/Watt. The current cost of suitable lasers is <100 $/Watt and the cost of suitable microwave sources is <$5/Watt. Mass production has lowered the production cost of microwave oven magnetrons to <0.01 $/Watt and some medical lasers to <10 $/Watt, though these are considered unsuitable for beam directors.
Non-spacecraft applications
In 1964 William C. Brown demonstrated a miniature helicopter equipped with a combination antenna and rectifier device called a rectenna. The rectenna converted microwave power into electricity, allowing the helicopter to fly.
In 2002 a Japanese group propelled a tiny aluminium airplane by using a laser to vaporize a water droplet clinging to it, and in 2003 NASA researchers flew an 11-ounce (312 g) model airplane with a propeller powered with solar panels illuminated by a laser. It is possible that such beam-powered propulsion could be useful for long-duration high altitude uncrewed aircraft or balloons, perhaps designed to serve – like satellites do today – as communication relays, science platforms, or surveillance platforms.
A "laser broom" has been proposed to sweep space debris from Earth orbit. This is another proposed use of beam-powered propulsion, used on objects not designed to be propelled by it, for example, small pieces of scrap knocked off ("spalled") satellites. The technique works since the laser power ablates one side of the object, giving an impulse that changes the eccentricity of the object's orbit. The orbit would then intersect the atmosphere and burn up.
See also
Beam Power Challenge – one of the NASA Centennial Challenges
MagBeam
Thinned-array curse
List of laser articles
Project Forward (interstellar)
DEEP-IN
References
External links
Fine-Tuning the Interstellar Lightsail
How Stuff Works: light-propulsion
Spacecraft propulsion
Space access
Force lasers
Guiding center | In physics, the motion of an electrically charged particle such as an electron or ion in a plasma in a magnetic field can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation.
Gyration
If the magnetic field is uniform and all other forces are absent, then the Lorentz force will cause a particle to undergo a constant acceleration perpendicular to both the particle velocity and the magnetic field. This does not affect particle motion parallel to the magnetic field, but results in circular motion at constant speed in the plane perpendicular to the magnetic field. This circular motion is known as the gyromotion. For a particle with mass and charge moving in a magnetic field with strength , it has a frequency, called the gyrofrequency or cyclotron frequency, of
For a speed perpendicular to the magnetic field of , the radius of the orbit, called the gyroradius or Larmor radius, is
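A sketch of the standard expressions in SI units (some authors keep the sign of the charge q in the frequency rather than taking its magnitude):
\[
\omega_c = \frac{|q|\,B}{m},
\qquad
r_g = \frac{m\,v_\perp}{|q|\,B} = \frac{v_\perp}{\omega_c}.
\]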
Parallel motion
Since the magnetic Lorentz force is always perpendicular to the magnetic field, it has no influence (to lowest order) on the parallel motion. In a uniform field with no additional forces, a charged particle will gyrate around the magnetic field according to the perpendicular component of its velocity and drift parallel to the field according to its initial parallel velocity, resulting in a helical orbit. If there is a force with a parallel component, the particle and its guiding center will be correspondingly accelerated.
If the field has a parallel gradient, a particle with a finite Larmor radius will also experience a force in the direction away from the larger magnetic field. This effect is known as the magnetic mirror. While it is closely related to guiding center drifts in its physics and mathematics, it is nevertheless considered to be distinct from them.
General force drifts
Generally speaking, when there is a force on the particles perpendicular to the magnetic field, then they drift in a direction perpendicular to both the force and the field. If F is the force on one particle, then the drift velocity is
These drifts, in contrast to the mirror effect and the non-uniform B drifts, do not depend on finite Larmor radius, but are also present in cold plasmas. This may seem counterintuitive. If a particle is stationary when a force is turned on, where does the motion perpendicular to the force come from and why doesn't the force produce a motion parallel to itself? The answer is the interaction with the magnetic field. The force initially results in an acceleration parallel to itself, but the magnetic field deflects the resulting motion in the drift direction. Once the particle is moving in the drift direction, the magnetic field deflects it back against the external force, so that the average acceleration in the direction of the force is zero. There is, however, a one-time displacement in the direction of the force equal to F/(mωc²), which should be considered a consequence of the polarization drift (see below) while the force is being turned on. The resulting motion is a cycloid. More generally, the superposition of a gyration and a uniform perpendicular drift is a trochoid.
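For reference, the usual textbook form of this force drift (a sketch in SI units, with F the perpendicular force on a particle of charge q in a magnetic field B) is
\[
\mathbf{v}_F = \frac{1}{q}\,\frac{\mathbf{F}\times\mathbf{B}}{B^2},
\]
so that for the gravitational case F = mg discussed below the drift becomes v_g = (m/q) g × B / B², whose mass and charge dependence is what the next section describes.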
All drifts may be considered special cases of the force drift, although this is not always the most useful way to think about them. The obvious cases are electric and gravitational forces. The grad-B drift can be considered to result from the force on a magnetic dipole in a field gradient. The curvature, inertia, and polarisation drifts result from treating the acceleration of the particle as fictitious forces. The diamagnetic drift can be derived from the force due to a pressure gradient. Finally, other forces such as radiation pressure and collisions also result in drifts.
Gravitational field
A simple example of a force drift is a plasma in a gravitational field, e.g. the ionosphere. The drift velocity is
Because of the mass dependence, the gravitational drift for the electrons can normally be ignored.
The dependence on the charge of the particle implies that the drift direction is opposite for ions as for electrons, resulting in a current. In a fluid picture, it is this current crossed with the magnetic field that provides that force counteracting the applied force.
Electric field
This drift, often called the E×B (E-cross-B) drift, is a special case because the electric force on a particle depends on its charge (as opposed, for example, to the gravitational force considered above). As a result, ions (of whatever mass and charge) and electrons both move in the same direction at the same speed, so there is no net current (assuming quasineutrality of the plasma). In the context of special relativity, in the frame moving with this velocity, the electric field vanishes. The value of the drift velocity is given by
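A sketch of the standard expression (the charge cancels, which is why the drift is the same for ions and electrons):
\[
\mathbf{v}_E = \frac{\mathbf{E}\times\mathbf{B}}{B^2}.
\]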
Nonuniform E
If the electric field is not uniform, the above formula is modified to read
Nonuniform B
Guiding center drifts may also result not only from external forces but also from non-uniformities in the magnetic field. It is convenient to express these drifts in terms of the parallel and perpendicular kinetic energies
In that case, the explicit mass dependence is eliminated. If the ions and electrons have similar temperatures, then they also have similar, though oppositely directed, drift velocities.
Grad-B drift
When a particle moves into a larger magnetic field, the curvature of its orbit becomes tighter, transforming the otherwise circular orbit into a cycloid. The drift velocity is
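The standard form of this grad-B drift (a sketch; v⊥ is the particle speed perpendicular to B) is
\[
\mathbf{v}_{\nabla B} = \frac{m v_\perp^2}{2qB}\,\frac{\mathbf{B}\times\nabla B}{B^2}.
\]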
Curvature drift
In order for a charged particle to follow a curved field line, it needs a drift velocity out of the plane of curvature to provide the necessary centripetal force. This velocity is
where is the radius of curvature pointing outwards, away from the center of the circular arc which best approximates the curve at that point.
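The standard form of this curvature drift (a sketch, with Rc the radius-of-curvature vector just defined and v∥ the particle speed along the field) is
\[
\mathbf{v}_R = \frac{m v_\parallel^2}{qB^2}\,\frac{\mathbf{R}_c\times\mathbf{B}}{R_c^2}.
\]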
where is the unit vector in the direction of the magnetic field. This drift can be decomposed into the sum of the curvature drift and the term
In the important limit of stationary magnetic field and weak electric field, the inertial drift is dominated by the curvature drift term.
Curved vacuum drift
In the limit of small plasma pressure, Maxwell's equations provide a relationship between gradient and curvature that allows the corresponding drifts to be combined as follows
For a species in thermal equilibrium, can be replaced by ( for and for ).
The expression for the grad-B drift above can be rewritten for the case when is due to the curvature.
This is most easily done by realizing that in a vacuum, Ampere's Law is
. In cylindrical coordinates chosen such that the azimuthal direction is parallel to the magnetic field and the radial direction is parallel to the gradient of the field, this becomes
Since is a constant, this implies that
and the grad-B drift velocity can be written
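Putting the pieces together, the combined curved-vacuum result quoted in most plasma physics texts (a sketch, valid when the field gradient is tied to the curvature as above) is
\[
\mathbf{v}_R + \mathbf{v}_{\nabla B} = \frac{m}{q}\left(v_\parallel^2 + \tfrac{1}{2}v_\perp^2\right)\frac{\mathbf{R}_c\times\mathbf{B}}{R_c^2\,B^2}.
\]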
Polarization drift
A time-varying electric field also results in a drift given by
Obviously this drift is different from the others in that it cannot continue indefinitely. Normally an oscillatory electric field results in a polarization drift oscillating 90 degrees out of phase. Because of the mass dependence, this effect is also called the inertia drift. Normally the polarization drift can be neglected for electrons because of their relatively small mass.
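For reference, the standard expression for the drift described above (a sketch; E⊥ is the component of the electric field perpendicular to B) is
\[
\mathbf{v}_p = \frac{m}{qB^2}\,\frac{d\mathbf{E}_\perp}{dt}.
\]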
Diamagnetic drift
The diamagnetic drift is not actually a guiding center drift. A pressure gradient does not cause any single particle to drift. Nevertheless, the fluid velocity is defined by counting the particles moving through a reference area, and a pressure gradient results in more particles in one direction than in the other. The net velocity of the fluid is given by
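The standard fluid expression (a sketch; p is the species pressure, n the number density, q the species charge) is
\[
\mathbf{v}_D = -\frac{\nabla p\times\mathbf{B}}{qnB^2}.
\]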
Drift Currents
With the important exception of the drift, the drift velocities of differently charged particles will be different. This difference in velocities results in a current, while the mass dependence of the drift velocity can result in chemical separation.
References
Plasma theory and modeling
Del in cylindrical and spherical coordinates | This is a list of some vector calculus formulae for working with common curvilinear coordinate systems.
Notes
This article uses the standard notation ISO 80000-2, which supersedes ISO 31-11, for spherical coordinates (other sources may reverse the definitions of θ and φ):
The polar angle is denoted by θ: it is the angle between the z-axis and the radial vector connecting the origin to the point in question.
The azimuthal angle is denoted by φ: it is the angle between the x-axis and the projection of the radial vector onto the xy-plane.
The function atan2(y, x) can be used instead of the mathematical function arctan(y/x) owing to its domain and image. The classical arctan function has an image of (−π/2, π/2), whereas atan2 is defined to have an image of (−π, π].
Coordinate conversions
Note that the operation arctan(y/x) must be interpreted as the two-argument inverse tangent, atan2(y, x).
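As an illustration of these conversions, here is a minimal sketch using Python's standard math module and the ISO convention stated above (theta the polar angle, phi the azimuthal angle obtained from atan2):

import math

def cartesian_to_spherical(x, y, z):
    """Return (r, theta, phi): r = radius, theta = polar angle from the
    z-axis, phi = azimuthal angle in the xy-plane via atan2."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def cartesian_to_cylindrical(x, y, z):
    """Return (rho, phi, z) cylindrical coordinates."""
    return math.hypot(x, y), math.atan2(y, x), z

print(cartesian_to_spherical(1.0, 1.0, 1.0))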
Unit vector conversions
Del formula
This page uses θ for the polar angle and φ for the azimuthal angle, which is common notation in physics. The source that is used for these formulae uses θ for the azimuthal angle and φ for the polar angle, which is common mathematical notation. In order to get the mathematics formulae, switch θ and φ in the formulae shown in the table above.
Defined in Cartesian coordinates as . An alternative definition is .
Defined in Cartesian coordinates as . An alternative definition is .
Calculation rules
(Lagrange's formula for del)
(From )
Cartesian derivation
The expressions for and are found in the same way.
Cylindrical derivation
Spherical derivation
Unit vector conversion formula
The unit vector of a coordinate parameter u is defined in such a way that a small positive change in u causes the position vector to change in direction.
Therefore,
where is the arc length parameter.
For two sets of coordinate systems and , according to chain rule,
Now, we isolate the th component. For , let . Then divide on both sides by to get:
See also
Del
Orthogonal coordinates
Curvilinear coordinates
Vector fields in cylindrical and spherical coordinates
References
External links
Maxima Computer Algebra system scripts to generate some of these operators in cylindrical and spherical coordinates.
Vector calculus
Coordinate systems
Intermolecular force | An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction
or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics.
The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling.
Attractive intermolecular forces are categorized into the following types:
Hydrogen bonding
Ion–dipole forces and ion–induced dipole force
Cation–π, σ–π and π–π bonding
Van der Waals forces – Keesom force, Debye force, and London dispersion force
Cation–cation bonding
Salt bridge (protein and supramolecular)
Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity, pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential.
In the broadest sense, intermolecular interaction can be understood as interaction between any particles (molecules, atoms, ions and molecular ions) in which chemical bonds (that is, ionic, covalent or metallic bonds) are not formed. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true: for example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme, or between a molecule and a catalyst, but several such weak interactions, in the required spatial configuration of the enzyme's active center, lead to significant restructuring that changes the energy state of the molecules or substrate, which ultimately leads to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme; the importance of these interactions is therefore especially great in biochemistry and molecular biology, and forms the basis of enzymology.)
Hydrogen bonding
A hydrogen bond is an extreme form of dipole-dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in H bonding is termed the acceptor molecule. The number of active pairs is equal to the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has.
Water molecules can form four hydrogen bonds: the oxygen atom's two lone pairs each interact with a hydrogen of a neighbouring molecule, and each of the molecule's own hydrogen atoms interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
Salt bridge
The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge.
It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contact distances determined only by the van der Waals radii of the ions.
Inorganic as well as organic ions display, in water at moderate ionic strength I, similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, for example, a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2×5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
Dipole–dipole and similar interactions
Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3).
Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole.
The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces".
Ion–dipole and ion–induced dipole forces
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding.
An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water, which gives rise to the hydration enthalpy. The polar water molecules surround the ions in water, and the energy released during the process is known as the hydration enthalpy. The interaction is of immense importance in explaining the stability of various ions (like Cu2+) in water.
An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule.
Van der Waals forces
The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies.
Keesom force (permanent dipole – permanent dipole)
The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent.
They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle averaged interaction is given by the following equation:
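A sketch of this angle-averaged Keesom energy follows, with the symbols defined below and d1, d2 the dipole moments of the two molecules (the numerical prefactor differs between sources: 2/3 for the Boltzmann-averaged energy, 1/3 for the corresponding free energy):
\[
V_{\text{Keesom}}(r) = -\,\frac{d_1^2\, d_2^2}{3\,(4\pi\varepsilon_0\varepsilon_r)^2\,k_B T\, r^6}\, ,
\]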
where d = electric dipole moment, = permittivity of free space, = dielectric constant of surrounding material, T = temperature, = Boltzmann constant, and r = distance between molecules.
Debye force (permanent dipoles–induced dipoles)
The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions.
The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and an induced dipole (induced by the former di/multipole) on another. This interaction is called the Debye force, named after Peter J. W. Debye.
One example of an induction interaction between permanent dipole and induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle averaged interaction is given by the following equation:
where = polarizability.
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force.
London dispersion force (fluctuating dipole–induced dipole interaction)
The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom–atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range.
Relative strength of forces
This comparison is approximate. The actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But this is not so for large moving systems like enzyme molecules interacting with substrate molecules. Here numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which these bonds cause some covalent bonds to be broken while others are formed; in this way proceed the thousands of enzymatic reactions that are so important for living organisms.
Effect on the behavior of gases
Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor).
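The Lennard-Jones potential just mentioned captures exactly this combination of short-range repulsion and long-range attraction; a minimal Python sketch (reduced units; epsilon and sigma are placeholder well-depth and size parameters):

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential: 4*eps*((sigma/r)**12 - (sigma/r)**6).
    Strongly repulsive for r << sigma, weakly attractive at larger r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential minimum (strongest attraction) sits at r = 2**(1/6) * sigma:
r_min = 2.0 ** (1.0 / 6.0)
print(lennard_jones(r_min))  # -> -1.0, i.e. a well depth of epsilon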
In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature.
When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.
Quantum mechanical theories
Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful methods found in quantum chemistry for visualizing this kind of intermolecular interaction is the non-covalent interaction index, which is based on the electron density of the system. London dispersion forces play a large role in this.
Concerning electron density topology, methods based on electron density gradients have emerged recently, notably the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology.
See also
Ionic bonding
Salt bridges
Coomber's relationship
Force field (chemistry)
Hydrophobic effect
Intramolecular force
Molecular solid
Polymer
Quantum chemistry computer programs
van der Waals force
Comparison of software for molecular mechanics modeling
Non-covalent interactions
Solvation
References
Intermolecular forces
Chemical bonding
Johannes Diderik van der Waals
Geometrized unit system | A geometrized unit system or geometrodynamic unit system is a system of natural units in which the base physical units are chosen so that the speed of light in vacuum, c, and the gravitational constant, G, are set equal to unity.
The geometrized unit system is not a completely defined system. Some systems are geometrized unit systems in the sense that they set these, in addition to other constants, to unity, for example Stoney units and Planck units.
This system is useful in physics, especially in the special and general theories of relativity. All physical quantities are identified with geometric quantities such as areas, lengths, dimensionless numbers, path curvatures, or sectional curvatures.
Many equations in relativistic physics appear simpler when expressed in geometric units, because all occurrences of G and of c drop out. For example, the Schwarzschild radius of a nonrotating uncharged black hole with mass m becomes r = 2m. For this reason, many books and papers on relativistic physics use geometric units. An alternative system of geometrized units is often used in particle physics and cosmology, in which 8πG = 1 instead. This introduces an additional factor of 8π into Newton's law of universal gravitation but simplifies the Einstein field equations, the Einstein–Hilbert action, the Friedmann equations and the Newtonian Poisson equation by removing the corresponding factor.
Definition
Geometrized units were defined in the book Gravitation by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler with the speed of light c, the gravitational constant G, and the Boltzmann constant kB all set to 1. Some authors refer to these units as geometrodynamic units.
In geometric units, every time interval is interpreted as the distance travelled by light during that given time interval. That is, one second is interpreted as one light-second, so time has the geometric units of length. This is dimensionally consistent with the notion that, according to the kinematical laws of special relativity, time and distance are on an equal footing.
Energy and momentum are interpreted as components of the four-momentum vector, and mass is the magnitude of this vector, so in geometric units these must all have the dimension of length. We can convert a mass expressed in kilograms to the equivalent mass expressed in metres by multiplying by the conversion factor G/c2. For example, the Sun's mass of about 2×10^30 kg in SI units is equivalent to about 1.48 km. This is half the Schwarzschild radius of a one solar mass black hole. All other conversion factors can be worked out by combining these two.
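A minimal sketch of this conversion (approximate, rounded values for G, c and the solar mass):

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def mass_to_metres(mass_kg):
    """Geometrized mass: multiply by the conversion factor G/c**2."""
    return G * mass_kg / c**2

M_SUN = 1.989e30  # kg, approximate
print(mass_to_metres(M_SUN))  # ~1.48e3 m, i.e. about 1.48 km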
The small numerical size of the few conversion factors reflects the fact that relativistic effects are only noticeable when large masses or high speeds are considered.
Conversions
Listed below are all conversion factors that are useful to convert between all combinations of the SI base units, and if not possible, between them and their unique elements, because ampere is a dimensionless ratio of two lengths such as [C/s], and candela (1/683 [W/sr]) is a dimensionless ratio of two dimensionless ratios such as ratio of two volumes [kg⋅m2/s3] = [W] and ratio of two areas [m2/m2] = [sr], while mole is only a dimensionless Avogadro number of entities such as atoms or particles:
References
See Appendix F
External links
Conversion factors for energy equivalents
General relativity
Systems of units
Natural units
Affinity laws | The affinity laws (also known as the "Fan Laws" or "Pump Laws") for pumps/fans are used in hydraulics, hydronics and/or HVAC to express the relationship between variables involved in pump or fan performance (such as head, volumetric flow rate, shaft speed) and power. They apply to pumps, fans, and hydraulic turbines. In these rotary implements, the affinity laws apply both to centrifugal and axial flows.
The laws are derived using the Buckingham π theorem. The affinity laws are useful as they allow prediction of the head discharge characteristic of a pump or fan from a known characteristic measured at a different speed or impeller diameter. The only requirement is that the two pumps or fans are dynamically similar, that is, the ratios of the fluid forces are the same. It is also required that the two impellers, whether differing in speed or diameter, are operating at the same efficiency.
Law 1. With impeller diameter (D) held constant:
Law 1a. Flow is proportional to shaft speed:
Law 1b. Pressure or Head is proportional to the square of shaft speed:
Law 1c. Power is proportional to the cube of shaft speed:
Law 2. With shaft speed (N) held constant:
Law 2a. Flow is proportional to the impeller diameter:
Law 2b. Pressure or Head is proportional to the square of the impeller diameter:
Law 2c. Power is proportional to the cube of the impeller diameter:
where
is the volumetric flow rate (e.g. CFM, GPM or L/s)
is the impeller diameter (e.g. in or mm)
is the shaft rotational speed (e.g. rpm)
is the pressure or head developed by the fan/pump (e.g. psi or Pascal)
is the shaft power (e.g. W).
These laws assume that the pump/fan efficiency remains constant, i.e. η1 = η2, which is rarely exactly true, but can be a good approximation when used over appropriate frequency or diameter ranges (i.e., a fan will not move anywhere near 1000 times as much air when spun at 1000 times its designed operating speed, but the air movement may be increased by 99% when the operating speed is only doubled). The exact relationship between speed, diameter, and efficiency depends on the particulars of the individual fan or pump design. Product testing or computational fluid dynamics become necessary if the range of acceptability is unknown, or if a high level of accuracy is required in the calculation. Interpolation from accurate data is also more accurate than the affinity laws. When applied to pumps, the laws work well for the constant diameter, variable speed case (Law 1) but are less accurate for the constant speed, variable impeller diameter case (Law 2).
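A minimal sketch of Law 1 in code (constant impeller diameter, constant efficiency assumed; the ratings used in the example are hypothetical):

def affinity_law_1(q1, h1, p1, n1, n2):
    """Scale flow q, head h and power p from shaft speed n1 to n2
    with impeller diameter held constant (Laws 1a-1c)."""
    ratio = n2 / n1
    return q1 * ratio, h1 * ratio**2, p1 * ratio**3

# Doubling the speed of a fan rated 1000 CFM, 2 inH2O, 0.5 hp at 1800 rpm:
print(affinity_law_1(1000.0, 2.0, 0.5, 1800.0, 3600.0))
# -> (2000.0, 8.0, 4.0): twice the flow, four times the head, eight times the power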
For radial flow centrifugal pumps, it is common industry practice to reduce the impeller diameter by "trimming", whereby the outer diameter of a particular impeller is reduced by machining to alter the performance of the pump. In this particular industry it is also common to refer to the mathematical approximations that relate the volumetric flow rate, trimmed impeller diameter, shaft rotational speed, developed head, and power as the "affinity laws". Because trimming an impeller changes the fundamental shape of the impeller (increasing the specific speed), the relationships shown in Law 2 cannot be utilized in this scenario. In this case, the industry looks to the following relationships, which are a better approximation of these variables when dealing with impeller trimming.
With shaft speed (N) held constant and for small variations in impeller diameter via trimming:
The volumetric flow rate varies directly with the trimmed impeller diameter: Q1/Q2 = D1/D2
The pump developed head (the total dynamic head) varies with the square of the trimmed impeller diameter: H1/H2 = (D1/D2)²
The power varies with the cube of the trimmed impeller diameter: P1/P2 = (D1/D2)³
where
Q is the volumetric flow rate (e.g. CFM, GPM or L/s)
D is the impeller diameter (e.g. in or mm)
N is the shaft rotational speed (e.g. rpm)
H is the total dynamic head developed by the pump (e.g. m or ft)
P is the shaft power (e.g. W or HP)
See also
Centripetal force
References
Hydraulics
Pumps
Ventilation fans
Turbines | 0.775211 | 0.985897 | 0.764279 |
Bouncing ball | The physics of a bouncing ball concerns the physical behaviour of bouncing balls, particularly its motion before, during, and after impact against the surface of another body. Several aspects of a bouncing ball's behaviour serve as an introduction to mechanics in high school or undergraduate level physics courses. However, the exact modelling of the behaviour is complex and of interest in sports engineering.
The motion of a ball is generally described by projectile motion (which can be affected by gravity, drag, the Magnus effect, and buoyancy), while its impact is usually characterized through the coefficient of restitution (which can be affected by the nature of the ball, the nature of the impacting surface, the impact velocity, rotation, and local conditions such as temperature and pressure). To ensure fair play, many sports governing bodies set limits on the bounciness of their ball and forbid tampering with the ball's aerodynamic properties. The bounciness of balls has been a feature of sports as ancient as the Mesoamerican ballgame.
Forces during flight and effect on motion
The motion of a bouncing ball obeys projectile motion. Many forces act on a real ball, namely the gravitational force (FG), the drag force due to air resistance (FD), the Magnus force due to the ball's spin (FM), and the buoyant force (FB). In general, one has to use Newton's second law taking all forces into account to analyze the ball's motion:
FG + FD + FM + FB = ma = m dv/dt = m d²r/dt²,
where m is the ball's mass, and a, v, and r represent the ball's acceleration, velocity, and position over time t.
Gravity
The gravitational force is directed downwards and is equal to
FG = mg,
where m is the mass of the ball, and g is the gravitational acceleration, which on Earth varies between about 9.78 m/s² and about 9.83 m/s². Because the other forces are usually small, the motion is often idealized as being only under the influence of gravity. If only the force of gravity acts on the ball, the mechanical energy will be conserved during its flight. In this idealized case, the equations of motion are given by
a = −g ĵ,
v = v0 − gt ĵ,
r = r0 + v0t − ½gt² ĵ,
where ĵ is the upward vertical unit vector, a, v, and r denote the acceleration, velocity, and position of the ball, and v0 and r0 are the initial velocity and position of the ball, respectively.
More specifically, if the ball is bounced at an angle θ with the ground, the motion in the x- and y-axes (representing horizontal and vertical motion, respectively) is described by
vx = v0 cos θ,  x(t) = v0 t cos θ,
vy = v0 sin θ − gt,  y(t) = v0 t sin θ − ½gt².
The equations imply that the maximum height (H), range (R) and time of flight (T) of a ball bouncing on a flat surface are given by
H = v0² sin²θ / (2g),
R = (v0²/g) sin 2θ,
T = (2v0/g) sin θ.
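The following Python sketch evaluates these drag-free expressions for an assumed launch speed and angle; the numbers are arbitrary examples, not measurements.

```python
import math

def bounce_kinematics(v0, theta_deg, g=9.81):
    """Drag-free maximum height H, range R and time of flight T
    for launch speed v0 (m/s) at angle theta_deg above the ground."""
    theta = math.radians(theta_deg)
    H = (v0 * math.sin(theta))**2 / (2 * g)
    R = v0**2 * math.sin(2 * theta) / g
    T = 2 * v0 * math.sin(theta) / g
    return H, R, T

H, R, T = bounce_kinematics(v0=10.0, theta_deg=45.0)
print(f"H = {H:.2f} m, R = {R:.2f} m, T = {T:.2f} s")
# -> H = 2.55 m, R = 10.19 m, T = 1.44 s
```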
Further refinements to the motion of the ball can be made by taking into account air resistance (and related effects such as drag and wind), the Magnus effect, and buoyancy. Because lighter balls accelerate more readily, their motion tends to be affected more by such forces.
Drag
Air flow around the ball can be either laminar or turbulent depending on the Reynolds number (Re), defined as:
Re = ρvD/μ,
where ρ is the density of air, μ the dynamic viscosity of air, D the diameter of the ball, and v the velocity of the ball through air. At room temperature, ρ is approximately 1.2 kg/m³ and μ is approximately 1.8 × 10⁻⁵ Pa·s.
If the Reynolds number is very low (Re < 1), the drag force on the ball is described by Stokes' law:
FD = 6πμrv,
where r is the radius of the ball. This force acts in opposition to the ball's direction of motion (in the direction of −v). For most sports balls, however, the Reynolds number will be between 10⁴ and 10⁵ and Stokes' law does not apply. At these higher values of the Reynolds number, the drag force on the ball is instead described by the drag equation:
FD = ½ρCdAv²,
where Cd is the drag coefficient, and A the cross-sectional area of the ball.
Drag will cause the ball to lose mechanical energy during its flight, and will reduce its range and height, while crosswinds will deflect it from its original path. Both effects have to be taken into account by players in sports such as golf.
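As a rough numerical check, the Python sketch below computes the Reynolds number and the quadratic drag force for a football-sized ball; the diameter, speed, and drag coefficient are assumed illustrative values, as are the air properties.

```python
import math

rho = 1.2       # air density, kg/m^3 (approximate, near room temperature)
mu = 1.8e-5     # dynamic viscosity of air, Pa.s (approximate)

def reynolds(v, d):
    """Reynolds number for a ball of diameter d (m) moving at v (m/s)."""
    return rho * v * d / mu

def drag_force(v, d, cd=0.47):   # cd ~ 0.47 is a typical smooth-sphere value
    """Quadratic drag force 0.5 * rho * Cd * A * v^2."""
    area = math.pi * (d / 2)**2
    return 0.5 * rho * cd * area * v**2

d, v = 0.22, 20.0                # assumed ball diameter (m) and speed (m/s)
print(f"Re  ~ {reynolds(v, d):.2e}")      # ~ 2.9e5, well beyond the Stokes regime
print(f"F_D ~ {drag_force(v, d):.2f} N")  # ~ 4.3 N of drag
```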
Magnus effect
The spin of the ball will affect its trajectory through the Magnus effect. According to the Kutta–Joukowski theorem, for a spinning sphere with an inviscid flow of air, the Magnus force is equal to
where r is the radius of the ball, ω the angular velocity (or spin rate) of the ball, ρ the density of air, and v the velocity of the ball relative to air. This force is directed perpendicular to the motion and perpendicular to the axis of rotation (in the direction of ω × v). The force is directed upwards for backspin and downwards for topspin. In reality, flow is never inviscid, and the Magnus lift is better described by
FM = ½ρCLAv²,
where ρ is the density of air, CL the lift coefficient, A the cross-sectional area of the ball, and v the velocity of the ball relative to air. The lift coefficient is a complex factor which depends amongst other things on the ratio rω/v, the Reynolds number, and surface roughness. In certain conditions, the lift coefficient can even be negative, changing the direction of the Magnus force (reverse Magnus effect).
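The viscous-flow lift expression above is easy to evaluate directly; in the Python sketch below the lift coefficient, ball size, and speed are assumed example values, since CL itself must come from experiment.

```python
import math

def magnus_lift(v, d, cl, rho=1.2):
    """Magnus force magnitude F = 0.5 * rho * CL * A * v^2 for a ball of
    diameter d (m) moving at v (m/s); cl is an empirical lift coefficient."""
    area = math.pi * (d / 2)**2
    return 0.5 * rho * cl * area * v**2

# Example: a spinning football-sized ball with an assumed CL of 0.25.
print(f"F_M ~ {magnus_lift(v=25.0, d=0.22, cl=0.25):.2f} N")  # ~ 3.6 N
```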
In sports like tennis or volleyball, the player can use the Magnus effect to control the ball's trajectory (e.g. via topspin or backspin) during flight. In golf, the effect is responsible for slicing and hooking which are usually a detriment to the golfer, but also helps with increasing the range of a drive and other shots. In baseball, pitchers use the effect to create curveballs and other special pitches.
Ball tampering is often illegal, and is often at the centre of cricket controversies such as the one between England and Pakistan in August 2006. In baseball, the term 'spitball' refers to the illegal coating of the ball with spit or other substances to alter the aerodynamics of the ball.
Buoyancy
Any object immersed in a fluid such as water or air will experience an upwards buoyancy. According to Archimedes' principle, this buoyant force is equal to the weight of the fluid displaced by the object. In the case of a sphere, this force is equal to
FB = (4/3)πr³ρg,
where r is the radius of the ball, ρ the density of the fluid, and g the gravitational acceleration.
The buoyant force is usually small compared to the drag and Magnus forces and can often be neglected. However, in the case of a basketball, the buoyant force can amount to about 1.5% of the ball's weight. Since buoyancy is directed upwards, it will act to increase the range and height of the ball.
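The roughly 1.5% figure for a basketball can be checked with the sphere formula above; the Python sketch below assumes a ball radius of about 0.12 m and a mass of about 0.62 kg, which are typical rather than official values.

```python
import math

rho_air = 1.2          # kg/m^3, approximate air density
g = 9.81               # m/s^2

def buoyant_force(radius):
    """Archimedes buoyancy on a sphere fully immersed in air."""
    return (4.0 / 3.0) * math.pi * radius**3 * rho_air * g

r, m = 0.12, 0.62      # assumed basketball radius (m) and mass (kg)
fb = buoyant_force(r)
print(f"Buoyancy ~ {fb:.3f} N, i.e. {100 * fb / (m * g):.1f}% of the weight")
# -> roughly 0.085 N, on the order of 1-1.5% of the ball's weight
```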
Impact
When a ball impacts a surface, the surface recoils and vibrates, as does the ball, creating both sound and heat, and the ball loses kinetic energy. Additionally, the impact can impart some rotation to the ball, transferring some of its translational kinetic energy into rotational kinetic energy. This energy loss is usually characterized (indirectly) through the coefficient of restitution (or COR, denoted e):
e = |vf − uf| / |vi − ui|,
where vf and vi are the final and initial velocities of the ball, and uf and ui are the final and initial velocities of the impacting surface, respectively. In the specific case where a ball impacts on an immovable surface, the COR simplifies to
e = |vf| / |vi|.
For a ball dropped against a floor, the COR will therefore vary between 0 (no bounce, total loss of energy) and 1 (perfectly bouncy, no energy loss). A COR value below 0 or above 1 is theoretically possible, but would indicate that the ball went through the surface, or that the surface was not "relaxed" when the ball impacted it, as in the case of a ball landing on a spring-loaded platform.
To analyze the vertical and horizontal components of the motion, the COR is sometimes split up into a normal COR (ey), and tangential COR (ex), defined as
where r and ω denote the radius and angular velocity of the ball, while R and Ω denote the radius and angular velocity of the impacting surface (such as a baseball bat). In particular rω is the tangential velocity of the ball's surface, while RΩ is the tangential velocity of the impacting surface. These are especially of interest when the ball impacts the surface at an oblique angle, or when rotation is involved.
For a straight drop on the ground with no rotation, with only the force of gravity acting on the ball, the COR can be related to several other quantities by:
e = |vf| / |vi| = √(Kf/Ki) = √(Uf/Ui) = √(Hf/Hi) = Tf/Ti.
Here, K and U denote the kinetic and potential energy of the ball, H is the maximum height of the ball, and T is the time of flight of the ball. The 'i' and 'f' subscripts refer to the initial (before impact) and final (after impact) states of the ball. Likewise, the energy loss at impact can be related to the COR by
ΔK/Ki = (Ki − Kf)/Ki = 1 − e².
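These relations are easy to exercise numerically; the Python sketch below infers the COR from an assumed drop height and rebound height and reports the fraction of kinetic energy lost (example heights only).

```python
import math

def cor_from_heights(h_drop, h_bounce):
    """COR for a vertical drop with no spin: e = sqrt(h_bounce / h_drop)."""
    return math.sqrt(h_bounce / h_drop)

def energy_loss_fraction(e):
    """Fraction of the incoming kinetic energy lost in the impact."""
    return 1.0 - e**2

e = cor_from_heights(h_drop=1.0, h_bounce=0.64)   # assumed heights in metres
print(f"e = {e:.2f}, energy lost = {100 * energy_loss_fraction(e):.0f}%")
# -> e = 0.80, energy lost = 36%
```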
The COR of a ball can be affected by several things, mainly
the nature of the impacting surface (e.g. grass, concrete, wire mesh)
the material of the ball (e.g. leather, rubber, plastic)
the pressure inside the ball (if hollow)
the amount of rotation induced in the ball at impact
the impact velocity
External conditions such as temperature can change the properties of the impacting surface or of the ball, making them either more flexible or more rigid. This will, in turn, affect the COR. In general, the ball will deform more at higher impact velocities and will accordingly lose more of its energy, decreasing its COR.
Spin and angle of impact
Upon impacting the ground, some translational kinetic energy can be converted to rotational kinetic energy and vice versa depending on the ball's impact angle and angular velocity. If the ball moves horizontally at impact, friction will have a "translational" component in the direction opposite to the ball's motion. In the figure, the ball is moving to the right, and thus it will have a translational component of friction pushing the ball to the left. Additionally, if the ball is spinning at impact, friction will have a "rotational" component in the direction opposite to the ball's rotation. On the figure, the ball is spinning clockwise, and the point impacting the ground is moving to the left with respect to the ball's center of mass. The rotational component of friction is therefore pushing the ball to the right. Unlike the normal force and the force of gravity, these frictional forces will exert a torque on the ball, and change its angular velocity (ω).
Three situations can arise:
If a ball is propelled forward with backspin, the translational and rotational friction will act in the same directions. The ball's angular velocity will be reduced after impact, as will its horizontal velocity, and the ball is propelled upwards, possibly even exceeding its original height. It is also possible for the ball to start spinning in the opposite direction, and even bounce backwards.
If a ball is propelled forward with topspin, the translational and rotational friction will act in opposite directions. What exactly happens depends on which of the two components dominates.
If the ball is spinning much more rapidly than it was moving, rotational friction will dominate. The ball's angular velocity will be reduced after impact, but its horizontal velocity will be increased. The ball will be propelled forward but will not exceed its original height, and will keep spinning in the same direction.
If the ball is moving much more rapidly than it was spinning, translational friction will dominate. The ball's angular velocity will be increased after impact, but its horizontal velocity will be decreased. The ball will not exceed its original height and will keep spinning in the same direction.
If the surface is inclined by some amount θ, the entire diagram would be rotated by θ, but the force of gravity would remain pointing downwards (forming an angle θ with the surface). Gravity would then have a component parallel to the surface, which would contribute to friction, and thus contribute to rotation.
In racquet sports such as table tennis or racquetball, skilled players will use spin (including sidespin) to suddenly alter the ball's direction when it impacts surface, such as the ground or their opponent's racquet. Similarly, in cricket, there are various methods of spin bowling that can make the ball deviate significantly off the pitch.
Non-spherical balls
The bounce of an oval-shaped ball (such as those used in gridiron football or rugby football) is in general much less predictable than the bounce of a spherical ball. Depending on the ball's alignment at impact, the normal force can act ahead or behind the centre of mass of the ball, and friction from the ground will depend on the alignment of the ball, as well as its rotation, spin, and impact velocity. Where the forces act with respect to the centre of mass of the ball changes as the ball rolls on the ground, and all forces can exert a torque on the ball, including the normal force and the force of gravity. This can cause the ball to bounce forward, bounce back, or sideways. Because it is possible to transfer some rotational kinetic energy into translational kinetic energy, it is even possible for the COR to be greater than 1, or for the forward velocity of the ball to increase upon impact.
Multiple stacked balls
A popular demonstration involves the bounce of multiple stacked balls. If a tennis ball is stacked on top of a basketball, and the two of them are dropped at the same time, the tennis ball will bounce much higher than it would have if dropped on its own, even exceeding its original release height. The result is surprising as it apparently violates conservation of energy. However, upon closer inspection, the basketball does not bounce as high as it would have if the tennis ball had not been on top of it, and transferred some of its energy into the tennis ball, propelling it to a greater height.
The usual explanation involves considering two separate impacts: the basketball impacting with the floor, and then the basketball impacting with the tennis ball. Assuming perfectly elastic collisions, the basketball impacting the floor at 1 m/s would rebound at 1 m/s. The tennis ball going at 1 m/s would then have a relative impact velocity of 2 m/s, which means it would rebound at 2 m/s relative to the basketball, or 3 m/s relative to the floor, and triple its rebound velocity compared to impacting the floor on its own. This implies that the ball would bounce to 9 times its original height.
In reality, due to inelastic collisions, the tennis ball will increase its velocity and rebound height by a smaller factor, but still will bounce faster and higher than it would have on its own.
While the assumption of separate impacts is not actually valid (the balls remain in close contact with each other during most of the impact), this model will nonetheless reproduce experimental results with good agreement, and is often used to understand more complex phenomena such as the core collapse of supernovae, or gravitational slingshot manoeuvres.
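The two-impact model is easy to reproduce with finite masses rather than the idealized infinitely heavy basketball; the Python sketch below uses the standard one-dimensional elastic-collision formulas with assumed masses of 0.60 kg and 0.058 kg.

```python
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D perfectly elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

m_basket, m_tennis = 0.60, 0.058   # assumed masses in kg
v = 1.0                            # impact speed in m/s; up is taken as positive

# Step 1: the basketball bounces elastically off the floor: it now moves up at 1 m/s.
v_basket = v
# Step 2: the rising basketball hits the tennis ball, which is still falling at 1 m/s.
v_basket_f, v_tennis_f = elastic_collision(m_basket, v_basket, m_tennis, -v)
print(f"Basketball: {v_basket_f:.2f} m/s up, tennis ball: {v_tennis_f:.2f} m/s up")
# The tennis ball rebounds at ~2.65 m/s, compared with 3 m/s in the
# infinitely heavy basketball limit described in the text.
```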
Sport regulations
Several sports governing bodies regulate the bounciness of a ball through various ways, some direct, some indirect.
AFL: Regulates the gauge pressure of the football to be between and .
FIBA: Regulates the gauge pressure so the basketball bounces between 1035 mm and 1085 mm (bottom of the ball) when it is dropped from a height of 1800 mm (bottom of the ball). This corresponds to a COR between 0.758 and 0.776, as checked numerically in the sketch after this list.
FIFA: Regulates the gauge pressure of the soccer ball to be between 0.6 and 1.1 atm at sea level (61 to 111 kPa).
FIVB: Regulates the gauge pressure of the volleyball to be between 0.300 and 0.325 kgf/cm² (29.4 to 31.9 kPa) for indoor volleyball, and 0.175 to 0.225 kgf/cm² (17.2 to 22.1 kPa) for beach volleyball.
ITF: Regulates the height of the tennis ball bounce when dropped on a "smooth, rigid and horizontal block of high mass". Different types of ball are allowed for different types of surfaces. When dropped from a height of , the bounce must be for Type 1 balls, for Type 2 and Type 3 balls, and for High Altitude balls. This roughly corresponds to a COR of 0.735–0.775 (Type 1 ball), 0.728–0.762 (Type 2 & 3 balls), and 0.693–0.728 (High Altitude balls) when dropped on the testing surface.
ITTF: Regulates the playing surface so that the table tennis ball bounces approximately 23 cm when dropped from a height of 30 cm. This roughly corresponds to a COR of about 0.876 against the playing surface.
NBA: Regulates the gauge pressure of the basketball to be between 7.5 and 8.5 psi (51.7 to 58.6 kPa).
NFL: Regulates the gauge pressure of the American football to be between 12.5 and 13.5 psi (86 to 93 kPa).
R&A/USGA: Limits the COR of the golf ball directly, which should not exceed 0.83 against a golf club.
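The FIBA and ITTF figures in the list above can be cross-checked with the vertical-drop relation e = √(h_bounce/h_drop) from the Impact section; the Python sketch below reproduces the quoted COR values from the drop and rebound heights given in the list.

```python
import math

def cor(h_bounce_mm, h_drop_mm):
    """COR inferred from a drop test: e = sqrt(rebound height / drop height)."""
    return math.sqrt(h_bounce_mm / h_drop_mm)

# FIBA basketball: bounce between 1035 mm and 1085 mm from an 1800 mm drop.
print(f"FIBA COR: {cor(1035, 1800):.3f} to {cor(1085, 1800):.3f}")  # ~0.758 to 0.776
# ITTF table tennis: roughly 230 mm bounce from a 300 mm drop onto the table.
print(f"ITTF COR: {cor(230, 300):.3f}")                             # ~0.876
```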
The pressure of an American football was at the center of the deflategate controversy. Some sports do not regulate the bouncing properties of balls directly, but instead specify a construction method. In baseball, the introduction of a cork-based ball helped to end the dead-ball era and trigger the live-ball era.
See also
Bouncy ball
List of ball games
Quantum bouncing ball
Notes
References
Further reading
Balls
Sports rules and regulations
Classical mechanics
Kinematics
Dynamical systems
Motion (physics) | 0.770868 | 0.991451 | 0.764278 |
Quantum compass | The terminology quantum compass often relates to an instrument which measures relative position using the technique of atom interferometry. It includes an ensemble of accelerometers and gyroscope based on quantum technology to form an Inertial Navigation Unit.
Description
Work on quantum technology based inertial measurement units (IMUs), the instruments containing the gyroscopes and accelerometers, follows from early demonstrations of matter-wave based accelerometers and gyrometers. The first demonstration of onboard acceleration measurement was made on an Airbus A300 in 2011.
A quantum compass contains clouds of atoms cooled ("frozen") using lasers. By measuring the movement of these cold atoms over precise periods of time, the motion of the device can be calculated. The device would then provide accurate position in circumstances where satellite navigation is unavailable, e.g. for a fully submerged submarine.
Various defence agencies worldwide, such as the US DARPA and the United Kingdom Ministry of Defence have pushed the development of prototypes for future uses in submarines and aircraft.
In 2024, researchers from the Centre for Cold Matter of Imperial College, London, tested an experimental quantum compass on an underground train on London's District line.
References
Measuring instruments
Speed sensors
Vehicle parts
Vehicle technology | 0.777651 | 0.982801 | 0.764276 |
Electric generator | In electricity generation, a generator is a device that converts motion-based power (potential and kinetic energy) or fuel-based power (chemical energy) into electric power for use in an external circuit. Sources of mechanical energy include steam turbines, gas turbines, water turbines, internal combustion engines, wind turbines and even hand cranks. The first electromagnetic generator, the Faraday disk, was invented in 1831 by British scientist Michael Faraday. Generators provide nearly all the power for electrical grids.
In addition to motion-based electromagnetic designs, photovoltaic and fuel cell powered generators use solar power and hydrogen-based fuels, respectively, to generate electrical output.
The reverse conversion of electrical energy into mechanical energy is done by an electric motor, and motors and generators are very similar. Many motors can generate electricity from mechanical energy.
Terminology
Electromagnetic generators fall into one of two broad categories, dynamos and alternators.
Dynamos generate pulsing direct current through the use of a commutator.
Alternators generate alternating current.
Mechanically, a generator consists of a rotating part and a stationary part which together form a magnetic circuit:
Rotor: The rotating part of an electrical machine.
Stator: The stationary part of an electrical machine, which surrounds the rotor.
One of these parts generates a magnetic field, the other has a wire winding in which the changing field induces an electric current:
Field winding or field (permanent) magnets: The magnetic field-producing component of an electrical machine. The magnetic field of the dynamo or alternator can be provided by either wire windings called field coils or permanent magnets. Electrically-excited generators include an excitation system to produce the field flux. A generator using permanent magnets (PMs) is sometimes called a magneto, or a permanent magnet synchronous generator (PMSG).
Armature: The power-producing component of an electrical machine. In a generator, alternator, or dynamo, the armature windings generate the electric current, which provides power to an external circuit.
The armature can be on either the rotor or the stator, depending on the design, with the field coil or magnet on the other part.
History
Before the connection between magnetism and electricity was discovered, electrostatic generators were invented. They operated on electrostatic principles, by using moving electrically charged belts, plates and disks that carried charge to a high potential electrode. The charge was generated using either of two mechanisms: electrostatic induction or the triboelectric effect. Such generators generated very high voltage and low current. Because of their inefficiency and the difficulty of insulating machines that produced very high voltages, electrostatic generators had low power ratings, and were never used for generation of commercially significant quantities of electric power. Their only practical applications were to power early X-ray tubes, and later in some atomic particle accelerators.
Faraday disk generator
The operating principle of electromagnetic generators was discovered in the years of 1831–1832 by Michael Faraday. The principle, later called Faraday's law, is that an electromotive force is generated in an electrical conductor which encircles a varying magnetic flux.
Faraday also built the first electromagnetic generator, called the Faraday disk; a type of homopolar generator, using a copper disc rotating between the poles of a horseshoe magnet. It produced a small DC voltage.
This design was inefficient, due to self-cancelling counterflows of current in regions of the disk that were not under the influence of the magnetic field. While current was induced directly underneath the magnet, the current would circulate backwards in regions that were outside the influence of the magnetic field. This counterflow limited the power output to the pickup wires and induced waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of magnets arranged around the disc perimeter to maintain a steady field effect in one current-flow direction.
Another disadvantage was that the output voltage was very low, due to the single current path through the magnetic flux. Experimenters found that using multiple turns of wire in a coil could produce higher, more useful voltages. Since the output voltage is proportional to the number of turns, generators could be easily designed to produce any desired voltage by varying the number of turns. Wire windings became a basic feature of all subsequent generator designs.
Jedlik and the self-excitation phenomenon
Independently of Faraday, Ányos Jedlik started experimenting in 1827 with the electromagnetic rotating devices which he called electromagnetic self-rotors. In the prototype of the single-pole electric starter (finished between 1852 and 1854) both the stationary and the revolving parts were electromagnetic. It was also the discovery of the principle of dynamo self-excitation, which replaced permanent magnet designs. He also may have formulated the concept of the dynamo in 1861 (before Siemens and Wheatstone) but did not patent it as he thought he was not the first to realize this.
Direct current generators
A coil of wire rotating in a magnetic field produces a current which changes direction with each 180° rotation, an alternating current (AC). However many early uses of electricity required direct current (DC). In the first practical electric generators, called dynamos, the AC was converted into DC with a commutator, a set of rotating switch contacts on the armature shaft. The commutator reversed the connection of the armature winding to the circuit every 180° rotation of the shaft, creating a pulsing DC current. One of the first dynamos was built by Hippolyte Pixii in 1832.
The dynamo was the first electrical generator capable of delivering power for industry.
The Woolrich Electrical Generator of 1844, now in Thinktank, Birmingham Science Museum, is the earliest electrical generator used in an industrial process. It was used by the firm of Elkingtons for commercial electroplating.
The modern dynamo, fit for use in industrial applications, was invented independently by Sir Charles Wheatstone, Werner von Siemens and Samuel Alfred Varley. Varley took out a patent on 24 December 1866, while Siemens and Wheatstone both announced their discoveries on 17 January 1867, the latter delivering a paper on his discovery to the Royal Society.
The "dynamo-electric machine" employed self-powering electromagnetic field coils rather than permanent magnets to create the stator field. Wheatstone's design was similar to Siemens', with the difference that in the Siemens design the stator electromagnets were in series with the rotor, but in Wheatstone's design they were in parallel. The use of electromagnets rather than permanent magnets greatly increased the power output of a dynamo and enabled high power generation for the first time. This invention led directly to the first major industrial uses of electricity. For example, in the 1870s Siemens used electromagnetic dynamos to power electric arc furnaces for the production of metals and other materials.
The dynamo machine that was developed consisted of a stationary structure, which provides the magnetic field, and a set of rotating windings which turn within that field. On larger machines the constant magnetic field is provided by one or more electromagnets, which are usually called field coils.
Large power generation dynamos are now rarely seen due to the now nearly universal use of alternating current for power distribution. Before the adoption of AC, very large direct-current dynamos were the only means of power generation and distribution. AC has come to dominate due to the ability of AC to be easily transformed to and from very high voltages to permit low losses over large distances.
Synchronous generators (alternating current generators)
Through a series of discoveries, the dynamo was succeeded by many later inventions, especially the AC alternator, which is capable of generating alternating current. Such machines are commonly known as synchronous generators (SGs). Synchronous machines are directly connected to the grid and need to be properly synchronized during startup. Moreover, their excitation is controlled to enhance the stability of the power system.
Alternating current generating systems were known in simple forms from Michael Faraday's original discovery of the magnetic induction of electric current. Faraday himself built an early alternator. His machine was a "rotating rectangle", whose operation was heteropolar: each active conductor passed successively through regions where the magnetic field was in opposite directions.
Large two-phase alternating current generators were built by a British electrician, J. E. H. Gordon, in 1882. The first public demonstration of an "alternator system" was given by William Stanley Jr., an employee of Westinghouse Electric in 1886.
Sebastian Ziani de Ferranti established Ferranti, Thompson and Ince in 1882, to market his Ferranti-Thompson Alternator, invented with the help of renowned physicist Lord Kelvin. His early alternators produced frequencies between 100 and 300 Hz. Ferranti went on to design the Deptford Power Station for the London Electric Supply Corporation in 1887 using an alternating current system. On its completion in 1891, it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" for consumer use on each street. This basic system remains in use today around the world.
After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for varying alternating-current frequencies between sixteen and about one hundred hertz, for use with arc lighting, incandescent lighting and electric motors.
Self-excitation
As the requirements for larger scale power generation increased, a new limitation rose: the magnetic fields available from permanent magnets. Diverting a small amount of the power generated by the generator to an electromagnetic field coil allowed the generator to produce substantially more power. This concept was dubbed self-excitation.
The field coils are connected in series or parallel with the armature winding. When the generator first starts to turn, the small amount of remanent magnetism present in the iron core provides a magnetic field to get it started, generating a small current in the armature. This flows through the field coils, creating a larger magnetic field which generates a larger armature current. This "bootstrap" process continues until the magnetic field in the core levels off due to saturation and the generator reaches a steady state power output.
Very large power station generators often utilize a separate smaller generator to excite the field coils of the larger. In the event of a severe widespread power outage where islanding of power stations has occurred, the stations may need to perform a black start to excite the fields of their largest generators, in order to restore customer power service.
Specialised types of generator
Direct current (DC)
A dynamo uses commutators to produce direct current. It is self-excited, i.e. its field electromagnets are powered by the machine's own output. Other types of DC generators use a separate source of direct current to energise their field magnets.
Homopolar generator
A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder), the electrical polarity depending on the direction of rotation and the orientation of the field.
It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage. They are unusual in that they can produce tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance.
Magnetohydrodynamic (MHD) generator
A magnetohydrodynamic generator directly extracts electric power from moving hot gases through a magnetic field, without the use of rotating electromagnetic machinery. MHD generators were originally developed because the output of a plasma MHD generator is a flame, well able to heat the boilers of a steam power plant. The first practical design was the AVCO Mk. 25, developed in 1965. The U.S. government funded substantial development, culminating in a 25 MW demonstration plant in 1987. In the Soviet Union from 1972 until the late 1980s, the MHD plant U 25 was in regular utility operation on the Moscow power system with a rating of 25 MW, the largest MHD plant rating in the world at that time. MHD generators operated as a topping cycle are currently (2007) less efficient than combined cycle gas turbines.
Alternating current (AC)
Induction generator
Induction AC motors may be used as generators, turning mechanical energy into electric current. Induction generators operate by mechanically turning their rotor faster than synchronous speed, giving negative slip. A regular AC asynchronous motor usually can be used as a generator, without any changes to its parts. Induction generators are useful in applications like minihydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure, because they can recover energy with relatively simple controls. They do not require a separate excitation circuit because the rotating magnetic field is provided by induction from the grid to which they are connected. They also do not require speed governor equipment as they inherently operate at the connected grid frequency.
An induction generator must be powered with a leading voltage; this is usually done by connection to an electrical grid, or by powering themselves with phase correcting capacitors.
Linear electric generator
In the simplest form of linear electric generator, a sliding magnet moves back and forth through a solenoid, a coil of copper wire. An alternating current is induced in the loops of wire by Faraday's law of induction each time the magnet slides through. This type of generator is used in the Faraday flashlight. Larger linear electricity generators are used in wave power schemes.
Variable-speed constant-frequency generators
Grid-connected generators deliver power at a constant frequency. For generators of the synchronous or induction type, the prime mover speed turning the generator shaft must be at a particular speed (or within a narrow range of speeds) to deliver power at the required utility frequency. Mechanical speed-regulating devices may waste a significant fraction of the input energy to maintain a required fixed frequency.
Where it is impractical or undesired to tightly regulate the speed of the prime mover, doubly fed electric machines may be used as generators. With the assistance of power electronic devices, these can regulate the output frequency to a desired value over a wider range of generator shaft speeds. Alternatively, a standard generator can be used with no attempt to regulate frequency, and the resulting power converted to the desired output frequency with a rectifier and converter combination. Allowing a wider range of prime mover speeds can improve the overall energy production of an installation, at the cost of more complex generators and controls. For example, where a wind turbine operating at fixed frequency might be required to spill energy at high wind speeds, a variable speed system can allow recovery of energy contained during periods of high wind speed.
Common use cases
Power station
A power station, also known as a power plant or powerhouse and sometimes generating station or generating plant, is an industrial facility that generates electricity. Most power stations contain one or more generators, or spinning machines converting mechanical power into three-phase electrical power. The relative motion between a magnetic field and a conductor creates an electric current. The energy source harnessed to turn the generator varies widely. Most power stations in the world burn fossil fuels such as coal, oil, and natural gas to generate electricity. Cleaner sources include nuclear power, and increasingly use renewables such as the sun, wind, waves and running water.
Vehicular generators
Roadway vehicles
Motor vehicles require electrical energy to power their instrumentation, keep the engine itself operating, and recharge their batteries. Until about the 1960s motor vehicles tended to use DC generators (dynamos) with electromechanical regulators. Following the historical trend above and for many of the same reasons, these have now been replaced by alternators with built-in rectifier circuits.
Bicycles
Bicycles require energy to power running lights and other equipment. There are two common kinds of generator in use on bicycles: bottle dynamos which engage the bicycle's tire on an as-needed basis, and hub dynamos which are directly attached to the bicycle's drive train. The name is conventional as they are small permanent-magnet alternators, not self-excited DC machines as are dynamos. Some electric bicycles are capable of regenerative braking, where the drive motor is used as a generator to recover some energy during braking.
Sailboats
Sailing boats may use a water- or wind-powered generator to trickle-charge the batteries. A small propeller, wind turbine, or towed water turbine is connected to a low-power generator to supply current at typical wind or cruising speeds.
Recreational vehicles
Recreational vehicles need an extra power supply to power their onboard accessories, including air conditioning units, and refrigerators. An RV power plug is connected to the electric generator to obtain a stable power supply.
Electric scooters
Electric scooters with regenerative braking have become popular all over the world. Engineers use kinetic energy recovery systems on the scooter to reduce energy consumption and increase its range by up to 40-60%, simply by recovering energy with the magnetic brake, which generates electric energy for further use. Modern vehicles reach speeds up to 25–30 km/h and can run up to 35–40 km.
Genset
An engine-generator is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of self-contained equipment. The engines used are usually piston engines, but gas turbines can also be used, and there are even hybrid diesel-gas units, called dual-fuel units. Many different versions of engine-generators are available - ranging from very small portable petrol powered sets to large turbine installations. The primary advantage of engine-generators is the ability to independently supply electricity, allowing the units to serve as backup power sources.
Human powered electrical generators
A generator can also be driven by human muscle power (for instance, in field radio station equipment).
Human powered electric generators are commercially available, and have been the project of some DIY enthusiasts. Typically operated by means of pedal power, a converted bicycle trainer, or a foot pump, such generators can be practically used to charge batteries, and in some cases are designed with an integral inverter. An average "healthy human" can produce a steady 75 watts (0.1 horsepower) for a full eight hour period, while a "first class athlete" can produce approximately 298 watts (0.4 horsepower) for a similar period, at the end of which an undetermined period of rest and recovery will be required. At 298 watts, the average "healthy human" becomes exhausted within 10 minutes. The net electrical power that can be produced will be less, due to the efficiency of the generator. Portable radio receivers with a crank are made to reduce battery purchase requirements, see clockwork radio. During the mid 20th century, pedal powered radios were used throughout the Australian outback, to provide schooling (School of the Air), medical and other needs in remote stations and towns.
Mechanical measurement
A tachogenerator is an electromechanical device which produces an output voltage proportional to its shaft speed. It may be used for a speed indicator or in a feedback speed control system. Tachogenerators are frequently used to power tachometers to measure the speeds of electric motors, engines, and the equipment they power. Generators generate voltage roughly proportional to shaft speed. With precise construction and design, generators can be built to produce very precise voltages for certain ranges of shaft speeds.
Equivalent circuit
An equivalent circuit of a generator and load is shown in the adjacent diagram. The generator is represented by an abstract generator consisting of an ideal voltage source and an internal impedance. The generator's voltage and internal-impedance parameters can be determined by measuring the winding resistance (corrected to operating temperature), and by measuring the open-circuit voltage and the loaded voltage for a defined current load.
This is the simplest model of a generator, further elements may need to be added for an accurate representation. In particular, inductance can be added to allow for the machine's windings and magnetic leakage flux, but a full representation can become much more complex than this.
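As an illustration of how the two measurements determine the simplest model, the Python sketch below recovers the internal (source) voltage and internal resistance from an open-circuit reading and one loaded reading; the numbers are invented example values, and the model ignores inductance, as noted above.

```python
def generator_model(v_open, v_loaded, i_load):
    """Simplest equivalent circuit: ideal source V_g behind a resistance R_g.
    V_g equals the open-circuit voltage; R_g follows from the loaded voltage sag."""
    v_g = v_open
    r_g = (v_open - v_loaded) / i_load
    return v_g, r_g

# Example measurements (illustrative only): 240 V open circuit,
# 228 V when delivering 10 A.
v_g, r_g = generator_model(v_open=240.0, v_loaded=228.0, i_load=10.0)
print(f"V_g = {v_g:.0f} V, R_g = {r_g:.2f} ohm")    # -> V_g = 240 V, R_g = 1.20 ohm

# Predicted terminal voltage at some other load current, say 15 A:
print(f"V(15 A) ~ {v_g - r_g * 15.0:.1f} V")        # -> 222.0 V
```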
See also
Diesel generator
Electricity generation
Electric motor
Engine-generator
Faraday's law of induction
Gas turbine
Generation expansion planning
Goodness factor
Hydropower
Steam generator (boiler)
Steam generator (railroad)
Steam turbine
Superconducting electric machine
Thermoelectric generator
Thermal power station
Tidal stream generator
References
English inventions
19th-century inventions | 0.765593 | 0.998258 | 0.76426 |
Relativity (M. C. Escher) | Relativity is a lithograph print by the Dutch artist M. C. Escher, first printed in December 1953. The first version of this work was a woodcut made earlier that same year.
It depicts a world in which the normal laws of gravity do not apply. The architectural structure seems to be the centre of an idyllic community, with most of its inhabitants casually going about their ordinary business, such as dining. There are windows and doorways leading to park-like outdoor settings. All of the figures are dressed in identical attire and have featureless bulb-shaped heads. Identical characters such as these can be found in many other Escher works.
In the world of Relativity, there are three sources of gravity, each being orthogonal to the two others. Each inhabitant lives in one of the gravity wells, where normal physical laws apply. There are sixteen characters, spread between each gravity source, six in one and five in each of the other two. The apparent confusion of the lithograph print comes from the fact that the three gravity sources are depicted in the same space.
The structure has seven stairways, and each stairway can be used by people who belong to two different gravity sources. This creates interesting phenomena, such as in the top stairway, where two inhabitants use the same stairway in the same direction and on the same side, but each using a different face of each step; thus, one descends the stairway as the other climbs it, even while moving in the same direction nearly side by side. In the other stairways, inhabitants are depicted as climbing the stairways upside-down, but based on their own gravity source, they are climbing normally.
Each of the three parks belongs to one of the gravity wells. All but one of the doors seem to lead to basements below the parks. Though physically possible, such basements are unusual and add to the surreal effect of the picture.
In popular culture
Relativity is one of Escher's most popular works, and has been used in a variety of ways.
References
1953 prints
Mathematical artworks
Works by M. C. Escher | 0.766646 | 0.996889 | 0.76426 |
Four-velocity | In physics, in particular in special relativity and general relativity, a four-velocity is a four-vector in four-dimensional spacetime that represents the relativistic counterpart of velocity, which is a three-dimensional vector in space.
Physical events correspond to mathematical points in time and space, the set of all of them together forming a mathematical model of physical four-dimensional spacetime. The history of an object traces a curve in spacetime, called its world line. If the object has mass, so that its speed is necessarily less than the speed of light, the world line may be parametrized by the proper time of the object. The four-velocity is the rate of change of four-position with respect to the proper time along the curve. The velocity, in contrast, is the rate of change of the position in (three-dimensional) space of the object, as seen by an observer, with respect to the observer's time.
The value of the magnitude of an object's four-velocity, i.e. the quantity obtained by applying the metric tensor g to the four-velocity U, that is ‖U‖² = U ⋅ U = g(U, U), is always equal to ±c², where c is the speed of light. Whether the plus or minus sign applies depends on the choice of metric signature. For an object at rest its four-velocity is parallel to the direction of the time coordinate with U^0 = c. A four-velocity is thus the normalized future-directed timelike tangent vector to a world line, and is a contravariant vector. Though it is a vector, addition of two four-velocities does not yield a four-velocity: the space of four-velocities is not itself a vector space.
Velocity
The path of an object in three-dimensional space (in an inertial frame) may be expressed in terms of three spatial coordinate functions x^i(t) of time t, where i is an index which takes values 1, 2, 3.
The three coordinates form the 3d position vector, written as a column vector
x(t) = (x^1(t), x^2(t), x^3(t))^T.
The components of the velocity u (tangent to the curve) at any point on the world line are
u = dx/dt = (dx^1/dt, dx^2/dt, dx^3/dt)^T.
Each component is simply written
u^i = dx^i/dt.
Theory of relativity
In Einstein's theory of relativity, the path of an object moving relative to a particular frame of reference is defined by four coordinate functions x^μ(τ), where μ is a spacetime index which takes the value 0 for the timelike component, and 1, 2, 3 for the spacelike coordinates. The zeroth component is defined as the time coordinate multiplied by c, that is, x^0 = ct.
Each function depends on one parameter τ called its proper time. As a column vector,
x = (x^0, x^1, x^2, x^3)^T = (ct, x^1, x^2, x^3)^T.
Time dilation
From time dilation, the differentials in coordinate time and proper time are related by
dt = γ(u) dτ,
where the Lorentz factor,
γ(u) = 1 / √(1 − u²/c²),
is a function of the Euclidean norm of the 3d velocity vector,
u = ‖u‖ = √((u^1)² + (u^2)² + (u^3)²).
Definition of the four-velocity
The four-velocity is the tangent four-vector of a timelike world line.
The four-velocity at any point of the world line is defined as:
U = dx/dτ,
where x is the four-position and τ is the proper time.
The four-velocity defined here using the proper time of an object does not exist for world lines for massless objects such as photons travelling at the speed of light; nor is it defined for tachyonic world lines, where the tangent vector is spacelike.
Components of the four-velocity
The relationship between the time t and the coordinate time x^0 is defined by
x^0 = ct.
Taking the derivative of this with respect to the proper time τ, we find the velocity component for μ = 0:
U^0 = dx^0/dτ = c dt/dτ = γ(u) c,
and, differentiating the other 3 components with respect to proper time, we get the velocity component for μ = i = 1, 2, 3:
U^i = dx^i/dτ = (dx^i/dt)(dt/dτ) = γ(u) u^i,
where we have used the chain rule and the relationships
dt/dτ = γ(u),  u^i = dx^i/dt.
Thus, we find for the four-velocity
U = γ(u) (c, u).
Written in standard four-vector notation this is:
U = (γc, γu) = (U^0, U^1, U^2, U^3),
where γc is the temporal component and γu is the spatial component.
In terms of the synchronized clocks and rulers associated with a particular slice of flat spacetime, the three spacelike components of four-velocity define a traveling object's proper velocity i.e. the rate at which distance is covered in the reference map frame per unit proper time elapsed on clocks traveling with the object.
Unlike most other four-vectors, the four-velocity has only 3 independent components instead of 4. The factor γ is a function of the three-dimensional velocity u.
When certain Lorentz scalars are multiplied by the four-velocity, one then gets new physical four-vectors that have 4 independent components.
For example:
Four-momentum: P = m0 U, where m0 is the rest mass
Four-current density: J = ρ0 U, where ρ0 is the charge density (measured in the rest frame)
Effectively, the γ factor combines with the Lorentz scalar term to make the 4th independent component, for example
P^0 = γ m0 c = E/c and J^0 = γ ρ0 c.
Magnitude
Using the differential of the four-position in the rest frame, the magnitude of the four-velocity can be obtained from the Minkowski metric with signature (−, +, +, +):
‖U‖² = η_{μν} U^μ U^ν = −c²;
in short, the magnitude of the four-velocity for any object is always a fixed constant:
‖U‖² = −c².
In a moving frame, the same norm is:
‖U‖² = −(γc)² + ‖γu‖² = −γ² (c² − u²),
so that:
−c² = −γ² (c² − u²),
which reduces to the definition of the Lorentz factor.
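A short numerical check of these component formulas, using the (−, +, +, +) signature adopted above; the 3-velocity is an arbitrary example and the helper names are only illustrative.

```python
import math

c = 2.998e8  # speed of light, m/s

def four_velocity(u):
    """Four-velocity components (U0, U1, U2, U3) for a 3-velocity u (m/s)."""
    speed = math.sqrt(sum(ui**2 for ui in u))
    gamma = 1.0 / math.sqrt(1.0 - (speed / c)**2)
    return (gamma * c,) + tuple(gamma * ui for ui in u)

def minkowski_norm_sq(U):
    """U.U with signature (-, +, +, +); equals -c^2 for any timelike four-velocity."""
    return -U[0]**2 + U[1]**2 + U[2]**2 + U[3]**2

u = (0.6 * c, 0.0, 0.0)          # example: 0.6c along x
U = four_velocity(u)
print(f"gamma*c = {U[0]:.3e}, gamma*ux = {U[1]:.3e}")
print(f"U.U / c^2 = {minkowski_norm_sq(U) / c**2:.6f}")   # -> -1.000000
```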
See also
Four-acceleration
Four-momentum
Four-force
Four-gradient
Algebra of physical space
Congruence (general relativity)
Hyperboloid model
Rapidity
Remarks
References
Four-vectors | 0.773535 | 0.988007 | 0.764258 |
Polytropic process | A polytropic process is a thermodynamic process that obeys the relation:
where p is the pressure, V is volume, n is the polytropic index, and C is a constant. The polytropic process equation describes expansion and compression processes which include heat transfer.
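The relation can be used directly to follow a compression or expansion path and to obtain the boundary work; in the Python sketch below the initial state, the index, and the volume ratio are arbitrary example values.

```python
import math

def polytropic_pressure(p1, v1, v2, n):
    """Pressure after a polytropic change from (p1, v1) to volume v2."""
    return p1 * (v1 / v2)**n

def polytropic_work(p1, v1, v2, n):
    """Boundary work done by the gas, the integral of p dV along p V^n = C."""
    p2 = polytropic_pressure(p1, v1, v2, n)
    if abs(n - 1.0) < 1e-12:                 # n = 1 (isothermal-like) special case
        return p1 * v1 * math.log(v2 / v1)
    return (p2 * v2 - p1 * v1) / (1.0 - n)

# Example: a gas compressed from 100 kPa, 1.0 m^3 to 0.5 m^3 with n = 1.3.
p1, v1, v2, n = 100e3, 1.0, 0.5, 1.3
print(f"p2 = {polytropic_pressure(p1, v1, v2, n)/1e3:.1f} kPa")  # ~246.2 kPa
print(f"W  = {polytropic_work(p1, v1, v2, n)/1e3:.1f} kJ")       # negative: work is done on the gas
```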
Particular cases
Some specific values of n correspond to particular cases:
n = 0 for an isobaric process (constant pressure),
n = +∞ for an isochoric process (constant volume).
In addition, when the ideal gas law applies:
n = 1 for an isothermal process (constant temperature),
n = γ for an isentropic process (constant entropy),
where γ = cp/cv is the ratio of the heat capacity at constant pressure to the heat capacity at constant volume.
Equivalence between the polytropic coefficient and the ratio of energy transfers
For an ideal gas in a closed system undergoing a slow (quasi-static) process with negligible changes in kinetic and potential energy, the process is polytropic, such that
p V^n = C,
where C is a constant, K = δq/δw is the ratio of the heat transferred to the boundary work done by the gas, γ = cp/cv, and the polytropic coefficient is
n = γ − K(γ − 1).
Relationship to ideal processes
For certain values of the polytropic index, the process will be synonymous with other common processes. Some examples of the effects of varying index values are given in the following table.
n = 0: isobaric process (constant pressure)
n = 1: isothermal process (constant temperature, for an ideal gas)
n = γ: isentropic process (reversible adiabatic, for an ideal gas)
n → +∞: isochoric process (constant volume)
When the index n is between any two of the former values (0, 1, γ, or ∞), it means that the polytropic curve will cut through (be bounded by) the curves of the two bounding indices.
For an ideal gas, 1 < γ ≤ 5/3, since by Mayer's relation cp − cv = R, so that γ = 1 + R/cv > 1, and cv ≥ (3/2)R gives the monatomic upper limit γ = 5/3.
Other
A solution to the Lane–Emden equation using a polytropic fluid is known as a polytrope.
See also
Adiabatic process
Compressor
Internal combustion engine
Isentropic process
Isobaric process
Isochoric process
Isothermal process
Polytrope
Quasistatic equilibrium
Thermodynamics
Vapor-compression refrigeration
References
Thermodynamic processes | 0.768473 | 0.994491 | 0.76424 |
Porter's five forces analysis | Porter's Five Forces Framework is a method of analysing the competitive environment of a business. It draws from industrial organization (IO) economics to derive five forces that determine the competitive intensity and, therefore, the attractiveness (or lack thereof) of an industry in terms of its profitability. An "unattractive" industry is one in which the effect of these five forces reduces overall profitability. The most unattractive industry would be one approaching "pure competition", in which available profits for all firms are driven to normal profit levels. The five-forces perspective is associated with its originator, Michael E. Porter of Harvard University. This framework was first published in Harvard Business Review in 1979.
Porter refers to these forces as the microenvironment, to contrast it with the more general term macroenvironment. They consist of those forces close to a company that affects its ability to serve its customers and make a profit. A change in any of the forces normally requires a business unit to re-assess the marketplace given the overall change in industry information. The overall industry attractiveness does not imply that every firm in the industry will return the same profitability. Firms are able to apply their core competencies, business model or network to achieve a profit above the industry average. A clear example of this is the airline industry. As an industry, profitability is low because the industry's underlying structure of high fixed costs and low variable costs affords enormous latitude in the price of airline travel. Airlines tend to compete on cost, and that drives down the profitability of individual carriers as well as the industry itself because it simplifies the decision by a customer to buy or not buy a ticket. This underscores the need for businesses to continuously evaluate their competitive landscape and adapt strategies in response to changes in industry dynamics, exemplified by the airline industry's struggle with profitability despite varying approaches to differentiation. A few carriers – Richard Branson's Virgin Atlantic is one – have tried, with limited success, to use sources of differentiation in order to increase profitability.
Porter's five forces include three forces from 'horizontal competition' – the threat of substitute products or services, the threat of established rivals, and the threat of new entrants – and two others from 'vertical' competition – the bargaining power of suppliers and the bargaining power of customers.
Porter developed his five forces framework in reaction to the then-popular SWOT analysis, which he found both lacking in rigor and ad hoc. Porter's five-forces framework is based on the structure–conduct–performance paradigm in industrial organizational economics. Other Porter's strategy tools include the value chain and generic competitive strategies.
Five forces that shape competition
Threat of new entrants
New entrants put pressure on current organizations within an industry through their desire to gain market share. This in turn puts pressure on prices, costs, and the rate of investment needed to sustain a business within the industry. The threat of new entrants is particularly intense if they are diversifying from another market, as they can leverage existing expertise, cash flow, and brand identity, which puts a strain on existing companies' profitability.
Barriers to entry restrict the threat of new entrants. If the barriers are high, the threat of new entrants is reduced, and conversely, if the barriers are low, the risk of new companies venturing into a given market is high. Barriers to entry are advantages that existing, established companies have over new entrants.
Michael E. Porter differentiates two factors that can have an effect on how much of a threat new entrants may pose:
Barriers to entry
The most attractive segment is one in which entry barriers are high and exit barriers are low. It is worth noting, however, that high barriers to entry almost always make exit more difficult.
Michael E. Porter lists 7 major sources of entry barriers:
Supply-side economies of scale – spreading the fixed costs over a larger volume of units thus reducing the cost per unit. This can discourage a new entrant because they either have to start trading at a smaller volume of units and accept a price disadvantage over larger companies, or risk coming into the market on a large scale in an attempt to displace the existing market leader.
Demand-side benefits of scale – this occurs when a buyer's willingness to purchase a particular product or service increases with other people's willingness to purchase it. Also known as the network effect, people tend to value being in a 'network' with a larger number of people who use the same company.
Customer switching costs – These are well illustrated by structural market characteristics such as supply chain integration but also can be created by firms. Airline frequent flyer programs are an example.
Capital requirements – clearly the Internet has influenced this factor dramatically. Websites and apps can be launched cheaply and easily as opposed to the brick-and-mortar industries of the past.
Incumbency advantages independent of size (e.g., customer loyalty and brand equity).
Unequal access to distribution channels – if there are a limited number of distribution channels for a certain product/service, new entrants may struggle to find a retail or wholesale channel to sell through as existing competitors will have a claim on them.
Government policy such as sanctioned monopolies, legal franchise requirements, patents, and regulatory requirements.
Expected retaliation
For example, a specific characteristic of oligopoly markets is that prices generally settle at an equilibrium because any price rises or cuts are easily matched by the competition.
Threat of substitutes
A substitute product uses a different technology to try to solve the same economic need. Examples of substitutes are meat, poultry, and fish; landlines and cellular telephones; airlines, automobiles, trains, and ships; beer and wine; and so on. For example, tap water is a substitute for Coke, but Pepsi is a product that uses the same technology (albeit different ingredients) to compete head-to-head with Coke, so it is not a substitute. Increased marketing for drinking tap water might "shrink the pie" for both Coke and Pepsi, whereas increased Pepsi advertising would likely "grow the pie" (increase consumption of all soft drinks), while giving Pepsi a larger market share at Coke's expense.
Potential factors:
Buyer propensity to substitute. This aspect incorporates both tangible and intangible factors. Brand loyalty can be very important, as in the Coke and Pepsi example above; however, contractual and legal barriers are also effective.
Relative price performance of substitute
Buyer's switching costs. This factor is well illustrated by the mobility industry. Uber and its many competitors took advantage of the incumbent taxi industry's dependence on legal barriers to entry and when those fell away, it was trivial for customers to switch. There were no costs as every transaction was atomic, with no incentive for customers not to try another product.
Perceived level of product differentiation which is classic Michael Porter in the sense that there are only two basic mechanisms for competition – lowest price or differentiation. Developing multiple products for niche markets is one way to mitigate this factor.
Number of substitute products available in the market
Ease of substitution
Availability of close substitutes
Bargaining power of customers
The bargaining power of customers is also described as the market of outputs: the ability of customers to put the firm under pressure, which also affects the customer's sensitivity to price changes. Firms can take measures to reduce buyer power, such as implementing a loyalty program. Buyers' power is high if buyers have many alternatives. It is low if they have few choices.
Potential factors:
Buyer concentration to firm concentration ratio
Degree of dependency upon existing channels of distribution
Bargaining leverage, particularly in industries with high fixed costs
Buyer switching costs
Buyer information availability
Availability of existing substitute products
Buyer price sensitivity
Differential advantage (uniqueness) of industry products
RFM (customer value) Analysis
Bargaining power of suppliers
The bargaining power of suppliers is also described as the market of inputs. Suppliers of raw materials, components, labor, and services (such as expertise) to the firm can be a source of power over the firm when there are few substitutes. If you are making biscuits and there is only one person who sells flour, you have no alternative but to buy it from them. Suppliers may refuse to work with the firm or charge excessively high prices for unique resources.
Potential factors are:
Supplier switching costs relative to firm switching costs
Degree of differentiation of inputs
Impact of inputs on cost and differentiation
Presence of substitute inputs
Strength of the distribution channel
Supplier concentration to the firm concentration ratio
Employee solidarity (e.g. labor unions)
Supplier competition: the ability of suppliers to integrate forward vertically and cut out the buyer.
Competitive rivalry
Competitive rivalry is a measure of the extent of competition among existing firms. Price cuts, increased advertising expenditures, or investments in service/product enhancements and innovation are all examples of competitive moves that might limit profitability and provoke countermoves from rivals (Dhliwayo, Witness 2022). For most industries, the intensity of competitive rivalry is the biggest determinant of the competitiveness of the industry. Understanding industry rivals is vital to successfully marketing a product. Positioning depends on how the public perceives a product and distinguishes it from that of competitors. An organization must be aware of its competitors' marketing strategies and pricing and also be reactive to any changes made. Rivalry among competitors tends to be cutthroat, and industry profitability low, when the factors below are present:
Potential factors:
Sustainable competitive advantage through innovation
Competition between online and offline organizations
Level of advertising expense
Powerful competitive strategy which could potentially be realized by adhering to Porter's work on low cost versus differentiation.
Firm concentration ratio
Factors, not forces
The factors below should also be considered, as they can contribute to evaluating a firm's strategic position. These factors are commonly mistaken for the underlying structure of the firm; however, the underlying structure consists of the five forces above.
Industry growth rate
Sometimes bad strategy decisions can be made when a narrow focus is kept on the growth rate of an industry. While rapid growth in an industry can seem attractive, it can also attract new entrants especially if entry barriers are low and suppliers are powerful. Furthermore, profitability is not guaranteed if powerful substitutes become available to the customers.
For example, Blockbuster dominated the rental market throughout the 1990s. In 1998, Reed Hastings founded Netflix and entered the market; when he later proposed that Blockbuster buy Netflix, he was famously laughed out of the room. While Blockbuster was thriving and expanding rapidly, its key pitfall was ignoring its competitors and focusing only on its growth in the industry.
Technology and innovation
Technology in itself is a rapidly growing industry. Despite this growth, it has its limitations, such as customers not being able to physically touch or test products. Technology on its own cannot always provide a desirable experience for a customer. "Boring" companies that are in high-entry-barrier industries with high switching costs and price-sensitive buyers can be more profitable than "tech savvy" companies.
For example, quite commonly websites with menus and online booking options attract customers to a restaurant. But the restaurant experience cannot be delivered online with the use of technology. Food delivery companies like Uber Eats can deliver food to customers but cannot replace the restaurant's atmospheric experience.
Government
Government cannot be a standalone force, as it is a factor that affects the structure of the five forces above. It is neither inherently good nor bad for the industry's profitability.
For instance,
patents can raise barriers to entry
supplier power can be raised by union favoritism from government policies
bankruptcy laws can allow failing companies to reorganize
Complementary products and services
Similar to the government above, complementary products/services cannot be a standalone force because they are not necessarily good or bad for the industry's profitability. Complements occur when a customer benefits from multiple products combined; individually, those standalone products can be of little value on their own. For example, a car would be unusable without petrol/gas and a driver, and a computer is best used with computer software. This factor is controversial (as discussed below in Criticisms), as many believe it to be a sixth force. However, complements influence the forces more than they form the underlying structure of the market.
For instance, complements can
influence barriers to entry by either lowering or raising them, e.g. Apple's provision of a set of tools to develop apps lowers barriers to entry;
make substitution easier, e.g. Spotify replacing CDs.
A strategy consultant's job is to identify complements and apply them to the forces above.
Usage
Strategy consultants occasionally use Porter's five forces framework when making a qualitative evaluation of a firm's strategic position. However, for most consultants, the framework is only a starting point, and value chain analysis or another type of analysis may be used in conjunction with this model. Like all general frameworks, an analysis that uses it to the exclusion of specifics about a particular situation is considered naïve.
According to Porter, the five forces framework should be used at the line-of-business industry level; it is not designed to be used at the industry group or industry sector level. An industry is defined at a lower, more basic level: a market in which similar or closely related products and/or services are sold to buyers (see industry information). A firm that competes in a single industry should develop, at a minimum, one five forces analysis for its industry. Porter makes clear that for diversified companies, the primary issue in corporate strategy is the selection of industries (lines of business) in which the company will compete. The average Fortune Global 1,000 company competes in 52 industries.
Criticisms
Porter's framework has been challenged by other academics and strategists. For instance, Kevin P. Coyne and Somu Subramaniam claim that three dubious assumptions underlie the five forces:
That buyers, competitors, and suppliers are unrelated and do not interact and collude.
That the source of value is a structural advantage (creating barriers to entry).
That uncertainty is low, allowing participants in a market to plan for and respond to changes in competitive behavior.
An important extension to Porter's work came from Adam Brandenburger and Barry Nalebuff of Yale School of Management in the mid-1990s. Using game theory, they added the concept of complementors (also called "the 6th force") to try to explain the reasoning behind strategic alliances. Complementors are known as the impact of related products and services already in the market. The idea that complementors are the sixth force has often been credited to Andrew Grove, former CEO of Intel Corporation. Martyn Richard Jones, while consulting at Groupe Bull, developed an augmented five forces model in Scotland in 1993. It is based on Porter's Framework and includes Government (national and regional) as well as pressure groups as the notional 6th force. This model was the result of work carried out as part of Groupe Bull's Knowledge Asset Management Organisation initiative.
Porter indirectly rebutted the assertions of other forces, by referring to innovation, government, and complementary products and services as "factors" that affect the five forces.
It is also perhaps not feasible to evaluate the attractiveness of an industry independently of the resources that a firm brings to that industry. It is thus argued (Wernerfelt 1984) that this theory be combined with the resource-based view (RBV) in order for the firm to develop a sounder framework.
Other criticisms include:
It places too much weight on the macro-environment and doesn't assess more specific areas of the business that also impact competitiveness and profitability
It does not provide any actions to help deal with high or low force threats (e.g., what should management do if there is a high threat of substitution?)
See also
Coopetition
Economics of Strategy
Industry classification
Marketing Strategy
National Diamond
Strategic management
Porter's four corners model
Nonmarket forces
Value chain
Marketing management
Enshittification
References
Further reading
Coyne, K.P. and Sujit Balakrishnan (1996), Bringing discipline to strategy, The McKinsey Quarterly, No. 4.
Porter, M.E. (March–April 1979) How Competitive Forces Shape Strategy, Harvard Business Review.
Porter, M.E. (1980) Competitive Strategy, Free Press, New York.
Porter, M.E. (January 2008) The Five Competitive Forces That Shape Strategy, Harvard Business Review.
Ireland, R. D., Hoskisson, R. and Hitt, M. (2008). Understanding business strategy: Concepts and cases. Cengage Learning.
Rainer R.K. and Turban E. (2009), Introduction to Information Systems (2nd edition), Wiley, pp 36–41.
Kotler P. (1997), Marketing Management, Prentice-Hall, Inc.
Mintzberg, H., Ahlstrand, B. and Lampel J. (1998) Strategy Safari, Simon & Schuster.
Michael Porter
Strategic management
Business planning
Corporate development
Harry Potter and the Methods of Rationality
Harry Potter and the Methods of Rationality (HPMOR) is a work of Harry Potter fan fiction by Eliezer Yudkowsky published on FanFiction.Net as a serial from February 28, 2010, to March 14, 2015, totaling 122 chapters and over 660,000 words. It adapts the story of Harry Potter to explain complex concepts in cognitive science, philosophy, and the scientific method. Yudkowsky's reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking, allowing Harry to enter the magical world with ideals from the Age of Enlightenment and an experimental spirit. The fan fiction spans one year, covering Harry's first year in Hogwarts. HPMOR has inspired other works of fan fiction, art, and poetry.
Plot
In this fan fiction's alternate universe to the Harry Potter series, Lily Potter magically made her sister Petunia Evans prettier, letting her marry Oxford professor Michael Verres. They adopt their orphaned nephew Harry James Potter as Harry James Potter-Evans-Verres and homeschool him in science and rationality. When Harry turns 11, Petunia and Professor McGonagall inform him and Michael about the wizarding world and Harry's defeat of Lord Voldemort. Harry becomes irritated over wizarding society's inconsistencies and backwardness. When boarding the Hogwarts Express, Harry befriends Draco Malfoy over Ron Weasley and teaches him science. Harry also befriends Hermione Granger over their scientific inclinations.
At Hogwarts, the Sorting Hat sends both Harry and Hermione to Ravenclaw and Draco to Slytherin. As school begins, Harry earns the trust of McGonagall, bonds with Professor Quirrell (who strives to resurrect the teaching of battle magic) and tests magic through the scientific method with Hermione. Harry invents partial transfiguration, which transmutes parts of wholes by applying timeless physics. Draco reluctantly accepts Harry's proof against the Malfoys' bigotry against muggle-borns and informs him that Dumbledore burned his innocent mother, Narcissa, alive.
After winter break, Quirrell procures a Dementor to teach students the Patronus charm. Though Hermione and Harry initially fail, Harry recognizes Dementors as shadows of death. He invents the True Patronus charm, destroying the Dementor. After Harry teaches him to cast a regular Patronus, Draco discovers Harry can speak Parseltongue. Quirrell reveals himself as a snake Animagus to Harry and convinces him to help spirit a supposedly manipulated Bellatrix Black from Azkaban, exposing Harry to the horrors endured by the prisoners and leading Dumbledore to believe that Voldemort is back. After a confrontation, Dumbledore tells Harry that the Order of the Phoenix made him murder Narcissa to stop Voldemort from taking hostages.
Hermione establishes the organization S.P.H.E.W. to protest misogyny in heroism and fight bullies. This causes widespread chaos, and the group's activities are put on pause. She and Draco are manipulated into believing that she attempted to murder him, and Harry pays his fortune to Lucius Malfoy to save Hermione from Azkaban. A surprised Lucius accepts and withdraws Draco from Hogwarts. The wizarding world theorizes that Quirrell is David Monroe, a long-missing opponent of Voldemort. A mountain troll enters Hogwarts and kills Hermione before Harry manages to kill it. Grieving, Harry vows to resurrect Hermione and preserves her body. Harry absolves the Malfoys of guilt in Hermione's murder in exchange for Lucius returning his money, exonerating Hermione, and returning Draco to Hogwarts.
Quirrell starts eating unicorns, supposedly to delay death from a disease. Near the end of the year, he captures Harry, revealing himself as Voldemort's spirit possessing Quirrell and how he framed and murdered Hermione by proxy. He coerces Harry into helping him steal the Philosopher's Stone, an artifact for performing true transmutation as transfiguration is otherwise temporary, by promising to resurrect Hermione. They succeed when Dumbledore appears and tries to seal Voldemort outside time. Voldemort endangers Harry, forcing Dumbledore to seal himself instead.
Voldemort's spirit abandons Quirrell and embodies using the Stone; he and Harry resurrect Hermione with the power of the Stone and Harry's True Patronus. Voldemort murders Quirrell as a human sacrifice for a ritual to give Hermione a Horcrux and the superpowers of a mountain troll and unicorn, rendering her near-immortal. Knowing Harry is prophesied to destroy the world, Voldemort holds Harry at gunpoint, strips him naked, summons his Death Eaters, forces Harry into a magical oath to never risk destroying the world, and orders his murder. Harry improvises a partial Transfiguration into carbon nanotubes that beheads every Death Eater and maims Voldemort. He stuns, memory-wipes, and transfigures him into his ring's jewel. Harry claims the Stone and stages a scene looking like "David Monroe" died defeating Voldemort and resurrected Hermione.
After the battle, Harry receives Dumbledore's letters, learning Dumbledore gambled the world's future on him due to prophecies and let Harry inherit his positions and assets. Harry helps a grieving Draco find his mother, Narcissa, and plans with the resurrected Hermione to overhaul wizarding society by destroying Azkaban with the True Patronus and using the Philosopher's Stone to grant everyone immortality.
History
Yudkowsky wrote Harry Potter and the Methods of Rationality to promote the rationality skills he advocates on his community blog LessWrong. According to him, "I'd been reading a lot of Harry Potter fan fiction at the time the plot of HPMOR spontaneously burped itself into existence inside my mind, so it came out as a Harry Potter story, [...] If I had to rationalize it afterward, I'd say the Potterverse is a very rich environment for a curious thinker, and there's a large number of potential readers who would enter at least moderately familiar with the Harry Potter universe."
Yudkowsky has used HPMOR to assist the launch of the Center for Applied Rationality, which teaches courses based on his work.
Yudkowsky refused a suggestion from David Whelan to sell HPMOR as an original story after rewriting it to remove the Harry Potter setting's elements from it to avoid copyright infringement like E. L. James did with Fifty Shades, which was originally a Twilight fan fiction, saying, "That's not possible in this case. HPMOR is fundamentally linked to, and can only be understood against the background of, the original Harry Potter novels. Numerous scenes are meant to be understood in the light of other scenes in the original HP."
After HPMOR concluded in 2015, Yudkowsky's readers held many worldwide wrap parties in celebration.
Reception
Critical response
Harry Potter and the Methods of Rationality is highly popular on FanFiction.Net, though it has also caused significant polarization among readers. In 2011, Daniel D. Snyder of The Atlantic recorded how HPMOR "caused uproar in the fan fiction community, drawing both condemnations and praise" on online message boards "for its blasphemous—or brilliant—treatment of the canon." In 2015, David Whelan of Vice described HPMOR as "the most popular Harry Potter book you've never heard of" and claimed, "Most people agree that it's brilliantly written, challenging, and—curiously—mind altering."
HPMOR has received positive mainstream reception. Hugo Award-winning science fiction author David Brin positively reviewed HPMOR for The Atlantic in 2010, saying, "It's a terrific series, subtle and dramatic and stimulating… I wish all Potter fans would go here, and try on a bigger, bolder and more challenging tale." In 2014, American politician Ben Wikler lauded HPMOR on The Guardian as "the #1 fan fiction series of all time," saying it was "told with enormous gusto, and with emotional insight into that kind of mind," and comparing Harry to his friend Aaron Swartz's skeptical attitude. Writing for The Washington Post, legal scholar William Baude praised HPMOR as "the best Harry Potter book ever written, though it is not written by J.K. Rowling" in 2014 and "one of my favorite books written this millennium" in 2015. In 2015, Vakasha Sachdev of Hindustan Times described HPMOR as "a thinking person's story about magic and heroism" and how "the conflict between good and evil is represented as a battle between knowledge and ignorance," eliciting his praise. In 2017, Carol Pinchefsky of Syfy lauded HPMOR as "something brilliant" and "a platform on which the writer bounces off complex ideas in a way that's accessible and downright fun." In a 2019 interview for The Sydney Morning Herald, young adult writer Lili Wilkinson said that she adores HPMOR; according to her, "It not only explains basically all scientific theory, from economics to astrophysics, but it also includes the greatest scene where Malfoy learns about DNA and has to confront his pureblood bigotry." Rhys McKay hailed HPMOR in a 2019 article for Who as "one of the best fanfics ever written" and "a familiar yet all-new take on the Wizarding world."
James D. Miller, an economics professor at Smith College and one of Yudkowsky's acquaintances, praised HPMOR in his 2012 book Singularity Rising as an "excellent marketing strategy" for Yudkowsky's "pseudoscientific-sounding" beliefs due to its carefully crafted lessons about rationality. Though he criticized Yudkowsky as "profoundly arrogant" for believing that making people more rational would make them more likely to agree with his ideas, he nonetheless agreed that such an effort would gain him more followers.
Accolades
The HPMOR fan audiobook was a Parsec Awards finalist in 2012 and 2015.
Translations
Russian
On July 17, 2018, Mikhail Samin, a former head of the Russian Pastafarian Church who had previously published The Gospel of the Flying Spaghetti Monster in Russian, launched a non-commercial crowdfunding campaign hosted on Planeta.ru alongside about 200 helpers to print a three-volume edition of the Russian translation of Harry Potter and the Methods of Rationality. Lin Lobaryov, the former lead editor of Mir Fantastiki, compiled the books. Samin's campaign reached its 1.086 million ₽ (approximately US$17 000) goal within 30 hours; it ended on September 30 with 11.4 million ₽ collected (approximately US$175 000), having involved 7,278 people, and became the biggest Russian crowdfunding project for a day before a fundraiser hosted on CrowdRepublic for the Russian translation for Gloomhaven surpassed it.
Though Samin originally planned to print 1000 copies of HPMOR, his campaign's unprecedented success led him to print twenty-one times more copies than that. Yudkowsky supported Samin's efforts and wrote an exclusive introduction for HPMOR's Russian printing, though the campaign's popularity surprised him. Samin's HPMOR publication project is the largest-scale effort on record, surpassing many previous low-circulation fan printings, and he sent some Russian copies of HPMOR to libraries and others to schools as prizes for Olympiad winners. J.K. Rowling and her agents refused Russian publishing house Eksmo's request for commercial publication of HPMOR.
Other
HPMOR has Czech, Chinese, French, German, Hebrew, Indonesian, Italian, Japanese, Norwegian, Spanish, Swedish, and Ukrainian translations.
See also
My Immortal and Hogwarts School of Prayer and Miracles, two near-universally condemned Harry Potter fan fictions
All the Young Dudes, a similarly praised Harry Potter fan fiction
References
External links
The Methods of Rationality Podcast (Full cast audiobook available as a podcast)
2010 works
Fiction about personifications of death
Harry Potter fan fiction
Literature first published in serial form
Philosophical fiction
Transhumanist books
Works set in the 1990s
Digital media works about philosophy
Anharmonicity
In classical mechanics, anharmonicity is the deviation of a system from being a harmonic oscillator. An oscillator that is not oscillating in harmonic motion is known as an anharmonic oscillator; such a system can often be approximated by a harmonic oscillator, with the anharmonicity calculated using perturbation theory. If the anharmonicity is large, then other numerical techniques have to be used. In reality all oscillating systems are anharmonic, but most approach harmonic behavior as the amplitude of oscillation becomes smaller.
As a result, oscillations with frequencies $2\omega$, $3\omega$, etc., where $\omega$ is the fundamental frequency of the oscillator, appear. Furthermore, the frequency $\omega$ deviates from the frequency $\omega_0$ of the harmonic oscillations. See also intermodulation and combination tones. As a first approximation, the frequency shift $\Delta\omega = \omega - \omega_0$ is proportional to the square of the oscillation amplitude $A$:
$\Delta\omega \propto A^2$
In a system of oscillators with natural frequencies $\omega_1$, $\omega_2$, ..., anharmonicity results in additional oscillations with combination frequencies $\omega_i \pm \omega_j$.
Anharmonicity also modifies the energy profile of the resonance curve, leading to interesting phenomena such as the foldover effect and superharmonic resonance.
General principle
An oscillator is a physical system characterized by periodic motion, such as a pendulum, tuning fork, or vibrating diatomic molecule. Mathematically speaking, the essential feature of an oscillator is that for some coordinate $x$ of the system, a force whose magnitude depends on $x$ will push $x$ away from extreme values and back toward some central value $x_0$, causing $x$ to oscillate between extremes. For example, $x$ may represent the displacement of a pendulum from its resting position $x = x_0 = 0$. As the absolute value of $x$ increases, so does the restoring force acting on the pendulum's weight that pushes it back towards its resting position.
In harmonic oscillators, the restoring force is proportional in magnitude (and opposite in direction) to the displacement of $x$ from its natural position $x_0$. The resulting differential equation implies that $x$ must oscillate sinusoidally over time, with a period of oscillation that is inherent to the system. $x$ may oscillate with any amplitude, but will always have the same period.
Anharmonic oscillators, however, are characterized by the nonlinear dependence of the restorative force on the displacement x. Consequently, the anharmonic oscillator's period of oscillation may depend on its amplitude of oscillation.
As a result of the nonlinearity of anharmonic oscillators, the vibration frequency can change, depending upon the system's displacement. These changes in the vibration frequency result in energy being coupled from the fundamental vibration frequency to other frequencies through a process known as parametric coupling.
Treating the nonlinear restorative force $F(x)$ as a function of the displacement $x$ from its natural position, we may replace $F$ by its linear approximation $F_1(x) = F'(0)\,x$ at zero displacement. The approximating function $F_1$ is linear, so it will describe simple harmonic motion. Further, this function $F_1$ is accurate when $x$ is small. For this reason, anharmonic motion can be approximated as harmonic motion as long as the oscillations are small.
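As a minimal worked illustration (the cubic, Duffing-type force below is an assumed example, not taken from this article), the linearization looks like this:

```latex
% Assumed example: cubic (Duffing-type) restoring force and its linearization at x = 0
F(x)   = -k x - \beta x^{3}
F_1(x) = F'(0)\, x = -k x
% The neglected term is small when \beta |x|^{3} \ll k |x|, i.e. when |x| \ll \sqrt{k/\beta}.
```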
Examples in physics
There are many systems throughout the physical world that can be modeled as anharmonic oscillators in addition to the nonlinear mass-spring system. For example, an atom, which consists of a positively charged nucleus surrounded by a negatively charged electronic cloud, experiences a displacement between the center of mass of the nucleus and the electronic cloud when an electric field is present. The amount of that displacement, called the electric dipole moment, is related linearly to the applied field for small fields, but as the magnitude of the field is increased, the field-dipole moment relationship becomes nonlinear, just as in the mechanical system.
Further examples of anharmonic oscillators include the large-angle pendulum; nonequilibrium semiconductors that possess a large hot carrier population, which exhibit nonlinear behaviors of various types related to the effective mass of the carriers; ionospheric plasmas, which also exhibit nonlinear behavior based on the anharmonicity of the plasma; and transversally oscillating strings. In fact, virtually all oscillators become anharmonic when their pump amplitude increases beyond some threshold, and as a result it is necessary to use nonlinear equations of motion to describe their behavior.
Anharmonicity plays a role in lattice and molecular vibrations, in quantum oscillations, and in acoustics. The atoms in a molecule or a solid vibrate about their equilibrium positions. When these vibrations have small amplitudes they can be described by harmonic oscillators. However, when the vibrational amplitudes are large, for example at high temperatures, anharmonicity becomes important. An example of the effects of anharmonicity is the thermal expansion of solids, which is usually studied within the quasi-harmonic approximation. Studying vibrating anharmonic systems using quantum mechanics is a computationally demanding task because anharmonicity not only makes the potential experienced by each oscillator more complicated, but also introduces coupling between the oscillators. It is possible to use first-principles methods such as density-functional theory to map the anharmonic potential experienced by the atoms in both molecules and solids. Accurate anharmonic vibrational energies can then be obtained by solving the anharmonic vibrational equations for the atoms within a mean-field theory. Finally, it is possible to use Møller–Plesset perturbation theory to go beyond the mean-field formalism.
Period of oscillations
Consider a mass $m$ moving in a potential well $U(x)$. The oscillation period may be derived as
$T = \sqrt{2m}\int_{x_1}^{x_2} \frac{dx}{\sqrt{E - U(x)}}$
where the extremes of the motion are given by $U(x_1) = E$ and $U(x_2) = E$.
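As a sketch of how this period integral can be evaluated in practice (the Duffing-type potential and the parameter values below are assumed examples, not from the article), the amplitude dependence of the period appears directly:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative anharmonic (Duffing-type) potential U(x) = k x^2/2 + beta x^4/4;
# m, k, beta and the amplitudes below are assumed example values.
m, k, beta = 1.0, 1.0, 0.5

def U(x):
    return 0.5 * k * x**2 + 0.25 * beta * x**4

def period(a):
    """T = sqrt(2 m) * integral_{-a}^{+a} dx / sqrt(E - U(x)) for release from rest at x = a."""
    E = U(a)
    # substitute x = a*sin(phi) to remove the integrable square-root singularity at x = +-a
    integrand = lambda phi: a * np.cos(phi) / np.sqrt(E - U(a * np.sin(phi)))
    val, _ = quad(integrand, -np.pi / 2, np.pi / 2)
    return np.sqrt(2.0 * m) * val

print("harmonic limit:", 2 * np.pi * np.sqrt(m / k))
for a in (0.1, 0.5, 1.0, 2.0):
    print(f"amplitude {a:4.1f}: T = {period(a):.4f}")  # period shrinks as amplitude grows (beta > 0)
```

For small amplitudes the computed period approaches the harmonic value, illustrating the harmonic approximation discussed above.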
See also
Inharmonicity
Harmonic oscillator
Musical acoustics
Nonlinear resonance
Transmon
References
External links
Classical mechanics
Double pendulum
In physics and mathematics, in the area of dynamical systems, a double pendulum, also known as a chaotic pendulum, is a pendulum with another pendulum attached to its end, forming a simple physical system that exhibits rich dynamic behavior with a strong sensitivity to initial conditions. The motion of a double pendulum is governed by a set of coupled ordinary differential equations and is chaotic.
Analysis and interpretation
Several variants of the double pendulum may be considered; the two limbs may be of equal or unequal lengths and masses, they may be simple pendulums or compound pendulums (also called complex pendulums), and the motion may be in three dimensions or restricted to the vertical plane. In the following analysis, the limbs are taken to be identical compound pendulums of length $\ell$ and mass $m$, and the motion is restricted to two dimensions.
In a compound pendulum, the mass is distributed along its length. If the double pendulum's mass is evenly distributed, then the center of mass of each limb is at its midpoint, and the limb has a moment of inertia of $I = \tfrac{1}{12} m \ell^2$ about that point.
It is convenient to use the angles between each limb and the vertical as the generalized coordinates defining the configuration of the system. These angles are denoted $\theta_1$ and $\theta_2$. The position of the center of mass of each rod may be written in terms of these two coordinates. If the origin of the Cartesian coordinate system is taken to be at the point of suspension of the first pendulum, then the center of mass of this pendulum is at:
$x_1 = \tfrac{\ell}{2}\sin\theta_1, \qquad y_1 = -\tfrac{\ell}{2}\cos\theta_1$
and the center of mass of the second pendulum is at
$x_2 = \ell\left(\sin\theta_1 + \tfrac{1}{2}\sin\theta_2\right), \qquad y_2 = -\ell\left(\cos\theta_1 + \tfrac{1}{2}\cos\theta_2\right)$
This is enough information to write out the Lagrangian.
Lagrangian
The Lagrangian is
$L = \tfrac{1}{2} m \left(v_1^2 + v_2^2\right) + \tfrac{1}{2} I \left(\dot\theta_1^2 + \dot\theta_2^2\right) - m g \left(y_1 + y_2\right)$
The first term is the linear kinetic energy of the center of mass of the bodies and the second term is the rotational kinetic energy around the center of mass of each rod. The last term is the potential energy of the bodies in a uniform gravitational field. The dot-notation indicates the time derivative of the variable in question.
Since (see Chain rule and List of trigonometric identities)
$v_1^2 = \dot{x}_1^2 + \dot{y}_1^2 = \tfrac{1}{4}\ell^2\dot\theta_1^2$
and
$v_2^2 = \dot{x}_2^2 + \dot{y}_2^2 = \ell^2\left[\dot\theta_1^2 + \tfrac{1}{4}\dot\theta_2^2 + \dot\theta_1\dot\theta_2\cos(\theta_1 - \theta_2)\right],$
substituting the coordinates above and rearranging the equation gives
$L = \tfrac{1}{6} m \ell^2\left[\dot\theta_2^2 + 4\dot\theta_1^2 + 3\dot\theta_1\dot\theta_2\cos(\theta_1 - \theta_2)\right] + \tfrac{1}{2} m g \ell\left(3\cos\theta_1 + \cos\theta_2\right)$
The Euler–Lagrange equations then give the two following second-order, non-linear differential equations in $\theta_1$ and $\theta_2$ (writing $\Delta = \theta_1 - \theta_2$):
$8\ddot\theta_1 + 3\ddot\theta_2\cos\Delta + 3\dot\theta_2^2\sin\Delta + 9\,\frac{g}{\ell}\sin\theta_1 = 0$
$2\ddot\theta_2 + 3\ddot\theta_1\cos\Delta - 3\dot\theta_1^2\sin\Delta + 3\,\frac{g}{\ell}\sin\theta_2 = 0$
No closed-form solutions for $\theta_1$ and $\theta_2$ as functions of time are known; therefore, the system can only be solved numerically, using the Runge–Kutta method or similar techniques.
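A minimal numerical sketch of this procedure, using the two equations of motion reconstructed above and SciPy's Runge–Kutta integrator (the parameter values and initial angles are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values; the equations of motion above are written as
# A(theta) @ [theta1'', theta2''] = b(theta, omega) and solved at each step.
g, L = 9.81, 1.0

def rhs(t, y):
    th1, th2, w1, w2 = y
    d = th1 - th2
    A = np.array([[8.0, 3.0 * np.cos(d)],
                  [3.0 * np.cos(d), 2.0]])
    b = np.array([-3.0 * w2**2 * np.sin(d) - 9.0 * (g / L) * np.sin(th1),
                  3.0 * w1**2 * np.sin(d) - 3.0 * (g / L) * np.sin(th2)])
    a1, a2 = np.linalg.solve(A, b)
    return [w1, w2, a1, a2]

y0 = [np.pi / 2, np.pi / 2, 0.0, 0.0]   # released from rest with both rods horizontal
t_eval = np.linspace(0.0, 20.0, 2001)
sol = solve_ivp(rhs, (0.0, 20.0), y0, method="RK45", rtol=1e-9, atol=1e-9, t_eval=t_eval)

# Sensitivity to initial conditions: perturb theta1 by one part in 10^9 and compare
sol2 = solve_ivp(rhs, (0.0, 20.0), [y0[0] + 1e-9, y0[1], 0.0, 0.0],
                 method="RK45", rtol=1e-9, atol=1e-9, t_eval=t_eval)
print(np.abs(sol.y[0] - sol2.y[0]).max())   # the tiny perturbation grows to order 1
```

The rapid growth of the difference between the two nearly identical runs is the sensitivity to initial conditions described in the next section.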
Chaotic motion
The double pendulum undergoes chaotic motion, and clearly shows a sensitive dependence on initial conditions. The image to the right shows the amount of elapsed time before the pendulum flips over, as a function of initial position when released at rest. Here, the initial values of $\theta_1$ and $\theta_2$ each range from −3.14 to 3.14 radians along the two axes of the plot. The colour of each pixel indicates how quickly either pendulum flips, grading from black (fastest) through red, green, and blue to purple (slowest); initial conditions that do not lead to a flip within the longest time window considered are plotted white.
The boundary of the central white region is defined in part by energy conservation with the following curve:
$3\cos\theta_1 + \cos\theta_2 = 2$
Within the region defined by this curve, that is if
$3\cos\theta_1 + \cos\theta_2 > 2,$
then it is energetically impossible for either pendulum to flip. Outside this region, the pendulum can flip, but it is a complex question to determine when it will flip. Similar behavior is observed for a double pendulum composed of two point masses rather than two rods with distributed mass.
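A small sketch of how the energy criterion above can be checked for given initial angles (the sample angles are arbitrary):

```python
import numpy as np

def flip_energetically_possible(theta1, theta2):
    """Double pendulum released from rest: a flip is ruled out by energy conservation
    whenever 3*cos(theta1) + cos(theta2) > 2 (the condition stated above)."""
    return 3 * np.cos(theta1) + np.cos(theta2) <= 2

print(flip_energetically_possible(0.5, 0.5))         # False: inside the central white region
print(flip_energetically_possible(np.pi / 2, np.pi)) # True: a flip is not ruled out by energy
```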
The lack of a natural excitation frequency has led to the use of double pendulum systems in seismic resistance designs in buildings, where the building itself is the primary inverted pendulum, and a secondary mass is connected to complete the double pendulum.
See also
Double inverted pendulum
Pendulum (mechanics)
Trebuchet
Bolas
Mass damper
Mid-20th century physics textbooks use the term "double pendulum" to mean a single bob suspended from a string which is in turn suspended from a V-shaped string. This type of pendulum, which produces Lissajous curves, is now referred to as a Blackburn pendulum.
Notes
References
Eric W. Weisstein, Double pendulum (2005), ScienceWorld (contains details of the complicated equations involved) and "Double Pendulum" by Rob Morris, Wolfram Demonstrations Project, 2007 (animations of those equations).
Peter Lynch, Double Pendulum, (2001). (Java applet simulation.)
Northwestern University, Double Pendulum , (Java applet simulation.)
Theoretical High-Energy Astrophysics Group at UBC, Double pendulum, (2005).
External links
Animations and explanations of a double pendulum and a physical double pendulum (two square plates) by Mike Wheatland (Univ. Sydney)
Interactive Open Source Physics JavaScript simulation with detailed equations double pendulum
Interactive Javascript simulation of a double pendulum
Double pendulum physics simulation from www.myphysicslab.com using open source JavaScript code
Simulation, equations and explanation of Rott's pendulum
Double Pendulum Simulator - An open source simulator written in C++ using the Qt toolkit.
Online Java simulator of the Imaginary exhibition.
Chaotic maps
Dynamical systems
Mathematical physics
Pendulums
Sagnac effect
The Sagnac effect, also called Sagnac interference, named after French physicist Georges Sagnac, is a phenomenon encountered in interferometry that is elicited by rotation. The Sagnac effect manifests itself in a setup called a ring interferometer or Sagnac interferometer. A beam of light is split and the two beams are made to follow the same path but in opposite directions. On return to the point of entry the two light beams are allowed to exit the ring and undergo interference. The relative phases of the two exiting beams, and thus the position of the interference fringes, are shifted according to the angular velocity of the apparatus. In other words, when the interferometer is at rest with respect to a nonrotating frame, the light takes the same amount of time to traverse the ring in either direction. However, when the interferometer system is spun, one beam of light has a longer path to travel than the other in order to complete one circuit of the mechanical frame, and so takes longer, resulting in a phase difference between the two beams. Georges Sagnac set up this experiment in 1913 in an attempt to prove the existence of the aether that Einstein's theory of special relativity makes superfluous.
A gimbal mounted mechanical gyroscope remains pointing in the same direction after spinning up, and thus can be used as a rotational reference for an inertial navigation system. With the development of so-called laser gyroscopes and fiber optic gyroscopes based on the Sagnac effect, bulky mechanical gyroscopes can be replaced by those with no moving parts in many modern inertial navigation systems. A conventional gyroscope relies on the principle of conservation of angular momentum whereas the sensitivity of the ring interferometer to rotation arises from the invariance of the speed of light for all inertial frames of reference.
Description and operation
Typically three or more mirrors are used, so that counter-propagating light beams follow a closed path such as a triangle or square (Fig. 1). Alternatively fiber optics can be employed to guide the light through a closed path (Fig. 2). If the platform on which the ring interferometer is mounted is rotating, the interference fringes are displaced compared to their position when the platform is not rotating. The amount of displacement is proportional to the angular velocity of the rotating platform. The axis of rotation does not have to be inside the enclosed area. The phase shift of the interference fringes is proportional to the platform's angular frequency $\boldsymbol{\Omega}$ and is given by a formula originally derived by Sagnac:
$\Delta\phi = \dfrac{8\pi\,\boldsymbol{\Omega}\cdot\mathbf{A}}{\lambda c}$
where $\mathbf{A}$ is the oriented area of the loop and $\lambda$ the wavelength of light.
The effect is a consequence of the different times it takes right and left moving light beams to complete a full round trip in the interferometer ring. The difference in travel times $\Delta t$, when multiplied by the optical frequency $\nu$ (and by $2\pi$), determines the phase difference $\Delta\phi = 2\pi\nu\,\Delta t$.
The rotation thus measured is an absolute rotation, that is, the platform's rotation with respect to an inertial reference frame.
History
The Michelson–Morley experiment of 1887 had suggested that the hypothetical luminiferous aether, if it existed, was completely dragged by the Earth. To test this hypothesis, Oliver Lodge in 1897 proposed that a giant ring interferometer be constructed to measure the rotation of the Earth; a similar suggestion was made by Albert Abraham Michelson in 1904. They hoped that with such an interferometer, it would be possible to decide between a stationary aether, versus aethers which are partially or completely dragged by the Earth. That is, if the hypothetical aether were carried along by the Earth (or by the interferometer) the result would be negative, while a stationary aether would give a positive result.
The first description of the Sagnac effect in the framework of special relativity was done by Max von Laue in 1911, two years before Sagnac conducted his experiment. By continuing the theoretical work of Michelson (1904), von Laue confined himself to an inertial frame of reference (which he called a "valid" reference frame), and in a footnote he wrote "a system which rotates in respect to a valid system is not valid". Assuming constant light speed $c$, and setting the rotational velocity as $\omega$, he computed the propagation time $t_+$ of one ray and $t_-$ of the counter-propagating ray, and consequently obtained the time difference $\Delta t = t_+ - t_-$. He concluded that this interferometer experiment would indeed produce (when restricted to terms of first order in $v/c$) the same positive result for both special relativity and the stationary aether (the latter he called "absolute theory" in reference to the 1895-theory of Lorentz). He also concluded that only complete-aether-drag models (such as the ones of Stokes or Hertz) would give a negative result.
The first interferometry experiment aimed at observing the correlation of angular velocity and phase-shift was performed by the French scientist Georges Sagnac in 1913. Its purpose was to detect "the effect of the relative motion of the ether". Sagnac believed that his results constituted proof of the existence of a stationary aether. However, as explained above, von Laue already showed in 1911 that this effect is consistent with special relativity. Unlike the carefully prepared Michelson–Morley experiment which was set up to prove an aether wind caused by earth drag, the Sagnac experiment could not prove this type of aether wind because a universal aether would affect all parts of the rotating light equally.
Einstein was aware of the phenomenon of the Sagnac effect through the earlier experiments of Franz Harress in 1911. Harress' experiment had been aimed at making measurements of the Fresnel drag of light propagating through moving glass. Not aware of the Sagnac effect, Harress had realized the presence of an "unexpected bias" in his measurements, but was unable to explain its cause. Harress' analysis of the results contained an error, and they were reanalyzed in 1914 by Paul Harzer, who claimed the results were at odds with special relativity. This was rebutted by Einstein. Harress himself died during the First World War, and his results were not publicly available until von Laue persuaded Otto Knopf, whose assistant Harress had been, to publish them in 1920.
Harress' results were published together with an analysis by von Laue, who showed the role of the Sagnac effect in the experiment. Laue said that in the Harress experiment there was a calculable difference in time due to both the dragging of light (which follows from the relativistic velocity addition in moving media, i.e. in moving glass) and "the fact that every part of the rotating apparatus runs away from one ray, while it approaches the other one", i.e. the Sagnac effect. He acknowledged that this latter effect alone could cause the time variance and, therefore, "the accelerations connected with the rotation in no way influence the speed of light".
While Laue's explanation is based on inertial frames, Paul Langevin (1921, 1937) and others described the same effect when viewed from rotating reference frames (in both special and general relativity, see Born coordinates). So when the Sagnac effect should be described from the viewpoint of a corotating frame, one can use ordinary rotating cylindrical coordinates and apply them to the Minkowski metric, which results into the so-called Born metric or Langevin metric. From these coordinates, one can derive the different arrival times of counter-propagating rays, an effect which was shown by Paul Langevin (1921). Or when these coordinates are used to compute the global speed of light in rotating frames, different apparent light speeds are derived depending on the orientation, an effect which was shown by Langevin in another paper (1937).
This does not contradict special relativity or the above explanation by von Laue that the speed of light is not affected by accelerations: the apparent variable light speed in rotating frames only arises if rotating coordinates are used, whereas if the Sagnac effect is described from the viewpoint of an external inertial coordinate frame the speed of light of course remains constant – so the Sagnac effect arises no matter whether one uses inertial coordinates or rotating coordinates (see the formulas in the sections below). That is, special relativity in its original formulation was adapted to inertial coordinate frames, not rotating frames. Albert Einstein in his paper introducing special relativity stated, "light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body". Einstein specifically stated that light speed is only constant in the vacuum of empty space, using equations that only held in linear and parallel inertial frames. However, when Einstein started to investigate accelerated reference frames, he noticed that "the principle of the constancy of light must be modified" for accelerating frames of reference.
Max von Laue in his 1920 paper gave serious consideration to the effect of General Relativity on the Sagnac effect stating, "General relativity would of course be capable of giving some statements about it, and we want to show at first that no noticeable influences of acceleration are expected according to it." He makes a footnote regarding discussions with German physicist, Wilhelm Wien. The reason for looking at General Relativity is because Einstein's Theory of General Relativity predicted that light would slow down in a gravitational field which is why it could predict the curvature of light around a massive body. Under General Relativity, there is the equivalence principle which states that gravity and acceleration are equivalent. Spinning or accelerating an interferometer creates a gravitational effect. "There are, however, two different types of such [non-inertial] motion; it may for instance be acceleration in a straight line, or circular motion with constant speed." Also, Irwin Shapiro in 1964 explained General Relativity saying, "the speed of a light wave depends on the strength of the gravitational potential along its path". This is called the Shapiro delay. However, since the gravitational field would have to be significant, Laue (1920) concluded it is more likely that the effect is a result of changing the distance of the path by its movement through space. "The beam traveling around the loop in the direction of rotation will have farther to go than the beam traveling counter to the direction of rotation, because during the period of travel the mirrors and detector will all move (slightly) toward the counter-rotating beam and away from the co-rotating beam. Consequently the beams will reach the detector at slightly different times, and slightly out of phase, producing optical interference 'fringes' that can be observed and measured."
In 1926, an ambitious ring interferometry experiment was set up by Albert Michelson and Henry Gale. The aim was to find out whether the rotation of the Earth has an effect on the propagation of light in the vicinity of the Earth. The Michelson–Gale–Pearson experiment was a very large ring interferometer, (a perimeter of 1.9 kilometer), large enough to detect the angular velocity of the Earth. The outcome of the experiment was that the angular velocity of the Earth as measured by astronomy was confirmed to within measuring accuracy. The ring interferometer of the Michelson–Gale experiment was not calibrated by comparison with an outside reference (which was not possible, because the setup was fixed to the Earth). From its design it could be deduced where the central interference fringe ought to be if there would be zero shift. The measured shift was 230 parts in 1000, with an accuracy of 5 parts in 1000. The predicted shift was 237 parts in 1000.
The Sagnac effect has stimulated a century long debate on its meaning and interpretation, much of this debate being surprising since the effect is perfectly well understood in the context of special relativity.
Theory
Basic case
The shift in interference fringes in a ring interferometer can be viewed intuitively as a consequence of the different distances that light travels due to the rotation of the ring (Fig. 3). The simplest derivation is for a circular ring of radius $R$, with a refractive index of one, rotating at an angular velocity of $\omega$, but the result is general for loop geometries with other shapes. If a light source emits in both directions from one point on the rotating ring, light traveling in the same direction as the rotation direction needs to travel more than one circumference around the ring before it catches up with the light source from behind. The time $t_1$ that it takes to catch up with the light source is given by:
$t_1 = \dfrac{2\pi R + \Delta L}{c}$
$\Delta L$ is the distance (black bold arrow in Fig. 3) that the mirror has moved in that same time:
$\Delta L = R\,\omega\,t_1$
Eliminating $\Delta L$ from the two equations above we get:
$t_1 = \dfrac{2\pi R}{c - R\omega}$
Likewise, the light traveling in the opposite direction of the rotation will travel less than one circumference before hitting the light source on the front side. So the time for this direction of light to reach the moving source again is:
$t_2 = \dfrac{2\pi R}{c + R\omega}$
The time difference is
$\Delta t = t_1 - t_2 = \dfrac{4\pi R^2\omega}{c^2 - R^2\omega^2}$
For $R\omega \ll c$, this reduces to
$\Delta t \approx \dfrac{4\pi R^2\omega}{c^2} = \dfrac{4A\omega}{c^2}$
where A is the area of the ring.
Although this simple derivation is for a circular ring with an index of refraction of one, the result holds true for any shape of rotating loop with area A.(Fig. 4)
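To get a feel for the magnitudes involved, the formulas above can be evaluated for an illustrative loop sensing the Earth's rotation (the loop area and wavelength below are assumed example values, not from the article):

```python
import math

# Assumed example: a single-turn loop of area 100 cm^2 sensing the Earth's rotation,
# probed with 633 nm light.
c = 299_792_458.0        # speed of light, m/s
A = 100e-4               # loop area, m^2
Omega = 7.292e-5         # Earth's rotation rate, rad/s
lam = 633e-9             # wavelength, m

dt = 4 * A * Omega / c**2                     # time difference, about 3e-23 s
dphi = 8 * math.pi * A * Omega / (lam * c)    # phase shift, about 1e-7 rad

print(f"delta t   = {dt:.2e} s")
print(f"delta phi = {dphi:.2e} rad   (hence practical fibre gyros multiply the area with many turns)")
```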
For more complicated shapes, or other refractive index values, the same result can be derived by calculating the optical phase shift in each direction using Fermat's principle and taking into account the different phase velocities for the different propagation directions in an inertial laboratory frame, which can be calculated using relativistic addition of velocities.
We imagine a screen for viewing fringes placed at the light source (or we use a beamsplitter to send light from the source point to the screen). Given a steady light source, interference fringes will form on the screen with a fringe displacement proportional to the time difference required for the two counter-rotating beams to traverse the circuit. The phase shift is $\Delta\phi = \dfrac{2\pi c\,\Delta t}{\lambda}$, which causes fringes to shift in proportion to $\Delta t$ and to $1/\lambda$.
At non-relativistic speeds, the Sagnac effect is a simple consequence of the source independence of the speed of light. In other words, the Sagnac experiment does not distinguish between pre-relativistic physics and relativistic physics.
When light propagates in fibre optic cable, the setup is effectively a combination of a Sagnac experiment and the Fizeau experiment. In glass the speed of light is slower than in vacuum, and the optical cable is the moving medium. In that case the relativistic velocity addition rule applies. Pre-relativistic theories of light propagation cannot account for the Fizeau effect. (By 1900 Lorentz could account for the Fizeau effect, but by that time his theory had evolved to a form where in effect it was mathematically equivalent to special relativity.)
Since emitter and detector are traveling at the same speeds, Doppler effects cancel out, so the Sagnac effect does not involve the Doppler effect. In the case of ring laser interferometry, it is important to be aware of this. When the ring laser setup is rotating, the counterpropagating beams undergo frequency shifts in opposite directions. This frequency shift is not a Doppler shift, but is rather an optical cavity resonance effect, as explained below in Ring lasers.
The Sagnac effect is well understood in the context of special relativity where from the rotating light source's point of view the phase difference is due to the line of simultaneity along the light path not forming a closed loop in spacetime.
Generalized formula
Modified versions of the experiment have been proposed with the light source allowed to move along a (not necessarily circular) light path. This configuration introduces another reason for the phase difference: according to the light source the two signals now follow different paths in space. Some authors refer to this effect as Sagnac effect although in this case the discrepancy need not be due to the lines of simultaneity not forming closed loops.
An example of the modified configuration is shown in Fig. 5: the measured phase difference in both a standard fibre optic gyroscope, shown on the left, and a modified fibre optic conveyor, shown on the right, conforms to the equation Δt = 2vL/c², whose derivation is based on the constant speed of light. It is evident from this formula that the total time delay is equal to the cumulative time delays along the entire length of fibre, regardless of whether the fibre is in a rotating section of the conveyor or a straight section.
This equation is invalid, however, if the light source's path in space does not follow that of the light signals, for example in the standard rotating platform case (FOG) but with a non-circular light path. In this case the phase difference formula necessarily involves the area enclosed by the light path due to Stokes' theorem.
Consider a ring interferometer where two counter-propagating light beams share a common optical path determined by a loop of an optical fiber, see Figure 4. The loop may have an arbitrary shape, and can move arbitrarily in space. The only restriction is that it is not allowed to stretch. (The case of a circular ring interferometer rotating about its center in free space is recovered by taking the index of refraction of the fiber to be 1.)
Consider a small segment of the fiber, whose length in its rest frame is $d\ell'$. The time intervals, $dt'_\pm$, it takes the left and right moving light rays to traverse the segment in the rest frame coincide and are given by
$dt'_+ = dt'_- = \dfrac{n\,d\ell'}{c},$
where $n$ is the refractive index of the fiber. Let $d\ell$ be the length of this small segment in the lab frame. By the relativistic length contraction formula, $d\ell = d\ell'\sqrt{1 - v^2/c^2} \approx d\ell'$ correct to first order in the velocity $v$ of the segment. The time intervals for traversing the segment in the lab frame are given by the Lorentz transformation as:
$dt_\pm = \gamma\left(dt' \pm \dfrac{v\,d\ell'}{c^2}\right) \approx \dfrac{n\,d\ell}{c} \pm \dfrac{v\,d\ell}{c^2},$
correct to first order in the velocity $v$. In general, the two beams will visit a given segment at slightly different times, but, in the absence of stretching, the length $d\ell$ is the same for both beams.
It follows that the time difference for completing a cycle for the two beams is
$\Delta t = \oint \left(dt_+ - dt_-\right) = \dfrac{2}{c^2}\oint \mathbf{v}\cdot d\boldsymbol{\ell}.$
Remarkably, the time difference is independent of the refraction index $n$ and the velocity of light in the fiber.
Imagine a screen for viewing fringes placed at the light source (alternatively, use a beamsplitter to send light from the source point to the screen). Given a steady light source, interference fringes will form on the screen with a fringe displacement given by $\Delta\phi = 2\pi\nu\,\Delta t$, where $\nu$ is the frequency of the light. This gives the generalized Sagnac formula
$\Delta\phi = \dfrac{4\pi\nu}{c^2}\oint \mathbf{v}\cdot d\boldsymbol{\ell}.$
In the special case that the fiber moves like a rigid body with angular frequency $\boldsymbol{\Omega}$, the velocity is $\mathbf{v} = \boldsymbol{\Omega}\times\mathbf{r}$ and the line integral can be computed in terms of the area of the loop:
$\oint \mathbf{v}\cdot d\boldsymbol{\ell} = \oint \left(\boldsymbol{\Omega}\times\mathbf{r}\right)\cdot d\boldsymbol{\ell} = 2\,\boldsymbol{\Omega}\cdot\mathbf{A}.$
This gives the Sagnac formula for ring interferometers of arbitrary shape and geometry
$\Delta\phi = \dfrac{8\pi\nu\,\boldsymbol{\Omega}\cdot\mathbf{A}}{c^2} = \dfrac{8\pi\,\boldsymbol{\Omega}\cdot\mathbf{A}}{\lambda c}.$
If one also allows for stretching one recovers the Fizeau interference formula.
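The rigid-rotation identity used above, $\oint \mathbf{v}\cdot d\boldsymbol{\ell} = 2\,\boldsymbol{\Omega}\cdot\mathbf{A}$, can be checked numerically for a simple loop (the square loop and rotation rate below are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check that the closed line integral of v . dl equals 2 Omega . A
# for rigid rotation; the square loop and rotation rate are arbitrary choices.
Omega = np.array([0.0, 0.0, 1.5])                                        # rad/s about z
corners = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0]], float)  # enclosed area = 2 m^2

# discretize the loop into short segments and sum v . dl at the segment midpoints
pts = np.vstack([np.linspace(corners[i], corners[(i + 1) % 4], 200, endpoint=False)
                 for i in range(4)])
dl = np.roll(pts, -1, axis=0) - pts          # segment vectors; the roll closes the loop
v = np.cross(Omega, pts + 0.5 * dl)          # v = Omega x r evaluated at each midpoint
line_integral = np.einsum('ij,ij->i', v, dl).sum()

print(line_integral)        # approximately 6.0
print(2 * Omega[2] * 2.0)   # 2 * Omega * A = 6.0
```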
Applications
A relay of pulses that circumnavigates the Earth, verifying precise synchronization, is also recognized as a case requiring correction for the Sagnac effect. In 1984 a verification was set up that involved three ground stations and several GPS satellites, with relays of signals both going eastward and westward around the world. In the case of a Sagnac interferometer a measure of difference in arrival time is obtained by producing interference fringes, and observing the fringe shift. In the case of a relay of pulses around the world the difference in arrival time is obtained directly from the actual arrival time of the pulses. In both cases the mechanism of the difference in arrival time is the same: the Sagnac effect.
The Hafele–Keating experiment is also recognized as a counterpart to Sagnac effect physics. In the actual Hafele–Keating experiment the mode of transport (long-distance flights) gave rise to time dilation effects of its own, and calculations were needed to separate the various contributions. For the (theoretical) case of clocks that are transported so slowly that time dilation effects arising from the transport are negligible the amount of time difference between the clocks when they arrive back at the starting point will be equal to the time difference that is found for a relay of pulses that travels around the world: 207 nanoseconds.
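The 207-nanosecond figure can be reproduced, to the stated precision, from standard values for the Earth's radius and rotation rate, idealizing the relay path as a full equatorial circuit:

```python
import math

# Arithmetic check of the 207 ns figure, idealizing the relay path as an
# equatorial circuit; R and Omega are standard values for the Earth.
R = 6.378e6              # equatorial radius, m
Omega = 7.292e-5         # rotation rate, rad/s
c = 299_792_458.0

A = math.pi * R**2                 # area enclosed by the equatorial path
print(2 * A * Omega / c**2)        # about 2.07e-7 s, i.e. roughly 207 nanoseconds
```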
Practical uses
The Sagnac effect is employed in current technology. One use is in inertial guidance systems. Ring laser gyroscopes are extremely sensitive to rotations, which need to be accounted for if an inertial guidance system is to return accurate results. The ring laser also can detect the sidereal day, which can also be termed "mode 1". Global navigation satellite systems (GNSSs), such as GPS, GLONASS, COMPASS or Galileo, need to take the rotation of the Earth into account in the procedures of using radio signals to synchronize clocks.
Ring lasers
Fibre optic gyroscopes are sometimes referred to as 'passive ring interferometers'. A passive ring interferometer uses light entering the setup from outside. The interference pattern that is obtained is a fringe pattern, and what is measured is a phase shift.
It is also possible to construct a ring interferometer that is self-contained, based on a completely different arrangement. This is called a ring laser or ring laser gyroscope. The light is generated and sustained by incorporating laser excitation in the path of the light.
To understand what happens in a ring laser cavity, it is helpful to discuss the physics of the laser process in a laser setup with continuous generation of light. As the laser excitation is started, the molecules inside the cavity emit photons, but since the molecules have a thermal velocity, the light inside the laser cavity is at first a range of frequencies, corresponding to the statistical distribution of velocities. The process of stimulated emission makes one frequency quickly outcompete other frequencies, and after that the light is very close to monochromatic.
For the sake of simplicity, assume that all emitted photons are emitted in a direction parallel to the ring. Fig. 7 illustrates the effect of the ring laser's rotation. In a linear laser, an integer multiple of the wavelength fits the length of the laser cavity. This means that in traveling back and forth the laser light goes through an integer number of cycles of its frequency. In the case of a ring laser the same applies: the number of cycles of the laser light's frequency is the same in both directions. This quality of the same number of cycles in both directions is preserved when the ring laser setup is rotating. The image illustrates that there is wavelength shift (hence a frequency shift) in such a way that the number of cycles is the same in both directions of propagation.
By bringing the two frequencies of laser light to interference a beat frequency can be obtained; the beat frequency is the difference between the two frequencies. This beat frequency can be thought of as an interference pattern in time. (The more familiar interference fringes of interferometry are a spatial pattern.) The beat frequency is linearly proportional to the angular velocity of the ring laser with respect to inertial space. This is the principle of the ring laser gyroscope, widely used in modern inertial navigation systems.
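To illustrate the magnitudes involved, the beat frequency of an idealized ring laser gyroscope is commonly written as Δf = 4AΩ/(λP), with enclosed area A, perimeter P, laser wavelength λ and rotation rate Ω about the ring's normal. The cavity size and wavelength in the sketch below are assumptions chosen purely for illustration.

```python
import math

def ring_laser_beat_frequency(area_m2, perimeter_m, wavelength_m, omega_rad_s):
    """Idealized ring-laser-gyro response: delta_f = 4*A*Omega / (lambda*P)."""
    return 4.0 * area_m2 * omega_rad_s / (wavelength_m * perimeter_m)

# Assumed example cavity: a 10 cm x 10 cm square ring with a He-Ne laser.
side = 0.10                     # m
area = side ** 2                # 0.01 m^2
perimeter = 4.0 * side          # 0.4 m
wavelength = 632.8e-9           # m

# Earth's rotation rate; the ring's normal is assumed parallel to Earth's axis.
omega_earth = 7.2921159e-5      # rad/s

beat = ring_laser_beat_frequency(area, perimeter, wavelength, omega_earth)
print(f"Beat frequency due to Earth's rotation: {beat:.1f} Hz")  # about 11-12 Hz
```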
Zero point calibration
In passive ring interferometers, the fringe displacement is proportional to the first derivative of angular position; careful calibration is required to determine the fringe displacement that corresponds to zero angular velocity of the ring interferometer setup. On the other hand, ring laser interferometers do not require calibration to determine the output that corresponds to zero angular velocity. Ring laser interferometers are self-calibrating. The beat frequency will be zero if and only if the ring laser setup is non-rotating with respect to inertial space.
Fig. 8 illustrates the physical property that makes the ring laser interferometer self-calibrating. The grey dots represent molecules in the laser cavity that act as resonators. Along every section of the ring cavity, the speed of light is the same in both directions. When the ring laser device rotates, it rotates with respect to that propagation background. In other words: the invariance of the speed of light provides the reference for the self-calibrating property of the ring laser interferometer.
Lock-in
Ring laser gyroscopes suffer from an effect known as "lock-in" at low rotation rates (less than 100°/h). At very low rotation rates, the frequencies of the counter-propagating laser modes become almost identical. In this case, crosstalk between the counter-propagating beams can result in injection locking, so that the standing wave "gets stuck" in a preferred phase, locking the frequencies of the two beams to each other rather than responding to gradual rotation. By rotationally dithering the laser cavity back and forth through a small angle at a rapid rate (hundreds of hertz), lock-in only occurs during the brief instants where the rotational velocity is close to zero; the errors thereby induced approximately cancel each other between alternating dead periods.
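The consequence of lock-in for the output can be illustrated with a simplified, Adler-type injection-locking model, in which the observed beat frequency collapses to zero below a lock-in threshold and approaches the ideal linear response well above it. The threshold value below is an arbitrary assumption for illustration, not a property of any particular instrument.

```python
import math

def observed_beat(ideal_beat_hz, lock_in_threshold_hz):
    """Toy injection-locking model of a ring laser gyro's output.

    Below the lock-in threshold the counter-propagating modes pull together
    and the measured beat frequency is zero; above it the output follows
    sqrt(f_ideal**2 - f_lock**2), approaching the ideal response for fast rotation.
    """
    if abs(ideal_beat_hz) <= lock_in_threshold_hz:
        return 0.0
    return math.copysign(
        math.sqrt(ideal_beat_hz ** 2 - lock_in_threshold_hz ** 2), ideal_beat_hz
    )

LOCK_IN_HZ = 50.0  # assumed threshold, for illustration only
for ideal in (5.0, 20.0, 50.0, 100.0, 1000.0):
    print(f"ideal {ideal:7.1f} Hz -> observed {observed_beat(ideal, LOCK_IN_HZ):7.1f} Hz")
```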
Fibre optic gyroscopes versus ring laser gyroscopes
Fibre optic gyros (FOGs) and ring laser gyros (RLGs) both operate by monitoring the difference in propagation time between beams of light traveling in clockwise and counterclockwise directions about a closed optical path. They differ considerably in various cost, reliability, size, weight, power, and other performance characteristics that need to be considered when evaluating these distinct technologies for a particular application.
RLGs require accurate machining, use of precision mirrors, and assembly under clean room conditions. Their mechanical dithering assemblies add slightly to their weight. RLGs are capable of logging in excess of 100,000 hours of operation in near-room-temperature conditions. Their lasers have relatively high power requirements.
Interferometric FOGs are purely solid-state, require no mechanical dithering components, do not require precision machining, have a flexible geometry, and can be made very small. They use many standard components from the telecom industry. In addition, the major optical components of FOGs have proven performance in the telecom industry, with lifespans measured in decades. However, the assembly of multiple optical components into a precision gyro instrument is costly. Analog FOGs offer the lowest possible cost but are limited in performance; digital FOGs offer the wide dynamic ranges and accurate scale factor corrections required for stringent applications. Use of longer and larger coils increases sensitivity at the cost of greater sensitivity to temperature variations and vibrations.
Zero-area Sagnac interferometer and gravitational wave detection
The Sagnac topology was actually first described by Michelson in 1886, who employed an even-reflection variant of this interferometer in a repetition of the Fizeau experiment. Michelson noted the extreme stability of the fringes produced by this form of interferometer: White-light fringes were observed immediately upon alignment of the mirrors. In dual-path interferometers, white-light fringes are difficult to obtain since the two path lengths must be matched to within a couple of micrometers (the coherence length of the white light). However, being a common-path interferometer, the Sagnac configuration inherently matches the two path lengths. Likewise Michelson observed that the fringe pattern would remain stable even while holding a lighted match below the optical path; in most interferometers the fringes would shift wildly due to the refractive index fluctuations from the warm air above the match. Sagnac interferometers are almost completely insensitive to displacements of the mirrors or beam-splitter. This characteristic of the Sagnac topology has led to their use in applications requiring exceptionally high stability.
The fringe shift in a Sagnac interferometer due to rotation has a magnitude proportional to the enclosed area of the light path, and this area must be specified in relation to the axis of rotation. Thus the sign of the area of a loop is reversed when the loop is wound in the opposite direction (clockwise or anti-clockwise). A light path that includes loops in both directions, therefore, has a net area given by the difference between the areas of the clockwise and anti-clockwise loops. The special case of two equal but opposite loops is called a zero-area Sagnac interferometer. The result is an interferometer that exhibits the stability of the Sagnac topology while being insensitive to rotation.
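The idea of a signed, net enclosed area can be made concrete with the shoelace formula: an anti-clockwise loop contributes positive area, a clockwise loop contributes negative area, and two equal, opposite loops sum to zero. The coordinates below are arbitrary illustrative values.

```python
def signed_area(path):
    """Signed area enclosed by a closed polygonal path (shoelace formula).

    Anti-clockwise traversal gives a positive area, clockwise a negative one,
    which is the sign convention relevant to the Sagnac fringe shift.
    """
    total = 0.0
    n = len(path)
    for i in range(n):
        x1, y1 = path[i]
        x2, y2 = path[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return total / 2.0

# A unit square traversed anti-clockwise, and an equal square traversed
# clockwise: together they form a "figure eight" with zero net area.
ccw_loop = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
cw_loop = [(0.0, 0.0), (1.0, 0.0), (1.0, -1.0), (0.0, -1.0)]

print(signed_area(ccw_loop))                          # +1.0
print(signed_area(cw_loop))                           # -1.0
print(signed_area(ccw_loop) + signed_area(cw_loop))   # 0.0 -> zero-area configuration
```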
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consisted of two 4-km Michelson–Fabry–Pérot interferometers, and operated at a power level of about 100 watts of laser power at the beam splitter. After an upgrade to Advanced LIGO several kilowatts of laser power are required.
A variety of competing optical systems are being explored for third generation enhancements beyond Advanced LIGO. One of these competing proposals is based on the zero-area Sagnac design. With a light path consisting of two loops of the same area, but in opposite directions, an effective area of zero is obtained thus canceling the Sagnac effect in its usual sense. Although insensitive to low frequency mirror drift, laser frequency variation, reflectivity imbalance between the arms, and thermally induced birefringence, this configuration is nevertheless sensitive to passing gravitational waves at frequencies of astronomical interest. However, many considerations are involved in the choice of an optical system, and despite the zero-area Sagnac's superiority in certain areas, there is as yet no consensus choice of optical system for third generation LIGO.
See also
Born coordinates
Fiber optic gyroscope
Ring laser gyroscope
References
External links
Mathpages: The Sagnac Effect
Ring-laser tests of fundamental physics and geophysics (Extensive review by G E Stedman. PDF-file, 1.5 MB)
Physics experiments
Interferometry
Theory of relativity
Rotation
Lawrence Berkeley National Laboratory
Lawrence Berkeley National Laboratory (LBNL, Berkeley Lab) is a federally funded research and development center in the hills of Berkeley, California, United States. Established in 1931 by the University of California (UC), the laboratory is sponsored by the United States Department of Energy and administered by the UC system. Ernest Lawrence, who won the Nobel prize for inventing the cyclotron, founded the lab and served as its director until his death in 1958. Located in the Berkeley Hills, the lab overlooks the campus of the University of California, Berkeley.
Scientific research
The mission of Berkeley Lab is to bring science solutions to the world. The research at Berkeley Lab has four main themes: discovery science, clean energy, healthy earth and ecological systems, and the future of science. The Laboratory's 22 scientific divisions are organized within six areas of research: Computing Sciences, Physical Sciences, Earth and Environmental Sciences, Biosciences, Energy Sciences, and Energy Technologies. Lab founder Ernest Lawrence believed that scientific research is best done through teams of individuals with different fields of expertise, working together, and his laboratory still considers that a guiding principle today.
Research impact
Berkeley Lab scientists have won fifteen Nobel prizes in physics and chemistry, and each laureate has a street named after them on the Lab campus. Twenty-three Berkeley Lab employees were contributors to reports by the United Nations' Intergovernmental Panel on Climate Change, which shared the Nobel Peace Prize. Fifteen Lab scientists have also won the National Medal of Science, and two have won the National Medal of Technology and Innovation. Eighty-two Berkeley Lab researchers have been elected to membership in the National Academy of Sciences or the National Academy of Engineering.
Berkeley Lab has the greatest research publication impact of any single government laboratory in the world in physical sciences and chemistry, as measured by Nature Index. The only institutions with higher ranking are the entire national government research agencies for China, France, and Italy, each of which is comparable to the complete network of 17 United States Department of Energy National Laboratories. Using the same metric, the Lab is the second-ranking laboratory in the area of earth and environmental sciences.
Scientific user facilities
Much of Berkeley Lab's research impact is built on the capabilities of its unique research facilities.
The laboratory manages five national scientific user facilities, which are part of the network of 28 such facilities operated by the DOE Office of Science. These facilities and the expertise of the scientists and engineers who operate them are made available to 14,000 researchers from universities, industry, and government laboratories.
Berkeley Lab operates five major National User Facilities for the DOE Office of Science:
The Advanced Light Source (ALS) is a synchrotron light source with 41 beamlines providing ultraviolet, soft x-ray, and hard x-ray light to scientific experiments in a wide variety of fields, including materials science, biology, chemistry, physics, and the environmental sciences. The ALS is supported by the DOE Office of Basic Energy Sciences.
The Joint Genome Institute (JGI) is a scientific user facility for integrative genomic science, with particular emphasis on the DOE missions of energy and the environment. The JGI provides over 2,000 scientific users with access to the latest generation of genome sequencing and analysis capabilities.
The Molecular Foundry is a multidisciplinary nanoscience research facility. Its seven research facilities focus on Imaging and Manipulation of Nanostructures, Nanofabrication, Theory of Nanostructured Materials, Inorganic Nanostructures, Biological Nanostructures, Organic and Macromolecular Synthesis, and Electron Microscopy.
The National Energy Research Scientific Computing Center (NERSC) is the scientific computing facility that provides high performance computing for over 9,000 scientists working on the basic and applied research programs supported by the DOE. The Perlmutter system at NERSC is the 8th-ranked supercomputer system in the Top500 rankings from November 2022.
The Energy Sciences Network (ESnet) is a high-speed research network serving DOE scientists with their experimental facilities and collaborators worldwide. The upgraded network infrastructure launched in 2022 is optimized for very large scientific data flows, and the network transports roughly 35 petabytes of traffic each month.
Team science
Much of the research at Berkeley Lab is done by researchers from several disciplines and multiple institutions working together as a large team focused on shared scientific goals. Berkeley is either the lead partner or one of the leads in several research institutes and hubs, including the following:
The Joint BioEnergy Institute (JBEI). JBEI's mission is to establish the scientific knowledge and new technologies needed to transform the maximum amount of carbon available in bioenergy crops into biofuels and bioproducts. JBEI is one of four U.S. Department of Energy (DOE) Bioenergy Research Centers (BRCs). In 2023, the DOE announced the commitment of $590M to support the BRCs for the next five years.
The National Alliance for Water Innovation (NAWI). NAWI aims to secure an affordable, energy-efficient, and resilient water supply for the US economy through decentralized, fit-for-purpose processing. NAWI is supported primarily by the DOE Office of Energy Efficiency and Renewable Energy, partnering with the California Department of Water Resources and the California State Water Resources Control Board. Berkeley Lab is the lead partner, with founding partners Oak Ridge National Laboratory (ORNL) and the National Renewable Energy Laboratory (NREL).
The Liquid Sunlight Alliance (LiSA). LiSA's mission is to establish the science principles by which durable coupled microenvironments can be co-designed to efficiently and selectively generate liquid fuels from sunlight, water, carbon dioxide, and nitrogen. The lead institution for LiSA is the California Institute of Technology, and Berkeley Lab is a major partner.
The Joint Center for Energy Storage Research (JCESR). JCESR's mission is to deliver transformational new concepts and materials for electrodes, electrolytes and interfaces that will enable a diversity of high performance next-generation batteries for transportation and the grid. Argonne National Laboratory leads JCESR and Berkeley Lab is a major partner.
Cyclotron Road
Cyclotron Road is a fellowship program for technology innovators, supporting entrepreneurial scientists as they advance their own technology projects. The core support for the program comes from the Department of Energy's Office of Energy Efficiency and Renewable Energy, through the Lab-Embedded Entrepreneurship Program. Berkeley Lab manages the program in close partnership with Activate, a nonprofit organization established to scale the Cyclotron Road fellowship model to a greater number of innovators around the U.S. and the world. Cyclotron Road fellows receive two years of stipend, $100,000 of research support, intensive mentorship and a startup curriculum, and access to the expertise and facilities of Berkeley Lab. Since members of the first cohort completed their fellowships in 2017, companies founded by Cyclotron Road Fellows have raised about $1 billion in follow-on funding.
Notable scientists
Nobel laureates
Fifteen Berkeley Lab scientists have received the Nobel Prize in physics or chemistry.
National Medals
Fifteen Berkeley Lab scientists have received the National Medal of Science.
Arthur Rosenfeld received the National Medal of Technology and Innovation in 2011.
History
From 1931 to 1945: cyclotrons and team science
The laboratory was founded on August 26, 1931, by Ernest Lawrence, as the Radiation Laboratory of the University of California, Berkeley, associated with the Physics Department. It centered physics research around his new instrument, the cyclotron, a type of particle accelerator for which he was awarded the Nobel Prize in Physics in 1939. Throughout the 1930s, Lawrence pushed to create larger and larger machines for physics research, courting private philanthropists for funding. He was the first to develop a large team to build big projects to make discoveries in basic research. Eventually these machines grew too large to be held on the university grounds, and in 1940 the lab moved to its current site atop the hill above campus. The team assembled during this period included two other young scientists who went on to direct large laboratories: J. Robert Oppenheimer, who directed Los Alamos Laboratory, and Robert Wilson, who directed Fermilab.
Leslie Groves visited Lawrence's Radiation Laboratory in late 1942 as he was organizing the Manhattan Project, meeting J. Robert Oppenheimer for the first time. Oppenheimer was tasked with organizing the nuclear bomb development effort and founded today's Los Alamos National Laboratory to help keep the work secret. At the RadLab, Lawrence and his colleagues developed the technique of electromagnetic enrichment of uranium using their experience with cyclotrons. The calutrons (named after the University) became the basic unit of the massive Y-12 facility in Oak Ridge, Tennessee. Lawrence's lab helped contribute to what have been judged to be the three most valuable technology developments of the war (the atomic bomb, proximity fuze, and radar). The cyclotron, whose construction was stalled during the war, was finished in November 1946. The Manhattan Project shut down two months later.
From 1946 to 1972: discovering the antiproton and new elements
After the war, the Radiation Laboratory became one of the first laboratories to be incorporated into the Atomic Energy Commission (AEC) (now Department of Energy, DOE). In 1952, the Laboratory established a branch in Livermore focused on nuclear security work, which developed into Lawrence Livermore National Laboratory. Some classified research continued at Berkeley Lab until the 1970s, when it became a laboratory dedicated only to unclassified scientific research. Much of the Laboratory's scientific leadership during this period were also faculty members in the Physics and Chemistry Departments at the University of California, Berkeley.
The scientists and engineers at Berkeley Lab continued to build ambitious large projects to accelerate the advance of science. Lawrence's original cyclotron design did not work for particles near the speed of light, so a new approach was needed. Edwin McMillan co-invented the synchrotron with Vladimir Veksler to address the problem. McMillan built an electron synchrotron capable of accelerating electrons to 300 million electron volts (300 MeV), which was operated from 1948 to 1960.
The Berkeley accelerator team built the Bevatron, a proton synchrotron capable of accelerating protons to an energy of 6.5 gigaelectronvolts (GeV), an energy chosen to be just above the threshold for producing antiprotons. In 1955, during the Bevatron's first full year of operation, physicists Emilio Segrè and Owen Chamberlain won the competition to observe antiprotons for the first time. They won the Nobel Prize in Physics in 1959 for this discovery. The Bevatron remained the highest-energy accelerator until the CERN Proton Synchrotron started accelerating protons to 25 GeV in 1959.
Luis Alvarez led the design and construction of several liquid hydrogen bubble chambers, which were used to discover a large number of new elementary particles using Bevatron beams. His group also developed measuring systems to record the millions of photographs of particle tracks in the bubble chamber and computer systems to analyze the data. Alvarez won the Nobel Prize for Physics in 1968 for the discovery of many elementary particles using this technique.
The Alvarez Physics Memos are a set of informal working papers of the large group of physicists, engineers, computer programmers, and technicians led by Luis W. Alvarez from the early 1950s until his death in 1988. Over 1700 memos are available on-line, hosted by the Laboratory.
Berkeley Lab is credited with the discovery of 16 elements on the periodic table, more than any other institution, over the period 1940 to 1974. The American Chemical Society has established a National Historical Chemical Landmark at the Lab to memorialize this accomplishment.
Glenn Seaborg was personally involved in discovering nine of these new elements, and he won the Nobel Prize for Chemistry in 1951 with McMillan.
Founding Laboratory Director Lawrence died in 1958 at the age of 57. McMillan became the second Director, serving in that role until 1972.
From 1973 to 1989: new capabilities in energy and environmental research
The University of California appointed Andrew Sessler as the Laboratory Director in 1973, during the 1973 oil crisis. He established the Energy and Environment Division at the Lab, expanding for the first time into applied research that addressed the energy and environmental challenges the country faced. Sessler also joined with other Berkeley physicists to form an organization called Scientists for Sakharov, Orlov, Sharansky (SOS), which led an international protest movement calling attention to the plight of three Soviet scientists who were being persecuted by the U.S.S.R. government.
Arthur Rosenfeld led the campaign to build up applied energy research at Berkeley Lab. He became widely known as the father of energy efficiency and the person who convinced the nation to adopt energy standards for appliances and buildings. Inspired by the 1973 oil crisis, he started up large team efforts that developed several technologies that radically improved energy efficiency. These included compact fluorescent lamps, low-energy refrigerators, and windows that trap heat. He developed the first energy-efficiency standards for buildings and appliances in California, which helped the state to sustain constant electricity use per capita from 1973 to 2006, while it rose by 50% in the rest of the country. This phenomenon is called the Rosenfeld Effect.
By 1980, George Smoot had built up a strong experimental group in Berkeley, building instruments to measure the cosmic microwave background (CMB) in order to study the early universe. He became the principal investigator for the Differential Microwave Radiometer (DMR) instrument that was launched in 1989 as part of the Cosmic Background Explorer (COBE) mission. The full sky maps taken by the DMR made it possible for COBE scientists to discover the anisotropy of the CMB, and Smoot shared the Nobel Prize for Physics in 2006 with John Mather.
From 1990 to 2004: new facilities for chemistry and materials, nanotechnology, scientific computing, and genomics
Charles V. Shank left Bell Labs to become Director of Berkeley Lab in 1989, a position he held for 15 years. During his tenure, four of the five national scientific user facilities started operations at Berkeley, and the fifth started construction.
On October 5, 1993, the new Advanced Light Source produced its first beams of x-ray light. David Shirley had proposed in the early 1990s building this new synchrotron source specializing in imaging materials using extreme ultraviolet to soft x-rays. In fall 2001, a major upgrade added "superbends" to produce harder x-rays for beamlines devoted to protein crystallography.
In 1996, both the National Energy Research Scientific Computing Center (NERSC) and the Energy Sciences Network (ESnet) were moved from Lawrence Livermore National Laboratory to their new home at Berkeley Lab.
To reestablish NERSC at Berkeley required moving a Cray C90, a first-generation vector processor supercomputer of 1991 vintage, and installing a new Cray T3E, the second-generation (1995) model. The NERSC computing capacity was 350 GFlop/s, representing 1/200,000 of the Perlmutter's speed in 2022. Horst Simon was brought to Berkeley as the first Director of NERSC, and he soon became one of the co-editors who managed the Top500 list of supercomputers, a position he has held ever since.
The Joint Genome Institute (JGI) was created in 1997 to unite the expertise and resources in genome mapping, DNA sequencing, technology development, and information sciences that had developed at the DOE genome centers at Berkeley Lab, Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL). The JGI was originally established to work on the Human Genome Project (HGP), and generated the complete sequences of Chromosomes 5, 16 and 19. In 2004, the JGI established itself as a national user facility managed by Berkeley Lab, focusing on the broad genomic needs of biology and biotechnology, especially those related to the environment and carbon management.
Laboratory Director Shank brought Daniel Chemla from Bell Labs to Berkeley Lab in 1991 to lead the newly formed Division of Materials Science and Engineering. In 1998 Chemla was appointed director of the Advanced Light Source to build it into a world-class scientific user facility.
In 2001, Chemla proposed the establishment of the Molecular Foundry, to make cutting-edge instruments and expertise for nanotechnology accessible to a broad research community. Paul Alivisatos served as founding director, and the founding directors of its facilities were Carolyn Bertozzi, Jean Frechet, Steven Gwon Sheng Louie, Jeffrey Bokor, and Miquel Salmeron. The Molecular Foundry building was dedicated in 2006, with Bertozzi as Foundry Director and Steven Chu as Laboratory Director.
In the 1990s, Saul Perlmutter led the Supernova Cosmology Project (SCP), which used a certain type of supernovas as standard candles to study the expansion of the universe. The SCP team co-discovered the accelerating expansion of the universe, leading to the concept of dark energy, an unknown form of energy that drives this acceleration. Perlmutter shared the Nobel Prize in Physics in 2011 for this discovery.
From 2005 to 2015: addressing climate change and the future of energy
On August 1, 2004, Nobel-winning physicist Steven Chu was named the sixth Director of Berkeley Lab. The DOE was preparing to compete the management and operations (M&O) contract for Berkeley Lab for the first time, and Chu's first task was to lead the University of California's team that successfully bid for that contract. The initial term of the contract was from June 1, 2005, to May 31, 2010, with possible phased extensions for superior management performance up to a total contract term of 20 years.
In 2007, Berkeley Lab launched the Joint BioEnergy Institute, one of three Bioenergy Research Centers to receive funding from the Genomic Science Program of DOE's Office for Biological and Environmental Research (BER).
JBEI's Chief Executive Officer is Jay Keasling, who was elected a member of the National Academy of Engineering for developing synthetic biology tools needed to engineer the antimalarial drug artemisinin. The DOE Office of Science named Keasling a Distinguished Scientist Fellow in 2021 for advancing the DOE's strategy in renewable energy.
On December 15, 2008, newly elected President Barack Obama nominated Steven Chu to be the Secretary of Energy. The University of California chose the Lab's Deputy Director, Paul Alivisatos, as the new director. Alivisatos is a materials chemist who won the National Medal of Science for his pioneering work in developing nanomaterials. He continued the Lab's focus on renewable energy and climate change.
The DOE established the Joint Center for Artificial Photosynthesis (JCAP) as an Energy Innovation Hub in 2010,
with California Institute of Technology as the lead institution and Berkeley Lab as the lead partner. The Lab built a new facility to house the JCAP laboratories and collaborative research space, and it was dedicated as Chu Hall in 2015. After JCAP operated for ten years, in 2020 the Berkeley team became a major partner in a new Energy Innovation Hub, the Liquid Sunlight Alliance (LiSA), with the vision of establishing the science needed to generate liquid fuels economically from sunlight, water, carbon dioxide and nitrogen.
The Lab also is a major partner on a second Energy Innovation Hub, the Joint Center for Energy Storage Research (JCESR) which was started in 2013,
with Argonne National Laboratory as the lead institution. The Lab built a new facility, the General Purpose Laboratory, to house energy storage laboratories and associated research space, which Secretary of Energy Ernest Moniz inaugurated in 2014. The mission of JCESR is to deliver transformational new concepts and materials that will enable a diversity of high performance next-generation batteries for transportation and the grid.
On November 12, 2015, Laboratory Director Paul Alivisatos and Deputy Director Horst Simon were joined by University of California President Janet Napolitano, UC Berkeley Chancellor Nicholas Dirks, and the head of DOE's ASCR program Barb Helland to dedicate Shyh Wang Hall, a facility designed to host the NERSC supercomputers and staff, the ESnet staff, and the research divisions in the Computing Sciences area. The building was designed with a novel seismic floor for the 20,000-square-foot machine room, in addition to features that take advantage of the coastal climate to provide energy-efficient air conditioning for the computing systems.
From 2016 to the present: building new facilities and accelerating decarbonization
In 2015 Paul Alivisatos announced that he was stepping down from his role as Laboratory Director. He took two leadership positions at the University of California, Berkeley, before becoming President of the University of Chicago in 2021. The University of California selected Michael Witherell, formerly the Director of Fermilab and Vice Chancellor for Research at the University of California, Santa Barbara, as the eighth director of Berkeley Lab, starting on March 1, 2016. In 2016, the Laboratory entered a period of intensive modernization: an unprecedented number of major projects to upgrade existing scientific facilities and to build new ones.
Berkeley Lab physicists led the construction of the Dark Energy Spectroscopic Instrument, which is designed to create three-dimensional maps of the distribution of matter covering an unprecedented volume of the universe with unparalleled detail. The new instrument was installed on the retrofitted Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory in 2019. The five-year mission started in 2021, and the map assembled with data taken in the first seven months already included more galaxies than any previous survey.
On September 27, 2016, the DOE approved the mission need for ALS-U, a major project to upgrade the Advanced Light Source that includes constructing a new storage ring and an accumulator ring. The horizontal size of the electron beam in the ALS will shrink from 100 micrometers to a few micrometers, which will improve the ability to image novel materials needed for next-generation batteries and electronics. With a total project cost of $590 million, this is the largest construction project at the Lab since the ALS was built in 1993.
How the Lab's name evolved
Shortly after the death of Lawrence in August 1958, the UC Radiation Laboratory (UCRL), including both the Berkeley and Livermore sites, was renamed Lawrence Radiation Laboratory. The Berkeley location became Lawrence Berkeley Laboratory in 1971, although many continued to call it the RadLab. Gradually, another shortened form came into common usage, LBL. Its formal name was amended to Ernest Orlando Lawrence Berkeley National Laboratory in 1995, when "National" was added to the names of all DOE labs. "Ernest Orlando" was later dropped to shorten the name. Today, the lab is commonly referred to as Berkeley Lab.
Laboratory directors
(1931–1958): Ernest Lawrence
(1958–1972): Edwin McMillan
(1973–1980): Andrew Sessler
(1980–1989): David Shirley
(1989–2004): Charles V. Shank
(2004–2008): Steven Chu
(2009–2016): Paul Alivisatos
(2016–present): Michael Witherell
Operations and governance
The University of California operates Lawrence Berkeley National Laboratory under a contract with the Department of Energy. The site consists of 76 buildings (owned by the U.S. Department of Energy) located on land owned by the university in the Berkeley Hills. Altogether, the Lab has 3,663 UC employees, of whom about 800 are students or postdocs, and each year it hosts more than 3,000 participating guest scientists. There are approximately two dozen DOE employees stationed at the laboratory to provide federal oversight of Berkeley Lab's work for the DOE. The laboratory director, Michael Witherell, is appointed by the university regents and reports to the university president. Although Berkeley Lab is governed by UC independently of the Berkeley campus, the two entities are closely interconnected: more than 200 Berkeley Lab researchers hold joint appointments as UC Berkeley faculty.
The laboratory budget was $1.495 billion in fiscal year 2023, while the total obligations were $1.395 billion.
See also
Lawrence Livermore National Laboratory
References
External links
1931 establishments in California
Berkeley Hills
Ernest Lawrence
Federally Funded Research and Development Centers
Historic American Engineering Record in California
Laboratories in California
Manhattan Project sites
Nuclear research institutes
Research institutes established in 1931
Research institutes in the San Francisco Bay Area
Science and technology in the San Francisco Bay Area
United States Department of Energy national laboratories
University and college laboratories in the United States
University of California, Berkeley
University of California, Berkeley buildings
Rigor mortis
Rigor mortis, or postmortem rigidity, is the fourth stage of death. It is one of the recognizable signs of death, characterized by stiffening of the limbs of the corpse caused by chemical changes in the muscles postmortem (mainly calcium). In humans, rigor mortis can occur as soon as four hours after death. Contrary to folklore and common belief, rigor mortis is not permanent and begins to pass within hours of onset. Typically, it lasts no longer than eight hours at "room temperature".
Physiology
After death, aerobic respiration in an organism ceases, depleting the source of oxygen used in the making of adenosine triphosphate (ATP). ATP is required to cause separation of the actin-myosin cross-bridges during relaxation of muscle. When oxygen is no longer present, the body may continue to produce ATP via anaerobic glycolysis. When the body's glycogen is depleted, the ATP concentration diminishes, and the body enters rigor mortis because it is unable to break those bridges.
After death, calcium enters the cytosol: it is released by the deteriorating sarcoplasmic reticulum, and the breakdown of the sarcolemma allows additional calcium to enter from outside the cell. The calcium activates the formation of actin-myosin cross-bridging. Once calcium is introduced into the cytosol, it binds to the troponin of thin filaments, which causes the troponin-tropomyosin complex to change shape and allow the myosin heads to bind to the active sites of actin proteins. In rigor mortis, myosin heads continue binding with the active sites of actin proteins via adenosine diphosphate (ADP), and the muscle is unable to relax until further enzyme activity degrades the complex. Normal relaxation would occur by replacing ADP with ATP, which would destabilize the myosin-actin bond and break the cross-bridge. However, as ATP is absent, the muscle tissue must instead be broken down by enzymes (endogenous or bacterial) during decomposition. As part of the process of decomposition, the myosin heads are degraded by the enzymes, allowing the muscle contraction to release and the body to relax.
Decomposition of the myofilaments occurs between 48 and 60 hours after the peak of rigor mortis, which occurs approximately 13 hours after death.
Applications in meat industry
Rigor mortis is very important in the meat industry. The onset of rigor mortis and its resolution partially determines the tenderness of meat. If the post-slaughter meat is immediately chilled to 15 °C (59 °F), a phenomenon known as cold shortening occurs, whereby the muscle sarcomeres shrink to a third of their original length.
Cold shortening is caused by the release of stored calcium ions from the sarcoplasmic reticulum of muscle fibers, in response to the cold stimulus. The calcium ions trigger powerful muscle contraction aided by ATP molecules. To prevent cold shortening, a process known as electrical stimulation is carried out, especially in beef carcasses, immediately after slaughter and skinning. In this process, the carcass is stimulated with alternating current, causing it to contract and relax, which depletes the ATP reserve from the carcass and prevents cold shortening.
Application in forensic pathology
The degree of rigor mortis may be used in forensic pathology to determine the approximate time of death. A dead body holds its position as rigor mortis sets in. If the body is moved after death, but before rigor mortis begins, forensic techniques such as livor mortis can be applied. Rigor mortis is known as transient evidence, as the degree to which it affects a body degrades over time.
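As a purely illustrative sketch of this idea, the rough time windows quoted in this article (onset around four hours, peak around 13 hours, myofilament decomposition 48–60 hours after the peak) can be turned into a crude stage classifier. This is not a forensic method: real estimates must account for temperature, body mass and many other factors, and the figures below are only the approximate values given above.

```python
def rough_rigor_stage(hours_since_death):
    """Crude rigor mortis stage label, using only the approximate figures
    quoted in this article (onset ~4 h, peak ~13 h, myofilament decomposition
    48-60 h after the peak). Illustrative only -- not a forensic method.
    """
    if hours_since_death < 4:
        return "rigor not yet expected"
    if hours_since_death < 13:
        return "rigor developing"
    if hours_since_death < 13 + 48:
        return "rigor established, then gradually resolving"
    return "rigor likely resolved (myofilament decomposition)"

for hours in (2, 6, 20, 70):
    print(f"{hours:2d} h after death: {rough_rigor_stage(hours)}")
```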
See also
Cadaveric spasm
Notes
References
Signs of death
Latin medical words and phrases
Forensic pathology
G-force
The g-force or gravitational force equivalent is mass-specific force (force per unit mass), expressed in units of standard gravity (symbol g or g0, not to be confused with "g", the symbol for grams).
It is used for sustained accelerations that cause a perception of weight. For example, an object at rest on Earth's surface is subject to 1 g, equaling the conventional value of gravitational acceleration on Earth, about 9.8 m/s².
More transient acceleration, accompanied by significant jerk, is called shock.
When the g-force is produced by the surface of one object being pushed by the surface of another object, the reaction force to this push produces an equal and opposite force for every unit of each object's mass. The types of forces involved are transmitted through objects by interior mechanical stresses. Gravitational acceleration is one cause of an object's acceleration in relation to free fall.
The g-force experienced by an object is due to the vector sum of all gravitational and non-gravitational forces acting on an object's freedom to move. In practice, as noted, these are surface-contact forces between objects. Such forces cause stresses and strains on objects, since they must be transmitted from an object surface. Because of these strains, large g-forces may be destructive.
For example, a force of 1 g on an object sitting on the Earth's surface is caused by the mechanical force exerted in the upward direction by the ground, keeping the object from going into free fall. The upward contact force from the ground ensures that an object at rest on the Earth's surface is accelerating relative to the free-fall condition. (Free fall is the path that the object would follow when falling freely toward the Earth's center.) Stress inside the object arises from the fact that the ground contact forces are transmitted only from the point of contact with the ground.
Objects allowed to free-fall in an inertial trajectory, under the influence of gravitation only, feel no g-force – a condition known as weightlessness. Being in free fall in an inertial trajectory is colloquially called "zero-g", which is short for "zero g-force". Zero g-force conditions would occur inside an elevator falling freely toward the Earth's center (in vacuum), or (to good approximation) inside a spacecraft in Earth orbit. These are examples of coordinate acceleration (a change in velocity) without a sensation of weight.
In the absence of gravitational fields, or in directions at right angles to them, proper and coordinate accelerations are the same, and any coordinate acceleration must be produced by a corresponding g-force acceleration. An example of this is a rocket in free space: when the engines produce simple changes in velocity, those changes cause g-forces on the rocket and the passengers.
Unit and measurement
The unit of measure of acceleration in the International System of Units (SI) is m/s². However, to distinguish acceleration relative to free fall from simple acceleration (rate of change of velocity), the unit g is often used. One g is the force per unit mass due to gravity at the Earth's surface and is the standard gravity (symbol: gn), defined as 9.80665 metres per second squared, or equivalently 9.80665 newtons of force per kilogram of mass. The unit definition does not vary with location—the g-force when standing on the Moon is almost exactly one-sixth of that on Earth.
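A small conversion helper makes the unit relationship concrete; the constant is the defined value of standard gravity, and the example values are arbitrary (the 5.3 g figure is the dragster example quoted later in this article).

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, the defined value of 1 g (symbol g_n)

def g_to_ms2(g_value):
    """Convert an acceleration expressed in g to metres per second squared."""
    return g_value * STANDARD_GRAVITY

def ms2_to_g(accel_ms2):
    """Convert an acceleration in m/s^2 to multiples of standard gravity."""
    return accel_ms2 / STANDARD_GRAVITY

print(g_to_ms2(1.0))   # 9.80665 m/s^2
print(g_to_ms2(5.3))   # ~52 m/s^2 (the dragster figure quoted later)
print(ms2_to_g(9.81))  # ~1.0003 g
```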
The unit g is not one of the SI units, which uses "g" for gram. Also, "g" should not be confused with "G", which is the standard symbol for the gravitational constant. This notation is commonly used in aviation, especially in aerobatic or combat military aviation, to describe the increased forces that must be overcome by pilots in order to remain conscious and not g-LOC (g-induced loss of consciousness).
Measurement of g-force is typically achieved using an accelerometer (see the discussion of measurement using an accelerometer below). In certain cases, g-forces may be measured using suitably calibrated scales.
Acceleration and forces
The term g-"force" is technically incorrect as it is a measure of acceleration, not force. While acceleration is a vector quantity, g-force accelerations ("g-forces" for short) are often expressed as a scalar, based on the vector magnitude, with positive g-forces pointing downward (indicating upward acceleration), and negative g-forces pointing upward. Thus, a g-force is a vector of acceleration. It is an acceleration that must be produced by a mechanical force, and cannot be produced by simple gravitation. Objects acted upon only by gravitation experience (or "feel") no g-force, and are weightless.
g-forces, when multiplied by a mass upon which they act, are associated with a certain type of mechanical force in the correct sense of the term "force", and this force produces compressive stress and tensile stress. Such forces result in the operational sensation of weight, but the equation carries a sign change due to the definition of positive weight in the direction downward, so the direction of weight-force is opposite to the direction of g-force acceleration:
Weight = mass × −g-force
The reason for the minus sign is that the actual force (i.e., measured weight) on an object produced by a g-force is in the opposite direction to the sign of the g-force, since in physics, weight is not the force that produces the acceleration, but rather the equal-and-opposite reaction force to it. If the direction upward is taken as positive (the normal cartesian convention) then positive g-force (an acceleration vector that points upward) produces a force/weight on any mass, that acts downward (an example is positive-g acceleration of a rocket launch, producing downward weight). In the same way, a negative-g force is an acceleration vector downward (the negative direction on the y axis), and this acceleration downward produces a weight-force in a direction upward (thus pulling a pilot upward out of the seat, and forcing blood toward the head of a normally oriented pilot).
If a g-force (acceleration) is vertically upward and is applied by the ground (which is accelerating through space-time) or applied by the floor of an elevator to a standing person, most of the body experiences compressive stress which at any height, if multiplied by the area, is the related mechanical force, which is the product of the g-force and the supported mass (the mass above the level of support, including arms hanging down from above that level). At the same time, the arms themselves experience a tensile stress, which at any height, if multiplied by the area, is again the related mechanical force, which is the product of the g-force and the mass hanging below the point of mechanical support. The mechanical resistive force spreads from points of contact with the floor or supporting structure, and gradually decreases toward zero at the unsupported ends (the top in the case of support from below, such as a seat or the floor, the bottom for a hanging part of the body or object). With compressive force counted as negative tensile force, the rate of change of the tensile force in the direction of the g-force, per unit mass (the change between parts of the object such that the slice of the object between them has unit mass), is equal to the g-force plus the non-gravitational external forces on the slice, if any (counted positive in the direction opposite to the g-force).
For a given g-force the stresses are the same, regardless of whether this g-force is caused by mechanical resistance to gravity, or by a coordinate acceleration (change in velocity) caused by a mechanical force, or by a combination of these. Hence, for people all mechanical forces feel exactly the same whether they cause coordinate acceleration or not. For objects likewise, the question of whether they can withstand the mechanical g-force without damage is the same for any type of g-force. For example, upward acceleration (e.g., increase of speed when going up or decrease of speed when going down) on Earth feels the same as being stationary on a celestial body with a higher surface gravity. Gravitation acting alone does not produce any g-force; g-force is only produced from mechanical pushes and pulls. For a free body (one that is free to move in space) such g-forces only arise as the "inertial" path that is the natural effect of gravitation, or the natural effect of the inertia of mass, is modified. Such modification may only arise from influences other than gravitation.
Examples of important situations involving g-forces include:
The g-force acting on a stationary object resting on the Earth's surface is 1 g (upwards) and results from the resisting reaction of the Earth's surface bearing upwards equal to an acceleration of 1 g, and is equal and opposite to gravity. The number 1 is approximate, depending on location.
The g-force acting on an object in any weightless environment such as free-fall in a vacuum is 0 g.
The g-force acting on an object under acceleration can be much greater than 1 g; for example, a top-fuel dragster can exert a horizontal g-force of 5.3 when accelerating.
The g-force acting on an object under acceleration may be downwards, for example when cresting a sharp hill on a roller coaster.
If there are no other external forces than gravity, the g-force in a rocket is the thrust per unit mass. Its magnitude is equal to the thrust-to-weight ratio times g, and to the consumption of delta-v per unit time (a short numerical sketch follows this list).
In the case of a shock, e.g., a collision, the g-force can be very large during a short time.
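As a sketch of the rocket case above, the g-force during a burn can be computed as thrust divided by the current weight, i.e. the instantaneous thrust-to-weight ratio. The thrust and mass figures below are assumptions chosen only for illustration and do not describe any particular vehicle.

```python
STANDARD_GRAVITY = 9.80665  # m/s^2

def rocket_g_force(thrust_newtons, mass_kg):
    """g-force in a rocket with no external force other than gravity:
    thrust per unit mass, expressed in multiples of standard gravity
    (numerically the instantaneous thrust-to-weight ratio)."""
    return thrust_newtons / (mass_kg * STANDARD_GRAVITY)

# Assumed illustrative figures: a constant 7.5 MN of thrust while the
# vehicle's mass drops from 550 t at ignition to 200 t near burnout.
for mass_tonnes in (550, 400, 300, 200):
    g_load = rocket_g_force(7.5e6, mass_tonnes * 1000.0)
    print(f"mass {mass_tonnes:3d} t -> about {g_load:.1f} g")
```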
A classic example of negative g-force is in a fully inverted roller coaster which is accelerating (changing velocity) toward the ground. In this case, the roller coaster riders are accelerated toward the ground faster than gravity would accelerate them, and are thus pinned upside down in their seats. In this case, the mechanical force exerted by the seat causes the g-force by altering the path of the passenger downward in a way that differs from gravitational acceleration. The difference in downward motion, now faster than gravity would provide, is caused by the push of the seat, and it results in a g-force toward the ground.
All "coordinate accelerations" (or lack of them), are described by Newton's laws of motion as follows:
The Second Law of Motion, the law of acceleration, states that F = ma, meaning that a force F acting on a body is equal to the mass m of the body times its acceleration a.
The Third Law of Motion, the law of reciprocal actions, states that all forces occur in pairs, and these two forces are equal in magnitude and opposite in direction. Newton's third law of motion means that not only does gravity behave as a force acting downwards on, say, a rock held in your hand but also that the rock exerts a force on the Earth, equal in magnitude and opposite in direction.
In an airplane, the pilot's seat can be thought of as the hand holding the rock, and the pilot as the rock. When flying straight and level at 1 g, the pilot is acted upon by the force of gravity. His weight (a downward force) is 725 N. In accordance with Newton's third law, the plane and the seat underneath the pilot provide an equal and opposite force pushing upwards with a force of 725 N. This mechanical force provides the 1.0 g upward proper acceleration on the pilot, even though his velocity in the upward direction does not change (this is similar to the situation of a person standing on the ground, where the ground provides this force and this g-force).
If the pilot were suddenly to pull back on the stick and make his plane accelerate upwards at 9.8 m/s², the total g‑force on his body is 2 g, half of which comes from the seat pushing the pilot to resist gravity, and half from the seat pushing the pilot to cause his upward acceleration—a change in velocity which also is a proper acceleration because it also differs from a free-fall trajectory. Considered in the frame of reference of the plane, his body is now generating a force of 1450 N downwards into his seat, and the seat is simultaneously pushing upwards with an equal force of 1450 N.
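The pilot example can be reproduced in a few lines: the seat must supply the pilot's weight multiplied by the total vertical g-load. The 725 N weight (roughly a 74 kg pilot) is the figure used in the example above.

```python
STANDARD_GRAVITY = 9.80665  # m/s^2

def seat_force_newtons(pilot_weight_newtons, vertical_g_load):
    """Upward force the seat must exert on the pilot for a given total g-load
    (1 g = straight and level flight, 2 g = pulling up at 9.8 m/s^2, ...)."""
    return pilot_weight_newtons * vertical_g_load

pilot_weight = 725.0  # N, the weight used in the example above
print(f"pilot mass: {pilot_weight / STANDARD_GRAVITY:.1f} kg")  # about 74 kg
for g_load in (1.0, 2.0, 9.0):
    force = seat_force_newtons(pilot_weight, g_load)
    print(f"{g_load:.0f} g -> seat pushes up with about {force:.0f} N")
# 1 g -> 725 N and 2 g -> 1450 N, matching the figures in the example.
```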
Unopposed acceleration due to mechanical forces, and consequently g-force, is experienced whenever anyone rides in a vehicle, because it always causes a proper acceleration and (in the absence of gravity) also always a coordinate acceleration (where velocity changes). Whenever the vehicle changes either direction or speed, the occupants feel lateral (side to side) or longitudinal (forward and backwards) forces produced by the mechanical push of their seats.
An acceleration of 1 g means that for every second that elapses, velocity changes by approximately 9.8 metres per second. This rate of change in velocity can also be denoted as 9.8 (metres per second) per second. For example: an acceleration of 1 g equates to a rate of change in velocity of approximately 35 km/h for each second that elapses. Therefore, if an automobile is capable of braking at 1 g and is traveling at 35 km/h, it can brake to a standstill in one second and the driver will experience a deceleration of 1 g. The automobile traveling at three times this speed, 105 km/h, can brake to a standstill in three seconds.
In the case of an increase in speed from 0 to v with constant acceleration within a distance of s, this acceleration is v²/(2s).
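The braking figures above can be checked directly: a constant deceleration of a g brings a vehicle from speed v to rest in a time v/(a·g) over a distance v²/(2·a·g), and conversely the deceleration needed to stop within a distance s is v²/(2s). The speeds below are the ones used in the preceding paragraph; the 40 m stopping-distance example is an arbitrary assumption.

```python
STANDARD_GRAVITY = 9.80665  # m/s^2

def stop_time_s(speed_kmh, decel_g):
    """Time to brake to a standstill at a constant deceleration given in g."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    return v / (decel_g * STANDARD_GRAVITY)

def decel_g_for_distance(speed_kmh, distance_m):
    """Constant deceleration (in g) needed to stop within a distance s: v^2/(2s)."""
    v = speed_kmh / 3.6
    return v ** 2 / (2.0 * distance_m) / STANDARD_GRAVITY

print(f"{stop_time_s(35, 1.0):.2f} s to stop from 35 km/h at 1 g")     # ~1 s
print(f"{stop_time_s(105, 1.0):.2f} s to stop from 105 km/h at 1 g")   # ~3 s
print(f"{decel_g_for_distance(100, 40):.2f} g to stop from 100 km/h in 40 m")
```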
Preparing an object for g-tolerance (not getting damaged when subjected to a high g-force) is called g-hardening. This may apply to, e.g., instruments in a projectile shot by a gun.
Human tolerance
Human tolerances depend on the magnitude of the gravitational force, the length of time it is applied, the direction it acts, the location of application, and the posture of the body.
The human body is flexible and deformable, particularly the softer tissues. A hard slap on the face may briefly impose hundreds of g locally but not produce any real damage; a constant 16 g for a minute, however, may be deadly. When vibration is experienced, relatively low peak g-force levels can be severely damaging if they are at the resonant frequency of organs or connective tissues.
To some degree, g-tolerance can be trainable, and there is also considerable variation in innate ability between individuals. In addition, some illnesses, particularly cardiovascular problems, reduce g-tolerance.
Vertical
Aircraft pilots (in particular) sustain g-forces along the axis aligned with the spine. This causes significant variation in blood pressure along the length of the subject's body, which limits the maximum g-forces that can be tolerated.
Positive, or "upward" g-force, drives blood downward to the feet of a seated or standing person (more naturally, the feet and body may be seen as being driven by the upward force of the floor and seat, upward around the blood). Resistance to positive g-force varies. A typical person can handle about (meaning some people might pass out when riding a higher-g roller coaster, which in some cases exceeds this point) before losing consciousness, but through the combination of special g-suits and efforts to strain muscles—both of which act to force blood back into the brain—modern pilots can typically handle a sustained (see High-G training).
In aircraft particularly, vertical g-forces are often positive (force blood towards the feet and away from the head); this causes problems with the eyes and brain in particular. As positive vertical g-force is progressively increased (such as in a centrifuge) the following symptoms may be experienced:
Grey-out, where the vision loses hue, easily reversible on levelling out
Tunnel vision, where peripheral vision is progressively lost
Blackout, a loss of vision while consciousness is maintained, caused by a lack of blood flow to the head
G-LOC, a g-force induced loss of consciousness
Death, if g-forces are not quickly reduced
Resistance to "negative" or "downward" g, which drives blood to the head, is much lower. This limit is typically in the range. This condition is sometimes referred to as red out where vision is literally reddened due to the blood-laden lower eyelid being pulled into the field of vision. Negative g-force is generally unpleasant and can cause damage. Blood vessels in the eyes or brain may swell or burst under the increased blood pressure, resulting in degraded sight or even blindness.
Horizontal
The human body is better at surviving g-forces that are perpendicular to the spine. In general when the acceleration is forwards (subject essentially lying on their back, colloquially known as "eyeballs in"), a much higher tolerance is shown than when the acceleration is backwards (lying on their front, "eyeballs out") since blood vessels in the retina appear more sensitive in the latter direction.
Early experiments showed that untrained humans were able to tolerate a range of accelerations depending on the time of exposure. This ranged from as much as 20 g for less than 10 seconds, to 10 g for 1 minute, and 6 g for 10 minutes, for both eyeballs in and out. These forces were endured with cognitive facilities intact, as subjects were able to perform simple physical and communication tasks. The tests were determined not to cause long- or short-term harm, although tolerance was quite subjective, with only the most motivated non-pilots capable of completing tests. The record for peak experimental horizontal g-force tolerance is held by acceleration pioneer John Stapp, in a series of rocket sled deceleration experiments culminating in a late 1954 test in which he was brought to a stop in a little over a second from a land speed of Mach 0.9. He survived a peak "eyeballs-out" acceleration of 46.2 times the acceleration of gravity, and more than 25 g for 1.1 seconds, proving that the human body is capable of this. Stapp lived another 45 years to age 89 without any ill effects.
The highest recorded g-force experienced by a human who survived occurred during the 2003 IndyCar Series finale, the Chevy 500, at Texas Motor Speedway on 12 October 2003, when the car driven by Kenny Bräck made wheel-to-wheel contact with Tomas Scheckter's car. This immediately resulted in Bräck's car impacting the catch fence, where a peak of 214 g was recorded.
Short duration shock, impact, and jerk
Impact and mechanical shock are usually used to describe a high-kinetic-energy, short-term excitation. A shock pulse is often characterized by its peak acceleration (in multiples of g) and its pulse duration. Vibration is a periodic oscillation whose amplitude can likewise be expressed in multiples of g, together with its frequency. The dynamics of these phenomena are what distinguish them from the g-forces caused by relatively longer-term accelerations.
After a free fall from a height h followed by deceleration over a distance d during an impact, the shock on an object is (h/d)·g. For example, a stiff and compact object dropped from 1 m that impacts over a distance of 1 mm is subjected to a 1000 g deceleration.
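The h/d rule above translates directly into a few lines of code; the first case uses the example values from the paragraph, the second a softer, arbitrary stopping distance.

```python
def impact_shock_g(drop_height_m, stopping_distance_m):
    """Average deceleration, in multiples of g, for an object dropped from
    height h and brought to rest over a stopping distance d: roughly (h/d) g.
    Follows from the energy balance m*g*h = F*d, so F/(m*g) = h/d.
    """
    return drop_height_m / stopping_distance_m

print(impact_shock_g(1.0, 0.001))  # 1000 g: the 1 m drop stopped over ~1 mm
print(impact_shock_g(1.0, 0.05))   # 20 g: the same drop onto a softer, 5 cm cushion
```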
Jerk is the rate of change of acceleration. In SI units, jerk is expressed as m/s³; it can also be expressed in standard gravity per second (g/s; 1 g/s ≈ 9.81 m/s³).
Other biological responses
Recent research carried out on extremophiles in Japan involved a variety of bacteria (including E. coli as a non-extremophile control) being subject to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g. Paracoccus denitrificans was one of the bacteria that displayed not only survival but also robust cellular growth under these conditions of hyperacceleration, which are usually only to be found in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. Notably, two multicellular species, the nematodes Panagrolaimus superbus and Caenorhabditis elegans were shown to be able to tolerate 400,000 × g for 1 hour.
The research has implications on the feasibility of panspermia.
Typical examples
Measurement using an accelerometer
An accelerometer, in its simplest form, is a damped mass on the end of a spring, with some way of measuring how far the mass has moved on the spring in a particular direction, called an 'axis'.
Accelerometers are often calibrated to measure g-force along one or more axes. If a stationary, single-axis accelerometer is oriented so that its measuring axis is horizontal, its output will be 0 g, and it will continue to be 0 g if mounted in an automobile traveling at a constant velocity on a level road. When the driver presses on the brake or gas pedal, the accelerometer will register positive or negative acceleration.
If the accelerometer is rotated by 90° so that it is vertical, it will read +1 g upwards even though stationary. In that situation, the accelerometer is subject to two forces: the gravitational force and the ground reaction force of the surface it is resting on. Only the latter force can be measured by the accelerometer, due to mechanical interaction between the accelerometer and the ground. The reading is the acceleration the instrument would have if it were exclusively subject to that force.
A three-axis accelerometer will output zero‑g on all three axes if it is dropped or otherwise put into a ballistic trajectory (also known as an inertial trajectory), so that it experiences "free fall", as do astronauts in orbit (astronauts experience small tidal accelerations called microgravity, which are neglected for the sake of discussion here). Some amusement park rides can provide several seconds at near-zero g. Riding NASA's "Vomit Comet" provides near-zero g-force for about 25 seconds at a time.
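A minimal numerical sketch of the readings described above, assuming an idealized single vertical axis rather than any particular sensor API: the instrument reports proper acceleration, i.e. coordinate acceleration plus the support against gravity.

```python
G0 = 9.80665  # standard gravity, m/s^2

def vertical_reading_g(coordinate_accel_up: float) -> float:
    """Ideal reading, in g, of a vertical-axis accelerometer with its axis pointing up."""
    return (coordinate_accel_up + G0) / G0  # the support force, not gravity, is what is measured

print(vertical_reading_g(0.0))     # resting on a table: +1 g
print(vertical_reading_g(-G0))     # free fall (ballistic trajectory): 0 g
print(vertical_reading_g(2 * G0))  # accelerating upward at 2 g: reads 3 g
```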
See also
Artificial gravity
Earth's gravity
Euthanasia Coaster
Gravitational acceleration
Gravitational interaction
Hypergravity
Load factor (aeronautics)
Peak ground acceleration – g-force of earthquakes
Prone pilot
Relation between g-force and apparent weight
Shock and vibration data logger
Shock detector
Supine cockpit
Notes and references
Further reading
External links
"How Many Gs Can a Flyer Take?", October 1944, Popular Science—one of the first detailed public articles explaining this subject
Enduring a human centrifuge at the NASA Ames Research Center at Wired
HUMAN CAPABILITIES IN THE PRONE AND SUPINE POSITIONS. AN ANNOTATED BIBLIOGRAPHY
Acceleration
Gravimetry
Units of acceleration
Piezoelectricity
Piezoelectricity is the electric charge that accumulates in certain solid materials—such as crystals, certain ceramics, and biological matter such as bone, DNA, and various proteins—in response to applied mechanical stress. The word piezoelectricity means electricity resulting from pressure and latent heat. It is derived from the Greek piezein, "to squeeze or press", and elektron, "amber", an ancient source of static electricity. The German form of the word (Piezoelektricität) was coined in 1881 by the German physicist Wilhelm Gottlieb Hankel; the English word was coined in 1883.
The piezoelectric effect results from the linear electromechanical interaction between the mechanical and electrical states in crystalline materials with no inversion symmetry. The piezoelectric effect is a reversible process: materials exhibiting the piezoelectric effect also exhibit the reverse piezoelectric effect, the internal generation of a mechanical strain resulting from an applied electric field. For example, lead zirconate titanate crystals will generate measurable piezoelectricity when their static structure is deformed by about 0.1% of the original dimension. Conversely, those same crystals will change about 0.1% of their static dimension when an external electric field is applied. The inverse piezoelectric effect is used in the production of ultrasound waves.
French physicists Jacques and Pierre Curie discovered piezoelectricity in 1880. The piezoelectric effect has been exploited in many useful applications, including the production and detection of sound, piezoelectric inkjet printing, generation of high voltage electricity, as a clock generator in electronic devices, in microbalances, to drive an ultrasonic nozzle, and in ultrafine focusing of optical assemblies. It forms the basis for scanning probe microscopes that resolve images at the scale of atoms. It is used in the pickups of some electronically amplified guitars and as triggers in most modern electronic drums. The piezoelectric effect also finds everyday uses, such as generating sparks to ignite gas cooking and heating devices, torches, and cigarette lighters.
History
Discovery and early research
The pyroelectric effect, by which a material generates an electric potential in response to a temperature change, was studied by Carl Linnaeus and Franz Aepinus in the mid-18th century. Drawing on this knowledge, both René Just Haüy and Antoine César Becquerel posited a relationship between mechanical stress and electric charge; however, experiments by both proved inconclusive.
The first demonstration of the direct piezoelectric effect was in 1880 by the brothers Pierre Curie and Jacques Curie. They combined their knowledge of pyroelectricity with their understanding of the underlying crystal structures that gave rise to pyroelectricity to predict crystal behavior, and demonstrated the effect using crystals of tourmaline, quartz, topaz, cane sugar, and Rochelle salt (sodium potassium tartrate tetrahydrate). Quartz and Rochelle salt exhibited the most piezoelectricity.
The Curies, however, did not predict the converse piezoelectric effect. The converse effect was mathematically deduced from fundamental thermodynamic principles by Gabriel Lippmann in 1881. The Curies immediately confirmed the existence of the converse effect, and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals.
For the next few decades, piezoelectricity remained something of a laboratory curiosity, though it was a vital tool in the discovery of polonium and radium by Pierre and Marie Curie in 1898. More work was done to explore and define the crystal structures that exhibited piezoelectricity. This culminated in 1910 with the publication of Woldemar Voigt's Lehrbuch der Kristallphysik (Textbook on Crystal Physics), which described the 20 natural crystal classes capable of piezoelectricity, and rigorously defined the piezoelectric constants using tensor analysis.
World War I and inter-war years
The first practical application for piezoelectric devices was sonar, first developed during World War I. The superior performance of piezoelectric devices, operating at ultrasonic frequencies, superseded the earlier Fessenden oscillator. In France in 1917, Paul Langevin and his coworkers developed an ultrasonic submarine detector. The detector consisted of a transducer, made of thin quartz crystals carefully glued between two steel plates, and a hydrophone to detect the returned echo. By emitting a high-frequency pulse from the transducer, and measuring the amount of time it takes to hear an echo from the sound waves bouncing off an object, one can calculate the distance to that object.
The use of piezoelectricity in sonar, and the success of that project, created intense development interest in piezoelectric devices. Over the next few decades, new piezoelectric materials and new applications for those materials were explored and developed.
Piezoelectric devices found homes in many fields. Ceramic phonograph cartridges simplified player design, were cheap and accurate, and made record players cheaper to maintain and easier to build. The development of the ultrasonic transducer allowed for easy measurement of viscosity and elasticity in fluids and solids, resulting in huge advances in materials research. Ultrasonic time-domain reflectometers (which send an ultrasonic pulse through a material and measure reflections from discontinuities) could find flaws inside cast metal and stone objects, improving structural safety.
World War II and post-war
During World War II, independent research groups in the United States, USSR, and Japan discovered a new class of synthetic materials, called ferroelectrics, which exhibited piezoelectric constants many times higher than natural materials. This led to intense research to develop barium titanate and later lead zirconate titanate materials with specific properties for particular applications.
One significant example of the use of piezoelectric crystals was developed by Bell Telephone Laboratories. Following World War I, Frederick R. Lack, working in radio telephony in the engineering department, developed the "AT cut" crystal, a crystal that operated through a wide range of temperatures. Lack's crystal did not need the heavy accessories that previous crystals required, facilitating its use on aircraft. This development allowed Allied air forces to engage in coordinated mass attacks through the use of aviation radio.
Development of piezoelectric devices and materials in the United States was kept within the companies doing the development, mostly due to the wartime beginnings of the field, and in the interests of securing profitable patents. New materials were the first to be developed—quartz crystals were the first commercially exploited piezoelectric material, but scientists searched for higher-performance materials. Despite the advances in materials and the maturation of manufacturing processes, the United States market did not grow as quickly as Japan's did. Without many new applications, the growth of the United States' piezoelectric industry suffered.
In contrast, Japanese manufacturers shared their information, quickly overcoming technical and manufacturing challenges and creating new markets. In Japan, a temperature stable crystal cut was developed by Issac Koga. Japanese efforts in materials research created piezoceramic materials competitive to the United States materials but free of expensive patent restrictions. Major Japanese piezoelectric developments included new designs of piezoceramic filters for radios and televisions, piezo buzzers and audio transducers that can connect directly to electronic circuits, and the piezoelectric igniter, which generates sparks for small engine ignition systems and gas-grill lighters, by compressing a ceramic disc. Ultrasonic transducers that transmit sound waves through air had existed for quite some time but first saw major commercial use in early television remote controls. These transducers now are mounted on several car models as an echolocation device, helping the driver determine the distance from the car to any objects that may be in its path.
Mechanism
The nature of the piezoelectric effect is closely related to the occurrence of electric dipole moments in solids. The latter may either be induced for ions on crystal lattice sites with asymmetric charge surroundings (as in BaTiO3 and PZTs) or may directly be carried by molecular groups (as in cane sugar). The dipole density or polarization (dimensionality [C·m/m3] ) may easily be calculated for crystals by summing up the dipole moments per volume of the crystallographic unit cell. As every dipole is a vector, the dipole density P is a vector field. Dipoles near each other tend to be aligned in regions called Weiss domains. The domains are usually randomly oriented, but can be aligned using the process of poling (not the same as magnetic poling), a process by which a strong electric field is applied across the material, usually at elevated temperatures. Not all piezoelectric materials can be poled.
Of decisive importance for the piezoelectric effect is the change of polarization P when applying a mechanical stress. This might either be caused by a reconfiguration of the dipole-inducing surrounding or by re-orientation of molecular dipole moments under the influence of the external stress. Piezoelectricity may then manifest in a variation of the polarization strength, its direction or both, with the details depending on: 1. the orientation of P within the crystal; 2. crystal symmetry; and 3. the applied mechanical stress. The change in P appears as a variation of surface charge density upon the crystal faces, i.e. as a variation of the electric field extending between the faces caused by a change in dipole density in the bulk. For example, a 1 cm3 cube of quartz with 2 kN (500 lbf) of correctly applied force can produce a voltage of 12500 V.
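As a rough cross-check of the quartz figure quoted above, the open-circuit voltage can be estimated as V ≈ g·t·T, where g is the voltage coefficient, t the thickness, and T the stress. The coefficient used below is an assumed, textbook-level approximation, not a value taken from this text.

```python
# Order-of-magnitude estimate for a 1 cm quartz cube loaded with 2 kN on one face.
force = 2000.0       # N (2 kN)
area = 0.01 * 0.01   # m^2, face of a 1 cm cube
thickness = 0.01     # m, distance between the electroded faces
g_coeff = 0.05       # V*m/N, assumed approximate voltage coefficient for quartz

stress = force / area                            # N/m^2
open_circuit_voltage = g_coeff * stress * thickness
print(f"{open_circuit_voltage:.0f} V")           # ~10 kV, same order as the 12,500 V quoted
```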
Piezoelectric materials also show the opposite effect, called the converse piezoelectric effect, where the application of an electrical field creates mechanical deformation in the crystal.
Mathematical description
Linear piezoelectricity is the combined effect of
The linear electrical behavior of the material:
D = εE (in index notation, Di = εij Ej),
where D is the electric flux density (electric displacement), ε is the permittivity (free-body dielectric constant), and E is the electric field strength.
Hooke's law for linear elastic materials:
S = sT (in index notation, Sij = sijkl Tkl),
where S is the linearized strain, s is the compliance under short-circuit conditions, and T is the stress. The strain is related to the displacement vector u by Sij = ½(∂ui/∂xj + ∂uj/∂xi).
These may be combined into so-called coupled equations, of which the strain-charge form is:
{S} = [sE]{T} + [dt]{E}
{D} = [d]{T} + [εT]{E}
where [d] is the piezoelectric tensor and the superscript t stands for its transpose; because the same coupling coefficients govern both effects, [d] and its transpose describe the direct and converse responses respectively.
In matrix form,
where [d] is the matrix for the direct piezoelectric effect and [dt] is the matrix for the converse piezoelectric effect. The superscript E indicates a zero, or constant, electric field; the superscript T indicates a zero, or constant, stress field; and the superscript t stands for transposition of a matrix.
Notice that the third order tensor maps vectors into symmetric matrices. There are no non-trivial rotation-invariant tensors that have this property, which is why there are no isotropic piezoelectric materials.
The strain-charge for a material of the 4mm (C4v) crystal class (such as a poled piezoelectric ceramic such as tetragonal PZT or BaTiO3) as well as the 6mm crystal class may also be written as (ANSI IEEE 176):
where the first equation represents the relationship for the converse piezoelectric effect and the latter for the direct piezoelectric effect.
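For reference, a standard way to display the piezoelectric matrix [d] appearing in these relations for the 4mm and 6mm classes (a sketch of the textbook result, not quoted from a specific standard) is:

```latex
[d] =
\begin{bmatrix}
0 & 0 & 0 & 0 & d_{15} & 0\\
0 & 0 & 0 & d_{15} & 0 & 0\\
d_{31} & d_{31} & d_{33} & 0 & 0 & 0
\end{bmatrix}
```

Only three independent coefficients (d15, d31, d33) therefore enter the strain-charge relations for these crystal classes.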
Although the above equations are the most used form in the literature, some comments about the notation are necessary. Generally, D and E are vectors, that is, Cartesian tensors of rank 1; and permittivity ε is a Cartesian tensor of rank 2. Strain and stress are, in principle, also rank-2 tensors. But conventionally, because strain and stress are symmetric tensors, their subscripts can be relabeled in the following fashion: 11 → 1; 22 → 2; 33 → 3; 23 → 4; 13 → 5; 12 → 6. (Different conventions may be used by different authors in the literature. For example, some use 12 → 4; 23 → 5; 31 → 6 instead.) That is why S and T appear to have the "vector form" of six components. Consequently, the fourth-rank compliance s appears as a 6-by-6 matrix and the third-rank piezoelectric tensor d appears as a 3-by-6 matrix. Such a relabeled notation is often called Voigt notation. Whether the shear strain components S4, S5, S6 are tensor components or engineering strains is another question. In the equations above, they must be engineering strains for the 6,6 coefficient of the compliance matrix to be written as shown, i.e., 2(sE11 − sE12). Engineering shear strains are double the value of the corresponding tensor shear, such as S6 = 2S12 and so on. This also means that s66 = 1/G12, where G12 is the shear modulus.
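A small illustration of the Voigt relabelling just described; the helper names are hypothetical, not part of any standard library.

```python
# Map symmetric tensor index pairs to Voigt indices, and show the engineering-shear doubling.
VOIGT = {(1, 1): 1, (2, 2): 2, (3, 3): 3, (2, 3): 4, (1, 3): 5, (1, 2): 6}

def voigt_index(i: int, j: int) -> int:
    """Voigt index for a symmetric tensor index pair (i, j)."""
    return VOIGT[(min(i, j), max(i, j))]

def engineering_strain(tensor_strain: float, i: int, j: int) -> float:
    """Engineering strain component: shear components are doubled."""
    return tensor_strain if i == j else 2.0 * tensor_strain

print(voigt_index(2, 3))               # -> 4
print(engineering_strain(1e-4, 1, 2))  # S6 = 2*S12 -> 2e-4
```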
In total, there are four piezoelectric coefficients, dij, eij, gij, and hij defined as follows:
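A standard way of stating these definitions, given here as a reference sketch using thermodynamic partial derivatives (the superscript names the quantity held constant):

```latex
d_{ij} = \left(\frac{\partial D_i}{\partial T_j}\right)^{\!E} = \left(\frac{\partial S_j}{\partial E_i}\right)^{\!T}
\qquad
e_{ij} = \left(\frac{\partial D_i}{\partial S_j}\right)^{\!E} = -\left(\frac{\partial T_j}{\partial E_i}\right)^{\!S}
```
```latex
g_{ij} = -\left(\frac{\partial E_i}{\partial T_j}\right)^{\!D} = \left(\frac{\partial S_j}{\partial D_i}\right)^{\!T}
\qquad
h_{ij} = -\left(\frac{\partial E_i}{\partial S_j}\right)^{\!D} = -\left(\frac{\partial T_j}{\partial D_i}\right)^{\!S}
```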
where the first set of four terms corresponds to the direct piezoelectric effect and the second set of four terms corresponds to the converse piezoelectric effect. The equality between the direct piezoelectric tensor and the transpose of the converse piezoelectric tensor originates from the Maxwell relations of thermodynamics. For those piezoelectric crystals for which the polarization is of the crystal-field induced type, a formalism has been worked out that allows for the calculation of piezoelectrical coefficients dij from electrostatic lattice constants or higher-order Madelung constants.
Crystal classes
Of the 32 crystal classes, 21 are non-centrosymmetric (not having a centre of symmetry), and of these, 20 exhibit direct piezoelectricity (the 21st is the cubic class 432). Ten of these represent the polar crystal classes, which show a spontaneous polarization without mechanical stress due to a non-vanishing electric dipole moment associated with their unit cell, and which exhibit pyroelectricity. If the dipole moment can be reversed by applying an external electric field, the material is said to be ferroelectric.
The 10 polar (pyroelectric) crystal classes: 1, 2, m, mm2, 4, , 3, 3m, 6, .
The other 10 piezoelectric crystal classes: 222, , 422, 2m, 32, , 622, 2m, 23, 3m.
For polar crystals, for which P ≠ 0 holds without applying a mechanical load, the piezoelectric effect manifests itself by changing the magnitude or the direction of P or both.
For the nonpolar but piezoelectric crystals, on the other hand, a polarization P different from zero is only elicited by applying a mechanical load. For them the stress can be imagined to transform the material from a nonpolar crystal class (P = 0) to a polar one, having P ≠ 0.
Materials
Many materials exhibit piezoelectricity.
Crystalline materials
Langasite (La3Ga5SiO14) – a quartz-analogous crystal
Gallium orthophosphate (GaPO4) – a quartz-analogous crystal
Lithium niobate (LiNbO3)
Lithium tantalate (LiTaO3)
Quartz
Berlinite (AlPO4) – a rare phosphate mineral that is structurally identical to quartz
Rochelle salt
Topaz – piezoelectricity in topaz can probably be attributed to ordering of the (F,OH) in its lattice, which is otherwise centrosymmetric: orthorhombic bipyramidal (mmm). Topaz has anomalous optical properties, which are attributed to such ordering.
Tourmaline-group minerals
Lead titanate (PbTiO3) – although it occurs in nature as mineral macedonite, it is synthesized for research and applications.
Ceramics
Ceramics with randomly oriented grains must be ferroelectric to exhibit piezoelectricity. The occurrence of abnormal grain growth (AGG) in sintered polycrystalline piezoelectric ceramics has detrimental effects on the piezoelectric performance in such systems and should be avoided, as the microstructure in piezoceramics exhibiting AGG tends to consist of few abnormally large elongated grains in a matrix of randomly oriented finer grains. Macroscopic piezoelectricity is possible in textured polycrystalline non-ferroelectric piezoelectric materials, such as AlN and ZnO.
The families of ceramics with perovskite, tungsten-bronze, and related structures exhibit piezoelectricity:
Lead zirconate titanate ( with 0 ≤ x ≤ 1) – more commonly known as PZT, the most common piezoelectric ceramic in use today.
Potassium niobate (KNbO3)
Sodium tungstate (Na2WO3)
Ba2NaNb5O5
Pb2KNb5O15
Zinc oxide (ZnO) – Wurtzite structure. While single crystals of ZnO are piezoelectric and pyroelectric, polycrystalline (ceramic) ZnO with randomly oriented grains exhibits neither piezoelectric nor pyroelectric effect. Not being ferroelectric, polycrystalline ZnO cannot be poled like barium titanate or PZT. Ceramics and polycrystalline thin films of ZnO may exhibit macroscopic piezoelectricity and pyroelectricity only if they are textured (grains are preferentially oriented), such that the piezoelectric and pyroelectric responses of all individual grains do not cancel. This is readily accomplished in polycrystalline thin films.
Lead-free piezoceramics
Sodium potassium niobate ((K,Na)NbO3). This material is also known as NKN or KNN. In 2004, a group of Japanese researchers led by Yasuyoshi Saito discovered a sodium potassium niobate composition with properties close to those of PZT, including a high TC. Certain compositions of this material have been shown to retain a high mechanical quality factor (Qm ≈ 900) with increasing vibration levels, whereas the mechanical quality factor of hard PZT degrades in such conditions. This fact makes NKN a promising replacement for high power resonance applications, such as piezoelectric transformers.
Bismuth ferrite (BiFeO3) – a promising candidate for the replacement of lead-based ceramics.
Sodium niobate (NaNbO3)
Barium titanate (BaTiO3) – Barium titanate was the first piezoelectric ceramic discovered.
Bismuth titanate (Bi4Ti3O12)
Sodium bismuth titanate (NaBi(TiO3)2)
The fabrication of lead-free piezoceramics poses multiple challenges, from an environmental standpoint to the ability to replicate the properties of their lead-based counterparts. Removing the lead component of the piezoceramic decreases the risk of toxicity to humans, but the mining and extraction of the raw materials can be harmful to the environment. Analysis of the environmental profile of PZT versus sodium potassium niobate (NKN or KNN) shows that across the four indicators considered (primary energy consumption, toxicological footprint, eco-indicator 99, and input-output upstream greenhouse gas emissions), KNN is actually more harmful to the environment. Most of the concerns with KNN, specifically its Nb2O5 component, arise in the early phase of its life cycle, before it reaches manufacturers. Since the harmful impacts are concentrated in these early phases, some actions can be taken to minimize the effects. Returning the land to a state as close as possible to its original form after Nb2O5 mining, via dam deconstruction or the replacement of stockpiled usable soil, is a known mitigation for any extraction event. For minimizing air-quality effects, modeling and simulation still need to be carried out to fully understand what mitigation methods are required. The extraction of lead-free piezoceramic components has not yet grown to a significant scale, but from early analysis, experts encourage caution regarding environmental effects.
Fabricating lead-free piezoceramics faces the challenge of maintaining the performance and stability of their lead-based counterparts. In general, the main fabrication challenge is creating the "morphotropic phase boundaries (MPBs)" that provide the materials with their stable piezoelectric properties without introducing the "polymorphic phase boundaries (PPBs)" that decrease the temperature stability of the material. New phase boundaries are created by varying additive concentrations so that the phase transition temperatures converge at room temperature. The introduction of the MPB improves piezoelectric properties, but if a PPB is introduced, the material becomes negatively affected by temperature. Research is ongoing to control the type of phase boundaries that are introduced through phase engineering, diffusing phase transitions, domain engineering, and chemical modification.
III–V and II–VI semiconductors
A piezoelectric potential can be created in any bulk or nanostructured semiconductor crystal having non central symmetry, such as the Group III–V and II–VI materials, due to polarization of ions under applied stress and strain. This property is common to both the zincblende and wurtzite crystal structures. To first order, there is only one independent piezoelectric coefficient in zincblende, called e14, coupled to shear components of the strain. In wurtzite, there are instead three independent piezoelectric coefficients: e31, e33 and e15.
The semiconductors where the strongest piezoelectricity is observed are those commonly found in the wurtzite structure, i.e. GaN, InN, AlN and ZnO (see piezotronics).
Since 2006, there have also been a number of reports of strong non-linear piezoelectric effects in polar semiconductors. Such effects are generally recognized to be important, if not of the same order of magnitude as the first-order approximation.
Polymers
The piezo-response of polymers is not as high as the response for ceramics; however, polymers hold properties that ceramics do not. Over the last few decades, non-toxic, piezoelectric polymers have been studied and applied due to their flexibility and smaller acoustical impedance. Other properties that make these materials significant include their biocompatibility, biodegradability, low cost, and low power consumption compared to other piezo-materials (ceramics, etc.). Piezoelectric polymers and non-toxic polymer composites can be used given their different physical properties.
Piezoelectric polymers can be classified by bulk polymers, voided charged polymers ("piezoelectrets"), and polymer composites. A piezo-response observed by bulk polymers is mostly due to its molecular structure. There are two types of bulk polymers: amorphous and semi-crystalline. Examples of semi-crystalline polymers are polyvinylidene fluoride (PVDF) and its copolymers, polyamides, and parylene-C. Non-crystalline polymers, such as polyimide and polyvinylidene chloride (PVDC), fall under amorphous bulk polymers. Voided charged polymers exhibit the piezoelectric effect due to charge induced by poling of a porous polymeric film. Under an electric field, charges form on the surface of the voids forming dipoles. Electric responses can be caused by any deformation of these voids. The piezoelectric effect can also be observed in polymer composites by integrating piezoelectric ceramic particles into a polymer film. A polymer does not have to be piezo-active to be an effective material for a polymer composite. In this case, a material could be made up of an inert matrix with a separate piezo-active component.
PVDF exhibits piezoelectricity several times greater than quartz. The piezo-response observed from PVDF is about 20–30 pC/N, roughly 5–50 times less than that of the piezoelectric ceramic lead zirconate titanate (PZT). The thermal stability of the piezoelectric effect of polymers in the PVDF family (i.e. vinylidene fluoride co-poly trifluoroethylene) extends up to 125 °C. Some applications of PVDF are pressure sensors, hydrophones, and shock wave sensors.
Due to their flexibility, piezoelectric composites have been proposed as energy harvesters and nanogenerators. In 2018, it was reported by Zhu et al. that a piezoelectric response of about 17 pC/N could be obtained from PDMS/PZT nanocomposite at 60% porosity. Another PDMS nanocomposite was reported in 2017, in which BaTiO3 was integrated into PDMS to make a stretchable, transparent nanogenerator for self-powered physiological monitoring. In 2016, polar molecules were introduced into a polyurethane foam in which high responses of up to 244 pC/N were reported.
Other materials
Most materials exhibit at least weak piezoelectric responses. Trivial examples include sucrose (table sugar), DNA, and viral proteins, including those from bacteriophages. An actuator based on wood fibres (cellulose fibres) has been reported. D33 responses for cellular polypropylene are around 200 pC/N. Some applications of cellular polypropylene are musical key pads, microphones, and ultrasound-based echolocation systems. Recently, single amino acids such as β-glycine have also displayed high piezoelectricity (178 pm V−1) compared to other biological materials.
Ionic liquids were recently identified as the first piezoelectric liquid.
Application
High voltage and power sources
Direct piezoelectricity of some substances, like quartz, can generate potential differences of thousands of volts.
The best-known application is the electric cigarette lighter: pressing the button causes a spring-loaded hammer to hit a piezoelectric crystal, producing a sufficiently high-voltage electric current that flows across a small spark gap, thus heating and igniting the gas. The portable sparkers used to ignite gas stoves work the same way, and many types of gas burners now have built-in piezo-based ignition systems.
A similar idea is being researched by DARPA in the United States in a project called energy harvesting, which includes an attempt to power battlefield equipment by piezoelectric generators embedded in soldiers' boots. However, such energy-harvesting sources impose a load on the body, and DARPA's effort to harness 1–2 watts from continuous shoe impact while walking was abandoned due to impracticality and the discomfort from the additional energy expended by a person wearing the shoes. Other energy harvesting ideas include harvesting the energy from human movements in train stations or other public places (the "Crowd Farm" concept) and converting a dance floor to generate electricity. Vibrations from industrial machinery can also be harvested by piezoelectric materials to charge batteries for backup supplies or to power low-power microprocessors and wireless radios.
A piezoelectric transformer is a type of AC voltage multiplier. Unlike a conventional transformer, which uses magnetic coupling between input and output, the piezoelectric transformer uses acoustic coupling. An input voltage is applied across a short length of a bar of piezoceramic material such as PZT, creating an alternating stress in the bar by the inverse piezoelectric effect and causing the whole bar to vibrate. The vibration frequency is chosen to be the resonant frequency of the block, typically in the 100 kilohertz to 1 megahertz range. A higher output voltage is then generated across another section of the bar by the piezoelectric effect. Step-up ratios of more than 1,000:1 have been demonstrated. An extra feature of this transformer is that, by operating it above its resonant frequency, it can be made to appear as an inductive load, which is useful in circuits that require a controlled soft start. These devices can be used in DC–AC inverters to drive cold cathode fluorescent lamps. Piezo transformers are some of the most compact high voltage sources.
Sensors
The principle of operation of a piezoelectric sensor is that a physical dimension, transformed into a force, acts on two opposing faces of the sensing element. Depending on the design of a sensor, different "modes" to load the piezoelectric element can be used: longitudinal, transversal and shear.
Detection of pressure variations in the form of sound is the most common sensor application, e.g. piezoelectric microphones (sound waves bend the piezoelectric material, creating a changing voltage) and piezoelectric pickups for acoustic-electric guitars. A piezo sensor attached to the body of an instrument is known as a contact microphone.
Piezoelectric sensors especially are used with high frequency sound in ultrasonic transducers for medical imaging and also industrial nondestructive testing (NDT).
For many sensing techniques, the sensor can act as both a sensor and an actuator—often the term transducer is preferred when the device acts in this dual capacity, but most piezo devices have this property of reversibility whether it is used or not. Ultrasonic transducers, for example, can inject ultrasound waves into the body, receive the returned wave, and convert it to an electrical signal (a voltage). Most medical ultrasound transducers are piezoelectric.
In addition to those mentioned above, various sensor and transducer applications include:
Piezoelectric elements are also used in the detection and generation of sonar waves.
Piezoelectric materials are used in single-axis and dual-axis tilt sensing.
Power monitoring in high power applications (e.g. medical treatment, sonochemistry and industrial processing).
Piezoelectric microbalances are used as very sensitive chemical and biological sensors.
Piezoelectrics are sometimes used in strain gauges; more commonly, however, a piezoresistive element is used.
A piezoelectric transducer was used in the penetrometer instrument on the Huygens Probe.
Piezoelectric transducers are used in electronic drum pads to detect the impact of the drummer's sticks, and to detect muscle movements in medical acceleromyography.
Automotive engine management systems use piezoelectric transducers to detect engine knock (knock sensor, KS), also known as detonation, at certain frequencies. A piezoelectric transducer is also used in fuel injection systems to measure manifold absolute pressure (MAP sensor) to determine engine load and, ultimately, the fuel injectors' on-time in milliseconds.
Ultrasonic piezo sensors are used in the detection of acoustic emissions in acoustic emission testing.
Piezoelectric transducers can be used in transit-time ultrasonic flow meters.
Actuators
As very high electric fields correspond to only tiny changes in the width of the crystal, this width can be changed with better-than-μm precision, making piezo crystals the most important tool for positioning objects with extreme accuracy—thus their use in actuators.
Multilayer ceramics, built from very thin layers, allow high electric fields to be reached with comparatively low drive voltages. These ceramics are used within two kinds of actuators: direct piezo actuators and amplified piezoelectric actuators. While a direct actuator's stroke is generally well below a millimetre, amplified piezo actuators can reach millimetre strokes.
Loudspeakers: Voltage is converted to mechanical movement of a metallic diaphragm.
Ultrasonic cleaning usually uses piezoelectric elements to produce intense sound waves in liquid.
Piezoelectric motors: Piezoelectric elements apply a directional force to an axle, causing it to rotate. Due to the extremely small distances involved, the piezo motor is viewed as a high-precision replacement for the stepper motor.
Piezoelectric elements can be used in laser mirror alignment, where their ability to move a large mass (the mirror mount) over microscopic distances is exploited to electronically align some laser mirrors. By precisely controlling the distance between mirrors, the laser electronics can accurately maintain optical conditions inside the laser cavity to optimize the beam output.
A related application is the acousto-optic modulator, a device that scatters light off soundwaves in a crystal, generated by piezoelectric elements. This is useful for fine-tuning a laser's frequency.
Atomic force microscopes and scanning tunneling microscopes employ converse piezoelectricity to keep the sensing needle close to the specimen.
Inkjet printers: On many inkjet printers, piezoelectric crystals are used to drive the ejection of ink from the inkjet print head towards the paper.
Diesel engines: High-performance common rail diesel engines use piezoelectric fuel injectors, first developed by Robert Bosch GmbH, instead of the more common solenoid valve devices.
Active vibration control using amplified actuators.
X-ray shutters.
XY stages for micro scanning used in infrared cameras.
Moving the patient precisely inside active CT and MRI scanners where the strong radiation or magnetism precludes electric motors.
Crystal earpieces are sometimes used in old or low power radios.
High-intensity focused ultrasound for localized heating or creating a localized cavitation can be achieved, for example, in patient's body or in an industrial chemical process.
Refreshable braille display. A small crystal is expanded by applying a current that moves a lever to raise individual braille cells.
Piezoelectric actuator. A single crystal or a number of crystals are expanded by applying a voltage for moving and controlling a mechanism or system.
Piezoelectric actuators are used for fine servo positioning in hard disc drives.
Frequency standard
The piezoelectrical properties of quartz are useful as a standard of frequency.
Quartz clocks employ a crystal oscillator made from a quartz crystal that uses a combination of both direct and converse piezoelectricity to generate a regularly timed series of electrical pulses that is used to mark time. The quartz crystal (like any elastic material) has a precisely defined natural frequency (caused by its shape and size) at which it prefers to oscillate, and this is used to stabilize the frequency of a periodic voltage applied to the crystal.
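A hedged illustration of how such an oscillator marks time: the 32,768 Hz figure below is a common watch-crystal frequency assumed for the example, not a value stated in this text.

```python
# A tuning-fork quartz crystal at 2**15 Hz divided down to one pulse per second.
crystal_hz = 32_768      # = 2**15
divider_stages = 15      # each binary divider stage halves the frequency

output_hz = crystal_hz / 2 ** divider_stages
print(output_hz)         # 1.0 -> one tick per second
```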
The same principle is used in some radio transmitters and receivers, and in computers where it creates a clock pulse. Both of these usually use a frequency multiplier to reach gigahertz ranges.
Piezoelectric motors
Types of piezoelectric motor include:
The ultrasonic motor used for auto-focus in reflex cameras
Inchworm motors for linear motion
Rectangular four-quadrant motors with high power density (2.5 W/cm3) and speed ranging from 10 nm/s to 800 mm/s.
Stepping piezo motor, using stick-slip effect.
Aside from the stepping stick-slip motor, all these motors work on the same principle. Driven by dual orthogonal vibration modes with a phase difference of 90°, the contact point between two surfaces vibrates in an elliptical path, producing a frictional force between the surfaces. Usually, one surface is fixed, causing the other to move. In most piezoelectric motors, the piezoelectric crystal is excited by a sine wave signal at the resonant frequency of the motor. Using the resonance effect, a much lower voltage can be used to produce a high vibration amplitude.
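A minimal sketch of the elliptical contact-point motion described above: two orthogonal vibration modes driven 90 degrees out of phase. The drive frequency and amplitudes are illustrative assumptions.

```python
import math

def contact_point(t: float, freq_hz: float, ax: float, ay: float) -> tuple:
    """Position of the contact point for two orthogonal modes in quadrature."""
    phase = 2.0 * math.pi * freq_hz * t
    return ax * math.cos(phase), ay * math.sin(phase)  # traces an ellipse

# Sample one vibration period at an assumed 40 kHz drive frequency.
points = [contact_point(n / (40_000 * 16), 40_000, 1e-6, 0.5e-6) for n in range(16)]
print(points[0], points[4])  # a quarter period apart, the motion has advanced 90 degrees
```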
A stick-slip motor works using the inertia of a mass and the friction of a clamp. Such motors can be very small. Some are used for camera sensor displacement, thus allowing an anti-shake function.
Reduction of vibrations and noise
Different teams of researchers have been investigating ways to reduce vibrations in materials by attaching piezo elements to the material. When the material is bent by a vibration in one direction, the vibration-reduction system responds to the bend and sends electric power to the piezo element to bend in the other direction. Future applications of this technology are expected in cars and houses to reduce noise. Further applications to flexible structures, such as shells and plates, have also been studied for nearly three decades.
In a demonstration at the Material Vision Fair in Frankfurt in November 2005, a team from TU Darmstadt in Germany showed several panels that were hit with a rubber mallet, and the panel with the piezo element immediately stopped swinging.
Piezoelectric ceramic fiber technology is being used as an electronic damping system on some HEAD tennis rackets.
All piezo transducers have a fundamental resonant frequency and many harmonic frequencies. Piezo-driven drop-on-demand fluid systems are sensitive to extra vibrations in the piezo structure that must be reduced or eliminated. One inkjet company, Howtek, Inc., solved this problem by replacing glass (rigid) inkjet nozzles with Tefzel (soft) inkjet nozzles. This novel idea popularized single-nozzle inkjets, which are now used in 3D inkjet printers that can run for years if kept clean inside and not overheated (Tefzel creeps under pressure at very high temperatures).
Infertility treatment
In people with previous total fertilization failure, piezoelectric activation of oocytes together with intracytoplasmic sperm injection (ICSI) seems to improve fertilization outcomes.
Surgery
Piezosurgery is a minimally invasive technique that aims to cut a target tissue with little damage to neighboring tissues. For example, Hoigne et al. uses frequencies in the range 25–29 kHz, causing microvibrations of 60–210 μm. It has the ability to cut mineralized tissue without cutting neurovascular tissue and other soft tissue, thereby maintaining a blood-free operating area, better visibility and greater precision.
Potential applications
In 2015, Cambridge University researchers working in conjunction with researchers from the National Physical Laboratory and Cambridge-based dielectric antenna company Antenova Ltd, using thin films of piezoelectric materials found that at a certain frequency, these materials become not only efficient resonators, but efficient radiators as well, meaning that they can potentially be used as antennas. The researchers found that by subjecting the piezoelectric thin films to an asymmetric excitation, the symmetry of the system is similarly broken, resulting in a corresponding symmetry breaking of the electric field, and the generation of electromagnetic radiation.
Several attempts at the macro-scale application of the piezoelectric technology have emerged to harvest kinetic energy from walking pedestrians.
In this case, locating high-traffic areas is critical for optimizing the energy-harvesting efficiency, and the orientation of the tile pavement also significantly affects the total amount of harvested energy. A density flow evaluation is recommended to qualitatively evaluate the piezoelectric power-harvesting potential of the considered area based on the number of pedestrian crossings per unit time. In X. Li's study, the potential application of a commercial piezoelectric energy harvester in a central hub building at Macquarie University in Sydney, Australia is examined and discussed. Optimization of the piezoelectric tile deployment is presented according to the frequency of pedestrian mobility, and a model is developed in which 3.1% of the total floor area, with the highest pedestrian mobility, is paved with piezoelectric tiles. The modelling results indicate that the total annual energy harvesting potential for the proposed optimized tile pavement model is estimated at 1.1 MWh/year, which would be sufficient to meet close to 0.5% of the annual energy needs of the building. In Israel, there is a company which has installed piezoelectric materials under a busy highway. The energy generated is enough to power street lights, billboards, and signs.
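A back-of-envelope consistency check of the figures quoted above; the building's total demand is inferred from the stated percentage, not given directly in the study.

```python
annual_harvest_mwh = 1.1
share_of_building_needs = 0.005  # "close to 0.5%"

implied_building_demand_mwh = annual_harvest_mwh / share_of_building_needs
print(f"{implied_building_demand_mwh:.0f} MWh/year")  # ~220 MWh/year implied total demand
```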
Tire company Goodyear has plans to develop an electricity generating tire which has piezoelectric material lined inside it. As the tire moves, it deforms and thus electricity is generated.
The efficiency of a hybrid photovoltaic cell that contains piezoelectric materials can be increased simply by placing it near a source of ambient noise or vibration. The effect was demonstrated with organic cells using zinc oxide nanotubes. The electricity generated by the piezoelectric effect itself is a negligible percentage of the overall output. Sound levels as low as 75 decibels improved efficiency by up to 50%. Efficiency peaked at 10 kHz, the resonant frequency of the nanotubes. The electrical field set up by the vibrating nanotubes interacts with electrons migrating from the organic polymer layer. This process decreases the likelihood of recombination, in which electrons are energized but settle back into a hole instead of migrating to the electron-accepting ZnO layer.
See also
Charge amplifier
Electret
Electronic component
Electrostriction
Flexoelectricity
Magnetostriction
Photoelectric effect
Piezoelectric speaker
Piezoluminescence
Piezomagnetism
Piezoresistive effect
Piezosurgical
Quartz crystal microbalance
Sonomicrometry
Surface acoustic wave
Thermoelectric generator
Triboluminescence
References
Further reading
EN 50324 (2002) Piezoelectric properties of ceramic materials and components (3 parts)
ANSI-IEEE 176 (1987) Standard on Piezoelectricity
IEEE 177 (1976) Standard Definitions & Methods of Measurement for Piezoelectric Vibrators
IEC 444 (1973) Basic method for the measurement of resonance freq & equiv series resistance of quartz crystal units by zero-phase technique in a pi-network
IEC 302 (1969) Standard Definitions & Methods of Measurement for Piezoelectric Vibrators Operating over the Freq Range up to 30 MHz
External links
Piezoelectric cellular polymer films: Fabrication, properties and applications
Piezo motor based microdrive for neural signal recording
Research on new Piezoelectric materials
Piezo Equations
Piezo in Medical Design
Video demonstration of Piezoelectricity
DoITPoMS Teaching and Learning Package – Piezoelectric Materials
PiezoMat.org – Online database for piezoelectric materials, their properties, and applications
Piezo Motor Types
Piezo-Theory & Applications
Condensed matter physics
Electrical phenomena
Energy conversion
Transducers
Energy harvesting
1880s neologisms
International Standard Atmosphere
The International Standard Atmosphere (ISA) is a static atmospheric model of how the pressure, temperature, density, and viscosity of the Earth's atmosphere change over a wide range of altitudes or elevations. It has been established to provide a common reference for temperature and pressure and consists of tables of values at various altitudes, plus some formulas by which those values were derived. The International Organization for Standardization (ISO) publishes the ISA as an international standard, ISO 2533:1975. Other standards organizations, such as the International Civil Aviation Organization (ICAO) and the United States Government, publish extensions or subsets of the same atmospheric model under their own standards-making authority.
Description
The ISA mathematical model divides the atmosphere into layers with an assumed linear distribution of absolute temperature T against geopotential altitude h. The other two values (pressure P and density ρ) are computed by simultaneously solving the equations resulting from:
the vertical pressure gradient resulting from hydrostatic balance, which relates the rate of change of pressure with geopotential altitude:
dP/dh = −ρ·g, and
the ideal gas law in molar form, which relates pressure, density, and temperature:
P = ρ·Rspecific·T
at each geopotential altitude, where g is the standard acceleration of gravity and Rspecific is the specific gas constant for dry air (287.0528 J·kg−1·K−1). The solution is given by the barometric formula.
Air density must be calculated in order to solve for the pressure, and is used in calculating dynamic pressure for moving vehicles. Dynamic viscosity is an empirical function of temperature, and kinematic viscosity is calculated by dividing dynamic viscosity by the density.
Thus the standard consists of a tabulation of values at various altitudes, plus some formulas by which those values were derived. To accommodate the lowest points on Earth, the model starts at a base geopotential altitude below sea level, with the standard temperature there set at 19 °C. With a temperature lapse rate of −6.5 °C (−11.7 °F) per km (roughly −2 °C (−3.6 °F) per 1,000 ft), the table interpolates to the standard mean sea level values of 15 °C temperature, 101,325 Pa (1 atm) pressure, and a density of 1.225 kg/m3. The tropospheric tabulation continues to 11 km (36,089 ft), where the temperature has fallen to −56.5 °C, the pressure to about 22.6 kPa, and the density to about 0.36 kg/m3. Between 11 km and 20 km, the temperature remains constant.
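A minimal sketch, not part of the standard itself, of how the tabulated values follow from the relations above for the first two layers, using only constants quoted in this section; the function name and layer limits are illustrative.

```python
import math

G0 = 9.80665                 # m/s^2
R = 287.0528                 # J/(kg*K), specific gas constant for dry air
T0, P0 = 288.15, 101_325.0   # sea-level standard temperature (K) and pressure (Pa)
LAPSE = -0.0065              # K per metre of geopotential altitude in the troposphere
H_TROPOPAUSE = 11_000.0      # m

def isa(h: float):
    """Temperature (K), pressure (Pa) and density (kg/m^3) at geopotential altitude h."""
    if h <= H_TROPOPAUSE:
        t = T0 + LAPSE * h
        p = P0 * (t / T0) ** (-G0 / (LAPSE * R))
    else:  # 11-20 km: temperature constant, pressure falls exponentially
        t11 = T0 + LAPSE * H_TROPOPAUSE
        p11 = P0 * (t11 / T0) ** (-G0 / (LAPSE * R))
        t = t11
        p = p11 * math.exp(-G0 * (h - H_TROPOPAUSE) / (R * t11))
    return t, p, p / (R * t)

print(isa(0.0))       # (288.15, 101325.0, ~1.225)
print(isa(11_000.0))  # about (216.65, 22630, 0.364)
```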
In the standard's tables, the lapse rate is given per kilometre of geopotential altitude; a positive lapse rate (λ > 0) means that temperature increases with height.
In these tables, geopotential altitude is calculated from a mathematical model that adjusts the altitude to include the variation of gravity with height, while geometric altitude is the standard direct vertical distance above mean sea level (MSL).
The equation that relates the two altitudes (where z is the geometric altitude, h is the geopotential altitude, and r0 = 6,356,766 m in this model) is:
h = r0·z / (r0 + z)
Note that the lapse rates cited in the standard are given as °C per kilometre of geopotential altitude, not geometric altitude.
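A short sketch of the geometric–geopotential conversion stated above, with r0 as given in the text; the function names are illustrative.

```python
R0 = 6_356_766.0  # m, radius constant used by this model

def geopotential_from_geometric(z: float) -> float:
    return R0 * z / (R0 + z)

def geometric_from_geopotential(h: float) -> float:
    return R0 * h / (R0 - h)

print(geopotential_from_geometric(11_000.0))  # ~10,981 m geopotential
print(geometric_from_geopotential(11_000.0))  # ~11,019 m geometric
```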
The ISA model is based on average conditions at mid latitudes, as determined by the ISO's TC 20/SC 6 technical committee. It has been revised from time to time since the middle of the 20th century.
Use at non-standard day conditions
The ISA models a hypothetical standard day to allow a reproducible engineering reference for calculation and testing of engine and vehicle performance at various altitudes. It does not provide a rigorous meteorological model of actual atmospheric conditions (for example, changes in barometric pressure due to wind conditions). Neither does it account for humidity effects; air is assumed to be dry and clean and of constant composition. Humidity effects are accounted for in vehicle or engine analysis by adding water vapor to the thermodynamic state of the air after obtaining the pressure and density from the standard atmosphere model.
Non-standard (hot or cold) days are modeled by adding a specified temperature delta to the standard temperature at altitude, but pressure is taken as the standard day value. Density and viscosity are recalculated at the resultant temperature and pressure using the ideal gas equation of state. Hot day, Cold day, Tropical, and Polar temperature profiles with altitude have been defined for use as performance references, such as United States Department of Defense MIL-STD-210C, and its successor MIL-HDBK-310.
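A minimal sketch of the "hot day" recipe just described: keep the standard-day pressure, shift the temperature by a delta, and recompute density from the ideal gas law. The 15 °C offset below is an arbitrary example.

```python
R = 287.0528  # J/(kg*K), specific gas constant for dry air

def offstandard_density(std_pressure_pa: float, std_temp_k: float, delta_t_k: float) -> float:
    """Density for a non-standard day, taking pressure at its standard-day value."""
    return std_pressure_pa / (R * (std_temp_k + delta_t_k))

# Sea level on an ISA+15 C hot day:
print(offstandard_density(101_325.0, 288.15, 15.0))  # about 1.164 kg/m^3 instead of 1.225
```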
ICAO Standard Atmosphere
The International Civil Aviation Organization (ICAO) published their "ICAO Standard Atmosphere" as Doc 7488-CD in 1993. It has the same model as the ISA, but extends the altitude coverage to 80 kilometers (262,500 feet).
The ICAO Standard Atmosphere, like the ISA, does not contain water vapor.
Some of the values defined by ICAO are:
Aviation standards and flying rules are based on the International Standard Atmosphere. Airspeed indicators are calibrated on the assumption that they are operating at sea level in the International Standard Atmosphere where the air density is 1.225 kg/m3.
Physical properties of the ICAO Standard Atmosphere are:
Other standard atmospheres
The U.S. Standard Atmosphere is a set of models that define values for atmospheric temperature, density, pressure and other properties over a wide range of altitudes. The first model, based on an existing international standard, was published in 1958 by the U.S. Committee on Extension to the Standard Atmosphere, and was updated in 1962, 1966, and 1976. The U.S. Standard Atmosphere, International Standard Atmosphere and WMO (World Meteorological Organization) standard atmospheres are the same as the ISO International Standard Atmosphere for altitudes up to 32 km.
NRLMSISE-00 is a newer model of the Earth's atmosphere from ground to space, developed by the US Naval Research Laboratory taking actual satellite drag data into account. A primary use of this model is to aid predictions of satellite orbital decay due to atmospheric drag. The COSPAR International Reference Atmosphere (CIRA) 2012 and the ISO 14222 Earth Atmosphere Density standard both recommend NRLMSISE-00 for composition uses.
JB2008 is a newer model of the Earth's atmosphere from 120 km to 2000 km, developed by the US Air Force Space Command and Space Environment Technologies taking into account realistic solar irradiances and time evolution of geomagnetic storms. It is most useful for calculating satellite orbital decay due to atmospheric drag. Both CIRA 2012 and ISO 14222 recommend JB2008 for mass density in drag uses.
See also
Acronyms and abbreviations in avionics
Density of air
Jet standard atmosphere
References
NASA JPL Reference Notes
ICAO, Manual of the ICAO Standard Atmosphere (extended to 80 kilometres (262 500 feet)), Doc 7488-CD, Third Edition, 1993, .
External links
Online 1976 Standard Atmosphere calculator with table en graph generator. Digital Dutch
Multilingual windows calculator which calculates the atmospheric (standard and not standard!) characteristics according to the "1976 standard atmosphere" and convert between various airspeeds (true / equivalent / calibrated) according to the appropriate atmospheric conditions
A Free Android version for complete International Standard Atmosphere model
NewByte standard atmosphere calculator and speed converter
ICAO atmosphere calculator
ICAO Standards
Complete ISA calculator (1976 model)
JB2008 source code and references
ICAO standard atmosphere 1993 calculator
Atmosphere of Earth
Atmospheric thermodynamics
Aviation meteorology
ISO standards
Red Queen's race
The Red Queen's race is an incident that appears in Lewis Carroll's Through the Looking-Glass and involves both the Red Queen, a representation of a Queen in chess, and Alice constantly running but remaining in the same spot.
"Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else—if you run very fast for a long time, as we've been doing."
"A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"
The Red Queen's race is often used to illustrate similar situations:
In evolutionary biology, to illustrate that sexual reproduction and the resulting genetic recombination may be just enough to allow individuals of a certain species to adapt to changes in their environment—see Red Queen hypothesis.
As an illustration of the relativistic effect that nothing can ever reach the speed of light, or the invariant speed; in particular, with respect to relativistic effect on light from galaxies near the edge of the expanding observable universe, or at the event horizon of a black hole.
Isaac Asimov used it in his short story "The Red Queen's Race" to illustrate the concept of predestination paradox.
In environmental sociology, to illustrate Allan Schnaiberg's concept of the treadmill of production where actors are perpetually driven to accumulate capital and expand the market in an effort to maintain relative economic and social position.
Vernor Vinge used it in his novel Rainbows End to illustrate the struggle between encouraging technological advancement and protecting the world from new weapons technologies.
James A. Robinson and Daron Acemoglu used it in their political science book The Narrow Corridor to illustrate the competition and cooperation between state and society required to support the spread of liberty.
Andrew F. Krepinevich used it in his article "The New Nuclear Age: How China’s Growing Nuclear Arsenal Threatens Deterrence" to illustrate how in a tripolar nuclear power system it is not possible for each state to maintain nuclear parity with the combined arsenals of its two rivals.
Marc Reisner referenced the Red Queen in his book Cadillac Desert to describe a growing Los Angeles’ quest for water. As the city swelled in population, it required more and more water sources just to maintain a supply barely enough to sate its residents and farms.
Steve Blank used it in his article “The Red Queen Problem - Innovation in the DoD and Intelligence Community” as a metaphor for how the US Department of Defense and Intelligence community are not able to keep pace with their adversaries in the 21st century because of their outdated approach to technological innovation.
Mark Atherton used it in fraud detection and other areas of fighting online attackers to describe the never ending struggle to combat relentless adversaries.
Jay-Z compared the struggle for Black liberation to the Red Queen’s race in his song “Legacy”: “That’s called the Red Queen’s Race/You run this hard just to stay in place/Keep up the pace, baby/Keep up the pace.”
References
Alice's Adventures in Wonderland
English phrases
1871 introductions
Centrifugal pump
Centrifugal pumps are used to transport fluids by the conversion of rotational kinetic energy to the hydrodynamic energy of the fluid flow. The rotational energy typically comes from an engine or electric motor. They are a sub-class of dynamic axisymmetric work-absorbing turbomachinery. The fluid enters the pump impeller along or near to the rotating axis and is accelerated by the impeller, flowing radially outward into a diffuser or volute chamber (casing), from which it exits.
Common uses include water, sewage, agriculture, petroleum, and petrochemical pumping. Centrifugal pumps are often chosen for their high flow rate capabilities, abrasive solution compatibility, mixing potential, as well as their relatively simple engineering. A centrifugal fan is commonly used to implement an air handling unit or vacuum cleaner. The reverse function of the centrifugal pump is a water turbine converting potential energy of water pressure into mechanical rotational energy.
History
According to Reti, the first machine that could be characterized as a centrifugal pump was a mud lifting machine which appeared as early as 1475 in a treatise by the Italian Renaissance engineer Francesco di Giorgio Martini. True centrifugal pumps were not developed until the late 17th century, when Denis Papin built one using straight vanes. The curved vane was introduced by British inventor John Appold in 1851.
Working principle
Like most pumps, a centrifugal pump converts rotational energy, often from a motor, to energy in a moving fluid. A portion of the energy goes into kinetic energy of the fluid. Fluid enters axially through the eye of the casing, is caught up in the impeller blades, and is whirled tangentially and radially outward until it leaves through all circumferential parts of the impeller into the diffuser part of the casing. The fluid gains both velocity and pressure while passing through the impeller. The doughnut-shaped diffuser, or scroll, section of the casing decelerates the flow and further increases the pressure.
Description by Euler
A consequence of Newton's second law of mechanics is the conservation of angular momentum (or the "moment of momentum"), which is of fundamental significance to all turbomachines. Accordingly, the change of the angular momentum is equal to the sum of the external moments. Angular momentum fluxes ρ·Q·r1·c1u at the inlet and ρ·Q·r2·c2u at the outlet, an external torque M, and friction moments Mτ due to shear stresses act on an impeller or a diffuser, where:
ρ is the fluid density (kg/m³)
Q is the flow rate (m³/s)
r is the radius (m)
c is the absolute velocity vector (with cu its circumferential component)
u is the peripheral (circumferential) velocity vector.
Since no pressure forces are created on cylindrical surfaces in the circumferential direction, it is possible to write Eq. (1.10) as:

M = ρ·Q·(r2·c2u − r1·c1u)   (1.13)
Euler's pump equation
Based on Eq. (1.13), Euler developed the head pressure equation created by the impeller (see Fig. 2.2):

Yth = g·Ht = c2u·u2 − c1u·u1   (1)

Yth = (1/2)·(u2² − u1² + w1² − w2² + c2² − c1²)   (2)

In Eq. (2), the sum of the first four terms (the u and w terms) is called the static pressure contribution, and the sum of the last two terms (the c terms) is called the velocity (dynamic) pressure contribution; see Fig. 2.2 for the velocity components. In these equations (a short numerical example follows the list of symbols below):
Ht : theoretical head pressure
g : gravitational acceleration (between 9.78 and 9.82 m/s² depending on latitude; conventional standard value exactly 9.80665 m/s²)
u2 : outlet peripheral circumferential velocity vector
u1 : inlet peripheral circumferential velocity vector
ω : angular velocity
w1 : inlet relative velocity vector
w2 : outlet relative velocity vector
c1 : inlet absolute velocity vector
c2 : outlet absolute velocity vector
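As a worked illustration of Euler's pump equation (1), the Python sketch below computes the theoretical specific work and head from assumed velocity-triangle data; the rotational speed, radii, and tangential velocity components are invented example values, not figures from the text.

```python
import math

# Hypothetical data for a small radial impeller (example values only).
g = 9.80665                        # standard gravitational acceleration, m/s^2
omega = 2 * math.pi * 2900 / 60    # angular velocity for 2900 rpm, rad/s
r1, r2 = 0.05, 0.125               # inlet and outlet radii, m
c1u, c2u = 0.0, 18.0               # tangential components of the absolute velocity, m/s

u1 = omega * r1                    # inlet peripheral speed, m/s
u2 = omega * r2                    # outlet peripheral speed, m/s

# Euler's pump equation: Yth = u2*c2u - u1*c1u, and Ht = Yth / g
Y_th = u2 * c2u - u1 * c1u         # theoretical specific work, J/kg
H_t = Y_th / g                     # theoretical head, m

print(f"u1 = {u1:.1f} m/s, u2 = {u2:.1f} m/s")
print(f"Theoretical specific work Y_th = {Y_th:.0f} J/kg")
print(f"Theoretical head H_t = {H_t:.1f} m")
```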
Velocity triangle
The triangle formed by the velocity vectors u, c, and w is called the velocity triangle. This construction helps in expanding Eq. (1) into Eq. (2) and explains in detail how the pump works.
Fig. 2.3 (a) shows the velocity triangle of a forward-curved vane impeller; Fig. 2.3 (b) shows the velocity triangle of a radial straight-vane impeller. It illustrates rather clearly that the energy added to the flow (represented by the tangential component cu) varies inversely with the flow rate (represented by the meridional component cm).
Efficiency factor
η = ρ·g·H·Q / P

where:
P is the mechanical input power required (in watts)
ρ is the fluid density (kg/m³)
g is the standard acceleration of gravity (9.80665 m/s²)
H is the energy head added to the flow (in metres)
Q is the flow rate (in m³/s)
η is the efficiency of the pump plant as a decimal
The head added by the pump is the sum of the static lift, the head loss due to friction, and any losses due to valves or pipe bends, all expressed in metres of fluid. Power is more commonly expressed as kilowatts (10³ W, kW) or horsepower. The value for the pump efficiency, η, may be stated for the pump itself or as a combined efficiency of the pump and motor system.
Vertical centrifugal pumps
Vertical centrifugal pumps are also referred to as cantilever pumps. They utilize a unique shaft and bearing support configuration that allows the volute to hang in the sump while the bearings are outside the sump. This style of pump uses no stuffing box to seal the shaft but instead utilizes a "throttle bushing". A common application for this style of pump is in a parts washer.
Froth pumps
In the mineral industry, or in the extraction of oilsand, froth is generated to separate the rich minerals or bitumen from the sand and clays. Froth contains air that tends to block conventional pumps and cause loss of prime. Over history, industry has developed different ways to deal with this problem. In the pulp and paper industry holes are drilled in the impeller. Air escapes to the back of the impeller and a special expeller discharges the air back to the suction tank. The impeller may also feature special small vanes between the primary vanes called split vanes or secondary vanes. Some pumps may feature a large eye, an inducer or recirculation of pressurized froth from the pump discharge back to the suction to break the bubbles.
Multistage centrifugal pumps
A centrifugal pump containing two or more impellers is called a multistage centrifugal pump. The impellers may be mounted on the same shaft or on different shafts. At each stage, the fluid is directed to the center before making its way to the discharge on the outer diameter.
For higher pressures at the outlet, impellers can be connected in series. For higher flow output, impellers can be connected in parallel.
A common application of the multistage centrifugal pump is the boiler feedwater pump. For example, a 350 MW unit would require two feedpumps in parallel. Each feedpump is a multistage centrifugal pump producing 150 L/s at 21 MPa.
All energy transferred to the fluid is derived from the mechanical energy driving the impeller. This can be measured at isentropic compression, resulting in a slight temperature increase (in addition to the pressure increase).
Energy usage
The energy usage in a pumping installation is determined by the flow required, the height lifted and the length and friction characteristics of the pipeline. The power required to drive a pump is defined simply using SI units by:

P = ρ·g·H·Q / η

where:
P is the input power required (in watts)
ρ is the fluid density (in kilograms per cubic metre, kg/m³)
g is the standard acceleration of gravity (9.80665 m/s²)
H is the energy head added to the flow (in metres)
Q is the volumetric flow rate (in cubic metres per second, m³/s)
η is the efficiency of the pump plant as a decimal
The head added by the pump is the sum of the static lift, the head loss due to friction and any losses due to valves or pipe bends, all expressed in metres of fluid. Power is more commonly expressed as kilowatts (10³ W, kW) or horsepower (1 hp ≈ 0.746 kW). The value for the pump efficiency, η, may be stated for the pump itself or as a combined efficiency of the pump and motor system.
The energy usage is determined by multiplying the power requirement by the length of time the pump is operating.
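A minimal sketch of this power and energy calculation in Python, using assumed example figures for density, head, flow rate, efficiency, and operating time (none of them taken from the text):

```python
rho = 1000.0      # fluid density (water), kg/m^3
g = 9.80665       # standard gravity, m/s^2
H = 30.0          # head added by the pump: static lift + friction + fitting losses, m
Q = 0.05          # volumetric flow rate, m^3/s
eta = 0.70        # combined pump and motor efficiency, as a decimal

P = rho * g * H * Q / eta          # input power required, W
hours = 8.0                        # operating time per day, h
energy_kwh = P / 1000.0 * hours    # energy usage, kWh

print(f"Input power: {P / 1000:.1f} kW")
print(f"Energy used over {hours:.0f} h: {energy_kwh:.0f} kWh")
```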
Problems of centrifugal pumps
These are some difficulties faced in centrifugal pumps:
Cavitation—the net positive suction head (NPSH) of the system is too low for the selected pump
Wear of the impeller—can be worsened by suspended solids or cavitation
Corrosion inside the pump caused by the fluid properties
Overheating due to low flow
Leakage along rotating shaft.
Lack of prime—centrifugal pumps must be filled (with the fluid to be pumped) in order to operate
Surge
Viscous liquids may reduce efficiency
Other pump types may be more suitable for high pressure applications
Large solids or debris may clog the pump
Centrifugal pumps for solids control
An oilfield solids control system needs many centrifugal pumps to sit on or in mud tanks. The types of centrifugal pumps used are sand pumps, submersible slurry pumps, shear pumps, and charging pumps. They are defined for their different functions, but their working principle is the same.
Magnetically coupled pumps
Magnetically coupled pumps, or magnetic drive pumps, vary from the traditional pumping style, as the motor is coupled to the pump by magnetic means rather than by a direct mechanical shaft. The pump works via a drive magnet, 'driving' the pump rotor, which is magnetically coupled to the primary shaft driven by the motor. They are often used where leakage of the fluid pumped poses a great risk (e.g., aggressive fluid in the chemical or nuclear industry, or electric shock in garden fountains). Other use cases include when corrosive, combustible, or toxic fluids must be pumped (e.g., hydrochloric acid, sodium hydroxide, sodium hypochlorite, sulfuric acid, ferric/ferrous chloride or nitric acid). They have no direct connection between the motor shaft and the impeller, so no stuffing box or gland is needed. There is no risk of leakage, unless the casing is broken. Since the pump shaft is not supported by bearings outside the pump's housing, support inside the pump is provided by bushings. Magnetic drive pumps range in size from a few watts of power up to about 1 MW.
Priming
The process of filling the pump with liquid is called priming. All centrifugal pumps require liquid in the liquid casing to prime. If the pump casing becomes filled with vapors or gases, the pump impeller becomes gas-bound and incapable of pumping. To ensure that a centrifugal pump remains primed and does not become gas-bound, most centrifugal pumps are located below the level of the source from which the pump is to take its suction. The same effect can be gained by supplying liquid to the pump suction under pressure supplied by another pump placed in the suction line.
Self-priming centrifugal pump
In normal conditions, common centrifugal pumps are unable to evacuate the air from an inlet line leading to a fluid level whose geodetic altitude is below that of the pump. Self-priming pumps have to be capable of evacuating air from the pump suction line without any external auxiliary devices.
Centrifugal pumps with an internal suction stage such as water-jet pumps or side-channel pumps are also classified as self-priming pumps. Self-Priming centrifugal pumps were invented in 1935. One of the first companies to market a self-priming centrifugal pump was American Marsh in 1938.
Centrifugal pumps that are not designed with an internal or external self-priming stage can only start to pump the fluid after the pump has initially been primed with the fluid. Sturdier but slower, their impellers are designed to move liquid, which is far denser than air, leaving them unable to operate when air is present. In addition, a suction-side swing check valve or a vent valve must be fitted to prevent any siphon action and ensure that the fluid remains in the casing when the pump has been stopped. In self-priming centrifugal pumps with a separation chamber the fluid pumped and the entrained air bubbles are pumped into the separation chamber by the impeller action.
The air escapes through the pump discharge nozzle whilst the fluid drops back down and is once more entrained by the impeller. The suction line is thus continuously evacuated. The design required for such a self-priming feature has an adverse effect on pump efficiency. Also, the dimensions of the separating chamber are relatively large. For these reasons this solution is only adopted for small pumps, e.g. garden pumps. More frequently used types of self-priming pumps are side-channel and water-ring pumps.
Another type of self-priming pump is a centrifugal pump with two casing chambers and an open impeller. This design is not only used for its self-priming capabilities but also for its degassing effects when pumping two-phase mixtures (air/gas and liquid) for a short time in process engineering or when handling polluted fluids, for example, when draining water from construction pits. This pump type operates without a foot valve and without an evacuation device on the suction side. The pump has to be primed with the fluid to be handled prior to commissioning. Two-phase mixture is pumped until the suction line has been evacuated and the fluid level has been pushed into the front suction intake chamber by atmospheric pressure. During normal pumping operation this pump works like an ordinary centrifugal pump.
See also
Centrifugal compressor
Axial flow pump
Net positive suction head (NPSH)
Pump
Seal (mechanical)
Specific speed (Ns or Nss)
Thermodynamic pump testing
Turbine
Turbopump
Bottom–up and top–down design
Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership.
A top–down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top–down approach starts with the big picture, then breaks down into smaller segments.
A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.
Product design and development
During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is being used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top–down approach is taken and almost everything is custom designed.
Computer science
Software development
Part of this section is from the Perl Design Patterns Book.
In the software development process, the top–down and bottom–up approaches play a key role.
Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top–down approaches are implemented by attaching stubs in place of the modules, but this delays testing of the ultimate functional units of a system until significant design is complete.
Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach.
Top–down design was promoted in the 1970s by IBM researchers Harlan Mills and Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming assisted in demonstrating the idea that both aspects of top-down and bottom-up programming could be used.
Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top-down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor.
Programming
Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained.
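As a minimal illustrative sketch in Python (the order-report task and all function names are invented for this example, not taken from the text), the top–down style writes the main procedure first in terms of the major functions it needs; those functions start out as stubs and are decomposed and refined in later passes.

```python
def load_orders(path):
    """Stub: read raw order records from a file (refined in a later pass)."""
    raise NotImplementedError

def validate(orders):
    """Stub: drop malformed records (refined in a later pass)."""
    raise NotImplementedError

def summarize(orders):
    """Stub: compute totals per customer (refined in a later pass)."""
    raise NotImplementedError

def report(summary):
    """Stub: print or write out the summary (refined in a later pass)."""
    raise NotImplementedError

def main(path):
    # The top-level procedure is designed first; each named step is then
    # decomposed into successively smaller pieces until it is simple to code.
    orders = load_orders(path)
    orders = validate(orders)
    summary = summarize(orders)
    report(summary)
```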
In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top–level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small, but eventually grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor users can design products as pieces not part of the whole and later add those pieces together to form assemblies like building with Lego. Engineers call this "piece part design".
Parsing
Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler.
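One way the two strategies show up in parsing is top–down (recursive-descent) parsing, where each grammar rule becomes a function that calls the functions for its sub-rules, as opposed to bottom–up parsers that assemble larger structures from already-recognized pieces. The sketch below is a hypothetical Python example for a tiny grammar of integer sums such as "1+2+3"; it is illustrative only and not drawn from the text.

```python
def parse_expr(tokens, pos=0):
    """Top-down (recursive-descent) parse of: expr -> number ('+' number)*"""
    value, pos = parse_number(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_number(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_number(tokens, pos):
    """Parse a single digit token as an integer."""
    if pos >= len(tokens) or not tokens[pos].isdigit():
        raise SyntaxError(f"expected number at position {pos}")
    return int(tokens[pos]), pos + 1

# Tokenize "1+2+3" crudely into single characters, then parse it.
value, _ = parse_expr(list("1+2+3"))
print(value)   # 6
```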
Nanotechnology
Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as Silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.
A top–down approach often uses the traditional workshop or microfabrication methods where externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top–down secondary approach to engineering nanostructures.
Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top–down methods but could potentially be overwhelmed as the size and complexity of the desired assembly increases.
Neuroscience and psychology
These terms are also employed in cognitive sciences including neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing. Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19).
According to college teaching notes written by Charles Ramskov, Irvin Rock, Neisser, and Richard Gregory claim that the top–down approach involves perception that is an active and constructive process; perception is not given directly by the stimulus input, but results from the interaction of the stimulus, internal hypotheses, and expectations. According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."
Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, Gibson, a proponent of the bottom–up approach, claims that visual perception relies on the information available in the proximal stimulus produced by the distal stimulus. Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough."
Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1 mostly have bottom–up connections. Other areas, such as the fusiform gyrus have inputs from higher brain areas and are considered to have top–down influence.
The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower are visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion: your attention was not contingent on knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top–down information.
In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.
Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily focuses on the attentional side, such as task repetition, while bottom–up processing focuses on item-based learning, such as finding the same object over and over again (Schneider, 2015). Schneider (2015) also discusses the implications for understanding attentional control of response selection in conflict situations.
This also applies to how such processing is structured neurologically, for example in how information interfaces are organized in procedural learning. Top–down principles have proven effective in guiding interface design, but they are not sufficient on their own; they can be combined with iterative bottom–up methods to produce usable interfaces (Zacks & Tversky, 2003).
Schooling
Undergraduate (bachelor's) students are typically taught the basics of top–down and bottom–up processing around their third year in the program, working through the main stages of processing from a learning perspective. The key distinction is that bottom–up processing is determined directly by environmental stimuli, whereas top–down processing is shaped by the individual's knowledge and expectations (Koch, 2022).
Management and organization
In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented.
A "top–down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff.
A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers".
Positive aspects of top–down approaches include their efficiency and superb overview of higher levels; and external effects can be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them (e.g., Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g., Dubois 2002). A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third combination approach to change.
Public health
Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease-or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare.
Architecture
Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.
By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design).
Ecology
In ecology top–down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influences lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by productivity of the kelp, but rather, a top predator. One can see the inverse effect that top–down control has in this example; when the population of otters decreased, the population of the urchins increased.
Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.
There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems.
Philosophy and ethics
Top–down reasoning in ethics is when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top-down and bottom-up reasoning until both are in harmony: that is, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs when cognitive dissonance arises as reasoners try to reconcile top–down with bottom–up reasoning, and they adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements.
See also
The Cathedral and the Bazaar
Pseudocode
References cited
https://philpapers.org/rec/COHTNO
Citations and notes
Further reading
Corpeño, E. (2021). The Top-Down Approach to Problem Solving: How to Stop Struggling in Class and Start Learning.
Goldstein, E. B. (2010). Sensation and Perception. USA: Wadsworth.
Galotti, K. (2008). Cognitive Psychology: In and Out of the Laboratory. USA: Wadsworth.
Dubois, Hans F. W. (2002). Harmonization of the European vaccination policy and the role TQM and reengineering could play. Quality Management in Health Care 10(2): 47–57.
Estes, J. A., Tinker, M. T., Williams, T. M., and Doak, D. F. (1998). "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems", Science, October 16, 1998: Vol. 282, no. 5388, pp. 473–476.
Bresser-Pereira, Luiz Carlos, José María Maravall, and Adam Przeworski (1993). Economic Reforms in New Democracies. Cambridge: Cambridge University Press.
External links
"Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April (1971)
Integrated Parallel Bottom-up and Top-down Approach. In Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA (1998).
Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, 483–502, 2003.
K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989.
Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches
Polarization (waves)
Polarization (also polarisation) is a property of transverse waves which specifies the geometrical orientation of the oscillations. In a transverse wave, the direction of the oscillation is perpendicular to the direction of motion of the wave. A simple example of a polarized transverse wave is vibrations traveling along a taut string (see image), for example, in a musical instrument like a guitar string. Depending on how the string is plucked, the vibrations can be in a vertical direction, horizontal direction, or at any angle perpendicular to the string. In contrast, in longitudinal waves, such as sound waves in a liquid or gas, the displacement of the particles in the oscillation is always in the direction of propagation, so these waves do not exhibit polarization. Transverse waves that exhibit polarization include electromagnetic waves such as light and radio waves, gravitational waves, and transverse sound waves (shear waves) in solids.
An electromagnetic wave such as light consists of a coupled oscillating electric field and magnetic field which are always perpendicular to each other; by convention, the "polarization" of electromagnetic waves refers to the direction of the electric field. In linear polarization, the fields oscillate in a single direction. In circular or elliptical polarization, the fields rotate at a constant rate in a plane as the wave travels, either in the right-hand or in the left-hand direction.
Light or other electromagnetic radiation from many sources, such as the sun, flames, and incandescent lamps, consists of short wave trains with an equal mixture of polarizations; this is called unpolarized light. Polarized light can be produced by passing unpolarized light through a polarizer, which allows waves of only one polarization to pass through. The most common optical materials do not affect the polarization of light, but some materials—those that exhibit birefringence, dichroism, or optical activity—affect light differently depending on its polarization. Some of these are used to make polarizing filters. Light also becomes partially polarized when it reflects at an angle from a surface.
According to quantum mechanics, electromagnetic waves can also be viewed as streams of particles called photons. When viewed in this way, the polarization of an electromagnetic wave is determined by a quantum mechanical property of photons called their spin. A photon has one of two possible spins: it can either spin in a right hand sense or a left hand sense about its direction of travel. Circularly polarized electromagnetic waves are composed of photons with only one type of spin, either right- or left-hand. Linearly polarized waves consist of photons that are in a superposition of right and left circularly polarized states, with equal amplitude and phases synchronized to give oscillation in a plane.
Polarization is an important parameter in areas of science dealing with transverse waves, such as optics, seismology, radio, and microwaves. Especially impacted are technologies such as lasers, wireless and optical fiber telecommunications, and radar.
Introduction
Wave propagation and polarization
Most sources of light are classified as incoherent and unpolarized (or only "partially polarized") because they consist of a random mixture of waves having different spatial characteristics, frequencies (wavelengths), phases, and polarization states. However, for understanding electromagnetic waves and polarization in particular, it is easier to just consider coherent plane waves; these are sinusoidal waves of one particular direction (or wavevector), frequency, phase, and polarization state. Characterizing an optical system in relation to a plane wave with those given parameters can then be used to predict its response to a more general case, since a wave with any specified spatial structure can be decomposed into a combination of plane waves (its so-called angular spectrum). Incoherent states can be modeled stochastically as a weighted combination of such uncorrelated waves with some distribution of frequencies (its spectrum), phases, and polarizations.
Transverse electromagnetic waves
Electromagnetic waves (such as light), traveling in free space or another homogeneous isotropic non-attenuating medium, are properly described as transverse waves, meaning that a plane wave's electric field vector E and magnetic field H are each in some direction perpendicular to (or "transverse" to) the direction of wave propagation; they are also perpendicular to each other. By convention, the "polarization" direction of an electromagnetic wave is given by its electric field vector. Considering a monochromatic plane wave of optical frequency f (light of vacuum wavelength λ0 has a frequency of f = c/λ0, where c is the speed of light), let us take the direction of propagation as the z axis. Being a transverse wave, the E and H fields must then contain components only in the x and y directions whereas Ez = Hz = 0. Using complex (or phasor) notation, the instantaneous physical electric and magnetic fields are given by the real parts of the complex quantities occurring in the following equations. As a function of time t and spatial position z (since for a plane wave in the +z direction the fields have no dependence on x or y) these complex fields can be written as:

E(z, t) = (ex, ey, 0) · e^{i2π(z/λ − t/T)} = (ex, ey, 0) · e^{i(kz − ωt)}

and

H(z, t) = (hx, hy, 0) · e^{i2π(z/λ − t/T)} = (hx, hy, 0) · e^{i(kz − ωt)}

where λ is the wavelength in the medium (whose refractive index is n) and T is the period of the wave. Here ex, ey, hx, and hy are complex numbers. In the second more compact form, as these equations are customarily expressed, these factors are described using the wavenumber k = 2πn/λ0 and angular frequency (or "radian frequency") ω = 2πf. In a more general formulation with propagation not restricted to the +z direction, the spatial dependence kz is replaced by k·r, where k is called the wave vector, the magnitude of which is the wavenumber.
Thus the leading vectors (ex, ey, 0) and (hx, hy, 0) each contain up to two nonzero (complex) components describing the amplitude and phase of the wave's x and y polarization components (again, there can be no z polarization component for a transverse wave in the +z direction). For a given medium with a characteristic impedance η, the h components are related to the e components by:

hy = ex/η and hx = −ey/η.

In a dielectric, η is real and has the value η0/n, where n is the refractive index and η0 is the impedance of free space. The impedance will be complex in a conducting medium. Note that given that relationship, the dot product of E and H must be zero:

E · H ∝ ex·hx + ey·hy = ex·(−ey/η) + ey·(ex/η) = 0

indicating that these vectors are orthogonal (at right angles to each other), as expected.
Knowing the propagation direction (+z in this case) and η, one can just as well specify the wave in terms of just ex and ey describing the electric field. The vector containing ex and ey (but without the z component, which is necessarily zero for a transverse wave) is known as a Jones vector. In addition to specifying the polarization state of the wave, a general Jones vector also specifies the overall magnitude and phase of that wave. Specifically, the intensity of the light wave is proportional to the sum of the squared magnitudes of the two electric field components:

I ∝ |ex|² + |ey|²

However, the wave's state of polarization is only dependent on the (complex) ratio of ey to ex. So let us just consider waves whose |ex|² + |ey|² = 1; this happens to correspond to an intensity of about 0.00133 watts per square metre in free space (where η = η0 ≈ 377 Ω). And because the absolute phase of a wave is unimportant in discussing its polarization state, let us stipulate that the phase of ex is zero; in other words ex is a real number while ey may be complex. Under these restrictions, ex and ey are fully specified by the amplitude ratio |ey|/|ex| and the relative phase φ of ey with respect to ex, which together parameterize the polarization state.
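A small numerical sketch of this description, using NumPy (the particular states chosen are illustrative assumptions): each state is a normalized two-component complex Jones vector (ex, ey), the intensity is proportional to |ex|² + |ey|², and the polarization state itself depends only on the complex ratio ey/ex.

```python
import numpy as np

# Normalized Jones vectors (|ex|^2 + |ey|^2 = 1) for a few pure states.
horizontal = np.array([1.0, 0.0], dtype=complex)
linear_45 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
circular = np.array([1.0, 1j], dtype=complex) / np.sqrt(2)   # handedness depends on sign convention

def intensity(jones):
    # Intensity is proportional to the sum of the squared component magnitudes.
    return float(np.sum(np.abs(jones) ** 2))

def relative_phase(jones):
    # The polarization state is set by the complex ratio ey/ex (here, its phase).
    ex, ey = jones
    return float(np.angle(ey / ex)) if abs(ex) > 0 else float("nan")

for name, v in [("horizontal", horizontal), ("45 deg linear", linear_45), ("circular", circular)]:
    print(f"{name:13s} intensity = {intensity(v):.2f}, relative phase = {relative_phase(v):.2f} rad")
```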
Non-transverse waves
In addition to transverse waves, there are many wave motions where the oscillation is not limited to directions perpendicular to the direction of propagation. These cases are far beyond the scope of the current article which concentrates on transverse waves (such as most electromagnetic waves in bulk media), but one should be aware of cases where the polarization of a coherent wave cannot be described simply using a Jones vector, as we have just done.
Just considering electromagnetic waves, we note that the preceding discussion strictly applies to plane waves in a homogeneous isotropic non-attenuating medium, whereas in an anisotropic medium (such as birefringent crystals as discussed below) the electric or magnetic field may have longitudinal as well as transverse components. In those cases the electric displacement D and magnetic flux density B still obey the above geometry, but due to anisotropy in the electric susceptibility (or in the magnetic permeability), now given by a tensor, the direction of E (or H) may differ from that of D (or B). Even in isotropic media, so-called inhomogeneous waves can be launched into a medium whose refractive index has a significant imaginary part (or "extinction coefficient") such as metals; these fields are also not strictly transverse. Surface waves or waves propagating in a waveguide (such as an optical fiber) are generally not purely transverse waves, but might be described as an electric or magnetic transverse mode, or a hybrid mode.
Even in free space, longitudinal field components can be generated in focal regions, where the plane wave approximation breaks down. An extreme example is radially or tangentially polarized light, at the focus of which the electric or magnetic field respectively is longitudinal (along the direction of propagation).
For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so the issue of polarization is normally not even mentioned. On the other hand, sound waves in a bulk solid can be transverse as well as longitudinal, for a total of three polarization components. In this case, the transverse polarization is associated with the direction of the shear stress and displacement in directions perpendicular to the propagation direction, while the longitudinal polarization describes compression of the solid and vibration along the direction of propagation. The differential propagation of transverse and longitudinal polarizations is important in seismology.
Polarization state
Polarization can be defined in terms of pure polarization states with only a coherent sinusoidal wave at one optical frequency. The vector in the adjacent diagram might describe the oscillation of the electric field emitted by a single-mode laser (whose oscillation frequency would be typically 10¹⁵ times faster). The field oscillates in the x–y plane, along the page, with the wave propagating in the z direction, perpendicular to the page.
The first two diagrams below trace the electric field vector over a complete cycle for linear polarization at two different orientations; these are each considered a distinct state of polarization (SOP). The linear polarization at 45° can also be viewed as the addition of a horizontally linearly polarized wave (as in the leftmost figure) and a vertically polarized wave of the same amplitude.
Now if one were to introduce a phase shift in between those horizontal and vertical polarization components, one would generally obtain elliptical polarization as is shown in the third figure. When the phase shift is exactly ±90°, and the amplitudes are the same, then circular polarization is produced (fourth and fifth figures). Circular polarization can be created by sending linearly polarized light through a quarter-wave plate oriented at 45° to the linear polarization to create two components of the same amplitude with the required phase shift. The superposition of the original and phase-shifted components causes a rotating electric field vector, which is depicted in the animation on the right. Note that circular or elliptical polarization can involve either a clockwise or counterclockwise rotation of the field, depending on the relative phases of the components. These correspond to distinct polarization states, such as the two circular polarizations shown above.
The orientation of the and axes used in this description is arbitrary. The choice of such a coordinate system and viewing the polarization ellipse in terms of the and polarization components, corresponds to the definition of the Jones vector (below) in terms of those basis polarizations. Axes are selected to suit a particular problem, such as being in the plane of incidence. Since there are separate reflection coefficients for the linear polarizations in and orthogonal to the plane of incidence (p and s polarizations, see below), that choice greatly simplifies the calculation of a wave's reflection from a surface.
Any pair of orthogonal polarization states may be used as basis functions, not just linear polarizations. For instance, choosing right and left circular polarizations as basis functions simplifies the solution of problems involving circular birefringence (optical activity) or circular dichroism.
Polarization ellipse
For a purely polarized monochromatic wave the electric field vector over one cycle of oscillation traces out an ellipse.
A polarization state can then be described in relation to the geometrical parameters of the ellipse, and its "handedness", that is, whether the rotation around the ellipse is clockwise or counterclockwise. One parameterization of the elliptical figure specifies the orientation angle ψ, defined as the angle between the major axis of the ellipse and the x-axis, along with the ellipticity ε = a/b, the ratio of the ellipse's major to minor axis (also known as the axial ratio). The ellipticity parameter is an alternative parameterization of an ellipse's eccentricity or the ellipticity angle χ, as is shown in the figure. The angle χ is also significant in that the latitude (angle from the equator) of the polarization state as represented on the Poincaré sphere (see below) is equal to ±2χ. The special cases of linear and circular polarization correspond to an ellipticity ε of infinity and unity (or an ellipticity angle χ of zero and 45°) respectively.
Jones vector
Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector):

e = ( a1·e^{iθ1}, a2·e^{iθ2} )

Here a1 and a2 denote the amplitude of the wave in the two components of the electric field vector, while θ1 and θ2 represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization.
Coordinate frame
Regardless of whether polarization state is represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north.
s and p designations
Another coordinate system frequently used relates to the plane of incidence. This is the plane made by the incoming propagation direction and the vector perpendicular to the plane of an interface, in other words, the plane in which the ray travels before and after reflection or refraction. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from senkrecht, German for 'perpendicular'). Polarized light with its electric field along the plane of incidence is thus denoted p-polarized, while light whose electric field is normal to the plane of incidence is called s-polarized. P-polarization is commonly referred to as transverse-magnetic (TM), and has also been termed pi-polarized or π-polarized, or tangential plane polarized. S-polarization is also called transverse-electric (TE), as well as sigma-polarized or σ-polarized, or sagittal plane polarized.
Degree of polarization
Degree of polarization (DOP) is a quantity used to describe the portion of an electromagnetic wave which is polarized. The DOP can be calculated from the Stokes parameters. A perfectly polarized wave has a DOP of 100%, whereas an unpolarized wave has a DOP of 0%. A wave which is partially polarized, and therefore can be represented by a superposition of a polarized and an unpolarized component, will have a DOP somewhere in between 0 and 100%. The DOP is calculated as the fraction of the total power that is carried by the polarized component of the wave.
The DOP can be used to map the strain field in materials when considering the DOP of the photoluminescence. The polarization of the photoluminescence is related to the strain in a material by way of the given material's photoelasticity tensor.
The DOP is also visualized using the Poincaré sphere representation of a polarized beam. In this representation, the DOP is equal to the length of the vector measured from the center of the sphere.
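A minimal sketch of this calculation in Python, using the standard Stokes-parameter definition DOP = sqrt(S1² + S2² + S3²)/S0 (the numerical values below are illustrative):

```python
import math

def degree_of_polarization(s0, s1, s2, s3):
    """DOP = sqrt(S1^2 + S2^2 + S3^2) / S0; 0 for unpolarized, 1 for fully polarized light."""
    return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

# Fully polarized horizontal light: S = (1, 1, 0, 0)
print(degree_of_polarization(1.0, 1.0, 0.0, 0.0))   # 1.0
# A partially polarized beam: only part of the power is carried by the polarized component
print(degree_of_polarization(1.0, 0.3, 0.3, 0.2))   # about 0.47
```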
Unpolarized and partially polarized light
Implications for reflection and propagation
Polarization in wave propagation
In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, the electric field of a plane wave in the +z direction follows:

E(z, t) = Re[ (ex, ey, 0) · e^{i(kz − ωt)} ]

where k is the wavenumber. As noted above, the instantaneous electric field is the real part of the product of the Jones vector times the phase factor e^{i(kz − ωt)}. When an electromagnetic wave interacts with matter, its propagation is altered according to the material's (complex) index of refraction. When the real or imaginary part of that refractive index is dependent on the polarization state of a wave, properties known as birefringence and polarization dichroism (or diattenuation) respectively, then the polarization state of a wave will generally be altered.
In such media, an electromagnetic wave with any given state of polarization may be decomposed into two orthogonally polarized components that encounter different propagation constants. The effect of propagation over a given path on those two components is most easily characterized in the form of a complex 2×2 transformation matrix J known as a Jones matrix:

e′ = J·e

where e is the incident Jones vector and e′ the transmitted one. The Jones matrix due to passage through a transparent material is dependent on the propagation distance as well as the birefringence. The birefringence (as well as the average refractive index) will generally be dispersive, that is, it will vary as a function of optical frequency (wavelength). In the case of non-birefringent materials, however, the Jones matrix is the identity matrix (multiplied by a scalar phase factor and attenuation factor), implying no change in polarization during propagation.
For propagation effects in two orthogonal modes, the Jones matrix can be written as

J = T · diag(g1, g2) · T⁻¹

where g1 and g2 are complex numbers describing the phase delay and possibly the amplitude attenuation due to propagation in each of the two polarization eigenmodes. T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors; in the case of linear birefringence or diattenuation the modes are themselves linear polarization states, so T and T⁻¹ can be omitted if the coordinate axes have been chosen appropriately.
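A hedged numerical sketch of this decomposition, using NumPy and common textbook conventions rather than anything specified in the text: a quarter-wave retarder has eigenmode factors g1 = 1 and g2 = exp(−iπ/2), and with its axes rotated 45° from the x–y frame (the change-of-basis matrix T) it converts horizontal linear polarization into circular polarization.

```python
import numpy as np

# Eigenmode factors for a lossless quarter-wave retarder: a 90-degree differential phase delay.
g1, g2 = 1.0, np.exp(-1j * np.pi / 2)
Lam = np.diag([g1, g2])

# Change of basis T: the retarder's fast/slow axes are rotated 45 degrees from the x/y axes.
theta = np.pi / 4
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

J = T @ Lam @ np.linalg.inv(T)     # Jones matrix expressed in the x/y basis

e_in = np.array([1.0, 0.0], dtype=complex)   # horizontal linear polarization
e_out = J @ e_in

# Equal magnitudes and a 90-degree relative phase: circular polarization.
print(np.round(e_out, 3))          # approximately [0.5-0.5j, 0.5+0.5j]
```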
Birefringence
In a birefringent substance, electromagnetic waves of different polarizations travel at different speeds (phase velocities). As a result, when unpolarized waves travel through a plate of birefringent material, one polarization component has a shorter wavelength than the other, resulting in a phase difference between the components which increases the further the waves travel through the material. In this case the Jones matrix is a unitary matrix: |g1| = |g2| = 1. Media termed diattenuating (or dichroic in the sense of polarization), in which only the amplitudes of the two polarizations are affected differentially, may be described using a Hermitian matrix (generally multiplied by a common phase factor). In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, light propagation through any sequence of polarization-dependent optical components can be written as the product of these two basic types of transformations.
In birefringent media there is no attenuation, but the two modes accrue a differential phase delay. Well known manifestations of linear birefringence (that is, in which the basis polarizations are orthogonal linear polarizations) appear in optical wave plates/retarders and many crystals. If linearly polarized light passes through a birefringent material, its state of polarization will generally change, unless its polarization direction is identical to one of those basis polarizations. Since the phase shift, and thus the change in polarization state, is usually wavelength-dependent, such objects viewed under white light in between two polarizers may give rise to colorful effects, as seen in the accompanying photograph.
Circular birefringence is also termed optical activity, especially in chiral fluids, or Faraday rotation, when due to the presence of a magnetic field along the direction of propagation. When linearly polarized light is passed through such an object, it will exit still linearly polarized, but with the axis of polarization rotated. A combination of linear and circular birefringence will have as basis polarizations two orthogonal elliptical polarizations; however, the term "elliptical birefringence" is rarely used.
One can visualize the case of linear birefringence (with two orthogonal linear propagation modes) with an incoming wave linearly polarized at a 45° angle to those modes. As a differential phase starts to accrue, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) perpendicular to the original polarization, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes. Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, the two polarization components of a collimated beam (or ray) can exit the material with a positional offset, even though their final propagation directions will be the same (assuming the entrance face and exit face are parallel). This is commonly viewed using calcite crystals, which present the viewer with two slightly offset images, in opposite polarizations, of an object behind the crystal. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669.
Dichroism
Media in which transmission of one polarization mode is preferentially reduced are called dichroic or diattenuating. Like birefringence, diattenuation can be with respect to linear polarization modes (in a crystal) or circular polarization modes (usually in a liquid).
Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". In the Jones-matrix representation above, this corresponds to one of the two amplitude transmission coefficients being zero. The output of an ideal polarizer is a specific polarization state (usually linear polarization) with an amplitude equal to the input wave's original amplitude in that polarization mode. Power in the other polarization mode is eliminated. Thus if unpolarized light is passed through an ideal polarizer (one transmission coefficient unity, the other zero) exactly half of its initial power is retained. Practical polarizers, especially inexpensive sheet polarizers, have additional loss, so that somewhat less than half the power is transmitted. However, in many instances the more relevant figure of merit is the polarizer's degree of polarization or extinction ratio, which involves comparing the transmission coefficient of the intended polarization with that of the unwanted one. Since Jones vectors refer to waves' amplitudes (rather than intensity), when illuminated by unpolarized light the remaining power in the unwanted polarization will be the squared ratio of the unwanted to the intended amplitude transmission coefficients times the power in the intended polarization.
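As a rough numerical illustration of these figures of merit, the Python sketch below compares an ideal polarizer with a hypothetical lossy sheet polarizer under unpolarized illumination; the transmission coefficients are invented for the example and do not describe any real product.

    # Power throughput of a polarizer illuminated by unpolarized light.
    # The amplitude transmission coefficients below are illustrative only.

    def unpolarized_throughput(t_pass, t_block):
        """Fraction of unpolarized input power transmitted in each mode,
        given amplitude transmission coefficients for the passed and
        blocked polarization modes."""
        passed = 0.5 * t_pass**2    # half the input power arrives in each mode
        leaked = 0.5 * t_block**2
        return passed, leaked

    # Ideal polarizer: passes one mode fully, blocks the other completely.
    print(unpolarized_throughput(1.0, 0.0))        # (0.5, 0.0)

    # Hypothetical sheet polarizer with some loss and imperfect blocking.
    passed, leaked = unpolarized_throughput(0.9, 0.02)
    print(f"transmitted: {passed:.3f}, leakage: {leaked:.5f}, "
          f"extinction ratio: {passed / leaked:.0f}:1")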
Specular reflection
In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected; for a given material those proportions (and also the phase of reflection) are dependent on the angle of incidence and are different for the s- and p-polarizations. Therefore, the polarization state of reflected light (even if initially unpolarized) is generally changed.
Any light striking a surface at a special angle of incidence known as Brewster's angle, where the reflection coefficient for p-polarization is zero, will be reflected with only the s-polarization remaining. This principle is employed in the so-called "pile of plates polarizer" (see figure) in which part of the s-polarization is removed by reflection at each Brewster angle surface, leaving only the p-polarization after transmission through many such surfaces. The generally smaller reflection coefficient of the p-polarization is also the basis of polarized sunglasses; by blocking the s- (horizontal) polarization, most of the glare due to reflection from a wet street, for instance, is removed.
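A small Python sketch of the Fresnel power reflectances makes the Brewster-angle behaviour concrete; the refractive index of 1.5 is an assumed, typical value for glass.

    import numpy as np

    def fresnel_reflectance(n1, n2, theta_i):
        """Power reflectances (Rs, Rp) at a dielectric interface,
        for incidence angle theta_i (radians) from medium n1 into n2."""
        theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)   # Snell's law
        rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
             (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
        rp = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / \
             (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
        return rs**2, rp**2

    n1, n2 = 1.0, 1.5                      # air into a typical glass (assumed index)
    theta_B = np.arctan(n2 / n1)           # Brewster's angle
    Rs, Rp = fresnel_reflectance(n1, n2, theta_B)
    print(f"Brewster angle: {np.degrees(theta_B):.1f} deg")
    print(f"Rs = {Rs:.3f}, Rp = {Rp:.2e}")  # Rp vanishes at Brewster incidence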
In the important special case of reflection at normal incidence (not involving anisotropic materials) there is no particular s- or p-polarization. Both the x and y polarization components are reflected identically, and therefore the polarization of the reflected wave is identical to that of the incident wave. However, in the case of circular (or elliptical) polarization, the handedness of the polarization state is thereby reversed, since by convention this is specified relative to the direction of propagation. The circular rotation of the electric field around the x–y axes, called "right-handed" for a wave propagating in the +z direction, is "left-handed" for a wave propagating in the −z direction. But in the general case of reflection at a nonzero angle of incidence, no such generalization can be made. For instance, right-circularly polarized light reflected from a dielectric surface at a grazing angle will still be right-handed (but elliptically) polarized. Linearly polarized light reflected from a metal at non-normal incidence will generally become elliptically polarized. These cases are handled using Jones vectors acted upon by the different Fresnel coefficients for the s- and p-polarization components.
Measurement techniques involving polarization
Some optical measurement techniques are based on polarization. In many other optical techniques polarization is crucial or at least must be taken into account and controlled; such examples are too numerous to mention.
Measurement of stress
In engineering, the phenomenon of stress induced birefringence allows for stresses in transparent materials to be readily observed. As noted above and seen in the accompanying photograph, the chromaticity of birefringence typically creates colored patterns when viewed in between two polarizers. As external forces are applied, internal stress induced in the material is thereby observed. Additionally, birefringence is frequently observed due to stresses "frozen in" at the time of manufacture. This is famously observed in cellophane tape whose birefringence is due to the stretching of the material during the manufacturing process.
Ellipsometry
Ellipsometry is a powerful technique for the measurement of the optical properties of a uniform surface. It involves measuring the polarization state of light following specular reflection from such a surface. This is typically done as a function of incidence angle or wavelength (or both). Since ellipsometry relies on reflection, it is not required for the sample to be transparent to light or for its back side to be accessible.
Ellipsometry can be used to model the (complex) refractive index of the surface of a bulk material. It is also very useful in determining parameters of one or more thin film layers deposited on a substrate. Not only the predicted magnitudes of the p and s reflection components, but also their relative phase shifts upon reflection, are compared with measurements made using an ellipsometer. A normal ellipsometer does not measure the actual reflection coefficient (which requires careful photometric calibration of the illuminating beam) but the ratio of the p and s reflections, as well as the change of polarization ellipticity (hence the name) induced upon reflection by the surface being studied. In addition to its use in science and research, ellipsometers are used in situ, for instance to control production processes.
Geology
The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details.
Sound waves in solid materials exhibit polarization. Differential propagation of the three polarizations through the earth is crucial in the field of seismology. Horizontally and vertically polarized seismic waves (shear waves) are termed SH and SV, while waves with longitudinal polarization (compressional waves) are termed P-waves.
Autopsy
Similarly, polarization microscopes can be used to aid in the detection of foreign matter in biological tissue slices if it is birefringent; autopsies often mention (a lack of or presence of) "polarizable foreign debris."
Chemistry
We have seen (above) that the birefringence of a type of crystal is useful in identifying it, and thus detection of linear birefringence is especially useful in geology and mineralogy. Linearly polarized light generally has its polarization state altered upon transmission through such a crystal, making it stand out when viewed in between two crossed polarizers, as seen in the photograph, above. Likewise, in chemistry, rotation of polarization axes in a liquid solution can be a useful measurement. In a liquid, linear birefringence is impossible, but there may be circular birefringence when a chiral molecule is in solution. When the right and left handed enantiomers of such a molecule are present in equal numbers (a so-called racemic mixture) then their effects cancel out. However, when there is only one (or a preponderance of one), as is more often the case for organic molecules, a net circular birefringence (or optical activity) is observed, revealing the magnitude of that imbalance (or the concentration of the molecule itself, when it can be assumed that only one enantiomer is present). This is measured using a polarimeter in which polarized light is passed through a tube of the liquid, at the end of which is another polarizer which is rotated in order to null the transmission of light through it.
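The polarimeter measurement reduces to the relation α = [α]·l·c between observed rotation, specific rotation, path length, and concentration. The Python sketch below solves it for the concentration; the specific-rotation figure is a commonly quoted value for sucrose and the measured rotation is invented, so the numbers are purely illustrative.

    # Optical-activity polarimetry: observed rotation alpha = [alpha] * l * c,
    # where [alpha] is the specific rotation, l the path length in decimetres,
    # and c the concentration in g/mL.  Solving for c from a measured rotation.

    specific_rotation = 66.5    # deg·mL/(g·dm), commonly quoted for sucrose (illustrative)
    path_length_dm   = 2.0      # 20 cm polarimeter tube
    observed_deg     = 6.5      # hypothetical measured rotation

    concentration = observed_deg / (specific_rotation * path_length_dm)
    print(f"estimated concentration: {concentration:.4f} g/mL "
          f"({concentration * 1000:.1f} g/L)")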
Astronomy
In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarized. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth, but chirality selection on inorganic crystals has been proposed as an alternative theory.
Applications and examples
Polarized sunglasses
Unpolarized light, after being reflected by a specular (shiny) surface, generally obtains a degree of polarization. This phenomenon was observed in the early 1800s by the mathematician Étienne-Louis Malus, after whom Malus's law is named. Polarizing sunglasses exploit this effect to reduce glare from reflections by horizontal surfaces, notably the road ahead viewed at a grazing angle.
Wearers of polarized sunglasses will occasionally observe inadvertent polarization effects such as color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics, in conjunction with natural polarization by reflection or scattering. The polarized light from LCD monitors (see below) is extremely conspicuous when these are worn.
Sky polarization and photography
Polarization is observed in the light of the sky, as this is due to sunlight scattered by aerosols as it passes through Earth's atmosphere. The scattered light produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is most strongly observed at points on the sky making a 90° angle to the Sun. Polarizing filters use these effects to optimize the results of photographing scenes in which reflection or scattering by the sky is involved.
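For single Rayleigh scattering the degree of polarization follows sin²θ/(1+cos²θ), peaking at 90° from the Sun as described above. The Python sketch below evaluates this idealized formula; real skies are noticeably less polarized because of multiple scattering and ground reflections.

    import numpy as np

    def rayleigh_polarization(theta_deg):
        """Degree of linear polarization for single Rayleigh scattering
        at scattering angle theta (idealized; real skies show less)."""
        t = np.radians(theta_deg)
        return np.sin(t)**2 / (1.0 + np.cos(t)**2)

    for angle in (0, 30, 60, 90, 120, 180):
        print(f"{angle:3d} deg from the Sun: "
              f"{100 * rayleigh_polarization(angle):5.1f} % polarized")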
Sky polarization has been used for orientation in navigation. The Pfund sky compass was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass from Asia to Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century.
Display technologies
The principle of liquid-crystal display (LCD) technology relies on the rotation of the axis of linear polarization by the liquid crystal array. Light from the backlight (or the back reflective layer, in devices not including or requiring a backlight) first passes through a linear polarizing sheet. That polarized light passes through the actual liquid crystal layer which may be organized in pixels (for a TV or computer monitor) or in another format such as a seven-segment display or one with custom symbols for a particular product. The liquid crystal layer is produced with a consistent right (or left) handed chirality, essentially consisting of tiny helices. This causes circular birefringence, and is engineered so that there is a 90 degree rotation of the linear polarization state. However, when a voltage is applied across a cell, the molecules straighten out, lessening or totally losing the circular birefringence. On the viewing side of the display is another linear polarizing sheet, usually oriented at 90 degrees from the one behind the active layer. Therefore, when the circular birefringence is removed by the application of a sufficient voltage, the polarization of the transmitted light remains at right angles to the front polarizer, and the pixel appears dark. With no voltage, however, the 90 degree rotation of the polarization causes it to exactly match the axis of the front polarizer, allowing the light through. Intermediate voltages create intermediate rotation of the polarization axis and the pixel has an intermediate intensity. Displays based on this principle are widespread, and now are used in the vast majority of televisions, computer monitors and video projectors, rendering the previous CRT technology essentially obsolete. The use of polarization in the operation of LCD displays is immediately apparent to someone wearing polarized sunglasses, often making the display unreadable.
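The on/off behaviour described above can be captured with an idealized Jones-calculus sketch that treats the liquid-crystal cell as a pure polarization rotator between crossed polarizers; it ignores wavelength dependence and the details of real twisted-nematic cells, so the numbers are only indicative.

    import numpy as np

    def rotator(theta):
        """Ideal polarization rotator by angle theta (radians)."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    # Rear polarizer passes x; front polarizer passes y (crossed at 90 degrees).
    front_polarizer = np.array([[0, 0], [0, 1]])
    E_backlight = np.array([1.0, 0.0])      # light after the rear polarizer

    for rotation_deg in (90, 60, 30, 0):    # 90 = no voltage, 0 = full voltage
        E_out = front_polarizer @ rotator(np.radians(rotation_deg)) @ E_backlight
        intensity = np.sum(np.abs(E_out)**2)
        print(f"LC rotation {rotation_deg:2d} deg -> relative pixel brightness {intensity:.2f}")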
In a totally different sense, polarization encoding has become the leading (but not sole) method for delivering separate images to the left and right eye in stereoscopic displays used for 3D movies. This involves separate images intended for each eye either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarizing filters ensure that each eye receives only the intended image. Historically such systems used linear polarization encoding because it was inexpensive and offered good separation. However, circular polarization makes separation of the two images insensitive to tilting of the head, and is widely used in 3-D movie exhibition today, such as the system from RealD. Projecting such images requires screens that maintain the polarization of the projected light when viewed in reflection (such as silver screens); a normal diffuse white projection screen causes depolarization of the projected images, making it unsuitable for this application.
Although now obsolete, CRT computer displays suffered from reflection by the glass envelope, causing glare from room lights and consequently poor contrast. Several anti-reflection solutions were employed to ameliorate this problem. One solution utilized the principle of reflection of circularly polarized light. A circular polarizing filter in front of the screen allows for the transmission of (say) only right circularly polarized room light. Now, right circularly polarized light (depending on the convention used) has its electric (and magnetic) field direction rotating clockwise while propagating in the +z direction. Upon reflection, the field still has the same direction of rotation, but now propagation is in the −z direction making the reflected wave left circularly polarized. With the right circular polarization filter placed in front of the reflecting glass, the unwanted light reflected from the glass will thus be in the very polarization state that is blocked by that filter, eliminating the reflection problem. The reversal of circular polarization on reflection and elimination of reflections in this manner can be easily observed by looking in a mirror while wearing 3-D movie glasses which employ left- and right-handed circular polarization in the two lenses. Closing one eye, the other eye will see a reflection in which it cannot see itself; that lens appears black. However, the other lens (of the closed eye) will have the correct circular polarization allowing the closed eye to be easily seen by the open one.
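A Jones-vector sketch shows why the filter blocks its own reflection. The right/left circular basis vectors below follow one common sign convention (conventions differ), and the reflection is modelled simply as a swap of handedness, as described above.

    import numpy as np

    # One common convention: right- and left-circular Jones vectors.
    R = np.array([1, -1j]) / np.sqrt(2)
    L = np.array([1,  1j]) / np.sqrt(2)

    # Projector that passes only right-circular light (an ideal circular polarizer).
    pass_right = np.outer(R, R.conj())

    # Room light transmitted by the filter is right-circular.  Reflection at the
    # glass reverses the sense of rotation relative to the new propagation
    # direction, so the returning light is left-circular (modelled here by
    # simply swapping R for L).
    reflected = L

    blocked = pass_right @ reflected
    print("power of reflected light leaking back through the filter:",
          round(float(np.sum(np.abs(blocked)**2)), 12))   # ~0: glare suppressed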
Radio transmission and reception
All radio (and microwave) antennas used for transmitting or receiving are intrinsically polarized. They transmit in (or receive signals from) a particular polarization, being totally insensitive to the opposite polarization; in certain cases that polarization is a function of direction. Most antennas are nominally linearly polarized, but elliptical and circular polarization is a possibility. In the case of linear polarization, the same kind of filtering as described above is possible. In the case of elliptical polarization (circular polarization is in reality just elliptical polarization in which the two axes of the ellipse are of equal length), filtering out a single angle (e.g. 90°) has virtually no effect, since at any moment the field can point in any direction over the full 360 degrees.
The vast majority of antennas are linearly polarized. In fact it can be shown from considerations of symmetry that an antenna that lies entirely in a plane which also includes the observer, can only have its polarization in the direction of that plane. This applies to many cases, allowing one to easily infer such an antenna's polarization at an intended direction of propagation. So a typical rooftop Yagi or log-periodic antenna with horizontal conductors, as viewed from a second station toward the horizon, is necessarily horizontally polarized. But a vertical "whip antenna" or AM broadcast tower used as an antenna element (again, for observers horizontally displaced from it) will transmit in the vertical polarization. A turnstile antenna with its four arms in the horizontal plane, likewise transmits horizontally polarized radiation toward the horizon. However, when that same turnstile antenna is used in the "axial mode" (upwards, for the same horizontally-oriented structure) its radiation is circularly polarized. At intermediate elevations it is elliptically polarized.
Polarization is important in radio communications because, for instance, if one attempts to use a horizontally polarized antenna to receive a vertically polarized transmission, the signal strength will be substantially reduced (or under very controlled conditions, reduced to nothing). This principle is used in satellite television in order to double the channel capacity over a fixed frequency band. The same frequency channel can be used for two signals broadcast in opposite polarizations. By adjusting the receiving antenna for one or the other polarization, either signal can be selected without interference from the other.
Especially due to the presence of the ground, there are some differences in propagation (and also in reflections responsible for TV ghosting) between horizontal and vertical polarizations. AM and FM broadcast radio usually use vertical polarization, while television uses horizontal polarization. At low frequencies especially, horizontal polarization is avoided. That is because the phase of a horizontally polarized wave is reversed upon reflection by the ground. A distant station in the horizontal direction will receive both the direct and reflected wave, which thus tend to cancel each other. This problem is avoided with vertical polarization. Polarization is also important in the transmission of radar pulses and reception of radar reflections by the same or a different antenna. For instance, back scattering of radar pulses by rain drops can be avoided by using circular polarization. Just as specular reflection of circularly polarized light reverses the handedness of the polarization, as discussed above, the same principle applies to scattering by objects much smaller than a wavelength such as rain drops. On the other hand, reflection of that wave by an irregular metal object (such as an airplane) will typically introduce a change in polarization and (partial) reception of the return wave by the same antenna.
The effect of free electrons in the ionosphere, in conjunction with the earth's magnetic field, causes Faraday rotation, a sort of circular birefringence. This is the same mechanism which can rotate the axis of linear polarization by electrons in interstellar space as mentioned above. The magnitude of Faraday rotation caused by such a plasma is much greater at lower frequencies, so at the higher microwave frequencies used by satellites the effect is minimal. However, medium or short wave transmissions received following refraction by the ionosphere are strongly affected. Since a wave's path through the ionosphere and the earth's magnetic field vector along such a path are rather unpredictable, a wave transmitted with vertical (or horizontal) polarization will generally have a resulting polarization in an arbitrary orientation at the receiver.
Polarization and vision
Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth.
The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye.
Angular momentum using circular polarization
It is well known that electromagnetic radiation carries a certain linear momentum in the direction of propagation. In addition, however, light carries a certain angular momentum if it is circularly polarized (or partially so). In comparison with lower frequencies such as microwaves, the amount of angular momentum in light, even of pure circular polarization, compared to the same wave's linear momentum (or radiation pressure) is very small and difficult to even measure. However, it was utilized in an experiment to achieve speeds of up to 600 million revolutions per minute.
See also
Quantum Physics
Plane of polarization
Spin angular momentum of light
Optics
Depolarizer (optics)
Fluorescence anisotropy
Glan–Taylor prism
Kerr effect
Nicol prism
Pockels effect
Polarization rotator
Polarized light microscopy
Polarizer
Polaroid (polarizer)
Radial polarization
Rayleigh sky model
Waveplate
References
Cited references
General references
External links
Feynman's lecture on polarization
Polarized Light Digital Image Gallery: Microscopic images made using polarization effects
MathPages: The relationship between photon spin and polarization
A virtual polarization microscope
Polarization angle in satellite dishes.
Molecular Expressions: Science, Optics and You — Polarization of Light: Interactive Java tutorial
Antenna Polarization
Animations of Linear, Circular and Elliptical Polarizations on YouTube
Electromagnetic radiation
Antennas (radio)
Broadcast engineering
Physical optics | 0.765251 | 0.998523 | 0.76412 |
Fusion power | Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as fusion reactors. Research into fusion reactors began in the 1940s, but as of 2024, no device has reached net power, although net positive reactions have been achieved.
Fusion processes require fuel and a confined environment with sufficient temperature, pressure, and confinement time to create a plasma in which fusion can occur. The combination of these figures that results in a power-producing system is known as the Lawson criterion. In stars the most common fuel is hydrogen, and gravity provides extremely long confinement times that reach the conditions needed for fusion energy production. Proposed fusion reactors generally use heavy hydrogen isotopes such as deuterium and tritium (and especially a mixture of the two), which react more easily than protium (the most common hydrogen isotope) and produce a helium nucleus and an energized neutron, to allow them to reach the Lawson criterion requirements with less extreme conditions. Most designs aim to heat their fuel to around 100 million kelvins, which presents a major challenge in producing a successful design. Tritium is extremely rare on Earth, having a half life of only ~12.3 years. Consequently, during the operation of envisioned fusion reactors, known as breeder reactors, helium cooled pebble beds (HCPBs) are subjected to neutron fluxes to generate tritium to complete the fuel cycle.
As a source of power, nuclear fusion has a number of potential advantages compared to fission. These include reduced radioactivity in operation, little high-level nuclear waste, ample fuel supplies (assuming tritium breeding or some forms of aneutronic fuels), and increased safety. However, the necessary combination of temperature, pressure, and duration has proven to be difficult to produce in a practical and economical manner. A second issue that affects common reactions is managing neutrons that are released during the reaction, which over time degrade many common materials used within the reaction chamber.
Fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator, and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are under research at very large scales, most notably the ITER tokamak in France and the National Ignition Facility (NIF) laser in the United States. Researchers are also studying other designs that may offer less expensive approaches. Among these alternatives, there is increasing interest in magnetized target fusion and inertial electrostatic confinement, and new variations of the stellarator.
Background
Mechanism
Fusion reactions occur when two or more atomic nuclei come close enough for long enough that the nuclear force pulling them together exceeds the electrostatic force pushing them apart, fusing them into heavier nuclei. For nuclei heavier than iron-56, the reaction is endothermic, requiring an input of energy. The heavy nuclei bigger than iron have many more protons resulting in a greater repulsive force. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy when they fuse. Since hydrogen has a single proton in its nucleus, it requires the least effort to attain fusion, and yields the most net energy output. Also since it has one electron, hydrogen is the easiest fuel to fully ionize.
The repulsive electrostatic interaction between nuclei operates across larger distances than the strong force, which has a range of roughly one femtometer—the diameter of a proton or neutron. The fuel atoms must be supplied enough kinetic energy to approach one another closely enough for the strong force to overcome the electrostatic repulsion in order to initiate fusion. The "Coulomb barrier" is the quantity of kinetic energy required to move the fuel atoms near enough. Atoms can be heated to extremely high temperatures or accelerated in a particle accelerator to produce this energy.
An atom loses its electrons once it is heated past its ionization energy. An ion is the name for the resultant bare nucleus. The result of this ionization is plasma, which is a heated cloud of ions and free electrons that were formerly bound to them. Plasmas are electrically conducting and magnetically controlled because the charges are separated. This is used by several fusion devices to confine the hot particles.
Cross section
A reaction's cross section, denoted σ, measures the probability that a fusion reaction will happen. This depends on the relative velocity of the two nuclei. Higher relative velocities generally increase the probability, but the probability begins to decrease again at very high energies.
In a plasma, particle velocity can be characterized using a probability distribution. If the plasma is thermalized, the distribution looks like a Gaussian curve, or Maxwell–Boltzmann distribution. In this case, it is useful to use the average particle cross section over the velocity distribution. This average is entered into the volumetric fusion rate (a numerical sketch in Python follows the definitions below):

$$P_{\text{fusion}} = n_A \, n_B \, \langle \sigma v_{A,B} \rangle \, E_{\text{fusion}}$$

where:
$P_{\text{fusion}}$ is the energy made by fusion, per time and volume,
$n_A$ and $n_B$ are the number densities of species A and B in the volume,
$\langle \sigma v_{A,B} \rangle$ is the cross section of that reaction, averaged over all relative velocities $v$ of the two species,
$E_{\text{fusion}}$ is the energy released by that fusion reaction.
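As an order-of-magnitude illustration of this rate for a 50/50 deuterium–tritium plasma, the Python sketch below uses a representative reactivity value (⟨σv⟩ of order 10⁻²² m³/s near 10 keV); both the reactivity and the density are assumed round numbers, not tabulated data.

    # Order-of-magnitude evaluation of the volumetric fusion rate
    # P_fusion = n_D * n_T * <sigma*v> * E_fusion for a 50/50 D-T plasma.
    # The reactivity below is a representative assumed value, not a tabulated one.

    E_FUSION_J = 17.6e6 * 1.602e-19   # 17.6 MeV per D-T reaction, in joules
    SIGMA_V    = 1.1e-22              # m^3/s, roughly the D-T reactivity near 10 keV (assumed)
    n_total    = 1.0e20               # ions per m^3, magnetic-confinement-like density
    n_D = n_T  = 0.5 * n_total        # equal deuterium/tritium mix

    power_density = n_D * n_T * SIGMA_V * E_FUSION_J   # W/m^3
    print(f"fusion power density ~ {power_density / 1e6:.2f} MW/m^3")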
Lawson criterion
The Lawson criterion considers the energy balance between the energy produced in fusion reactions and the energy lost to the environment. In order to generate usable energy, a system would have to produce more energy than it loses. Lawson assumed an energy balance, shown below (a numerical sketch in Python follows the definitions):

$$P_{\text{net}} = \eta_{\text{capture}} \left( P_{\text{fusion}} - P_{\text{conduction}} - P_{\text{radiation}} \right)$$

where:
$P_{\text{net}}$ is the net power from fusion,
$\eta_{\text{capture}}$ is the efficiency of capturing the output of the fusion,
$P_{\text{fusion}}$ is the rate of energy generated by the fusion reactions,
$P_{\text{conduction}}$ is the conduction losses as energetic mass leaves the plasma,
$P_{\text{radiation}}$ is the radiation losses as energy leaves as light.
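A minimal sketch of this bookkeeping, with invented power figures, is below; it only illustrates the balance, not any particular device.

    def lawson_net_power(eta_capture, p_fusion, p_conduction, p_radiation):
        """Net power according to the balance above:
        P_net = eta * (P_fusion - P_conduction - P_radiation)."""
        return eta_capture * (p_fusion - p_conduction - p_radiation)

    # Made-up illustrative figures (in megawatts) for a hypothetical plasma.
    net = lawson_net_power(eta_capture=0.4, p_fusion=500.0,
                           p_conduction=150.0, p_radiation=100.0)
    print(f"net power: {net:.0f} MW")   # positive -> the system produces usable energy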
The rate of fusion, and thus Pfusion, depends on the temperature and density of the plasma. The plasma loses energy through conduction and radiation. Conduction occurs when ions, electrons, or neutrals impact other substances, typically a surface of the device, and transfer a portion of their kinetic energy to the other atoms. The rate of conduction is also based on the temperature and density. Radiation is energy that leaves the cloud as light. Radiation also increases with temperature as well as the mass of the ions. Fusion power systems must operate in a region where the rate of fusion is higher than the losses.
Triple product: density, temperature, time
The Lawson criterion argues that a machine holding a thermalized and quasi-neutral plasma has to generate enough energy to overcome its energy losses. The amount of energy released in a given volume is a function of the temperature, and thus the reaction rate on a per-particle basis, the density of particles within that volume, and finally the confinement time, the length of time that energy stays within the volume. This is known as the "triple product": the plasma density, temperature, and confinement time.
In magnetic confinement, the density is low, on the order of a "good vacuum". For instance, in the ITER device the fuel density is about one-millionth of atmospheric density. This means that the temperature and/or confinement time must increase. Fusion-relevant temperatures have been achieved using a variety of heating methods that were developed in the early 1970s. In modern machines, the major remaining issue is the confinement time. Plasmas in strong magnetic fields are subject to a number of inherent instabilities, which must be suppressed to reach useful durations. One way to do this is to simply make the reactor volume larger, which reduces the rate of leakage due to classical diffusion. This is why ITER is so large.
In contrast, inertial confinement systems approach useful triple product values via higher density, and have short confinement intervals. In NIF, the initial frozen hydrogen fuel load has a density less than water that is increased to about 100 times the density of lead. In these conditions, the rate of fusion is so high that the fuel fuses in the microseconds it takes for the heat generated by the reactions to blow the fuel apart. Although NIF is also large, this is a function of its "driver" design, not inherent to the fusion process.
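The two regimes can be compared against the commonly quoted D-T ignition triple product of roughly 3×10²¹ keV·s/m³. The parameter sets in the Python sketch below are round illustrative numbers, not design values.

    # Triple product n*T*tau compared against the commonly quoted D-T ignition
    # threshold of roughly 3e21 keV*s/m^3.  Parameter sets are illustrative.

    THRESHOLD = 3e21   # keV * s / m^3 (approximate D-T ignition requirement)

    regimes = {
        # name: (density [m^-3], temperature [keV], confinement time [s])
        "magnetic confinement (ITER-like)": (1e20, 10.0, 3.0),
        "inertial confinement (ICF-like)":  (1e31, 10.0, 1e-10),
    }

    for name, (n, T, tau) in regimes.items():
        triple = n * T * tau
        status = "meets" if triple >= THRESHOLD else "falls short of"
        print(f"{name}: n*T*tau = {triple:.1e} keV*s/m^3 ({status} ~{THRESHOLD:.0e})")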
Energy capture
Multiple approaches have been proposed to capture the energy that fusion produces. The simplest is to heat a fluid. The commonly targeted D-T reaction releases much of its energy as fast-moving neutrons. Electrically neutral, the neutron is unaffected by the confinement scheme. In most designs, it is captured in a thick "blanket" of lithium surrounding the reactor core. When struck by a high-energy neutron, the blanket heats up. It is then actively cooled with a working fluid that drives a turbine to produce power.
Another design proposed to use the neutrons to breed fission fuel in a blanket of nuclear waste, a concept known as a fission-fusion hybrid. In these systems, the power output is enhanced by the fission events, and power is extracted using systems like those in conventional fission reactors.
Designs that use other fuels, notably the proton-boron aneutronic fusion reaction, release much more of their energy in the form of charged particles. In these cases, power extraction systems based on the movement of these charges are possible. Direct energy conversion was developed at Lawrence Livermore National Laboratory (LLNL) in the 1980s as a method to maintain a voltage directly using fusion reaction products. This has demonstrated energy capture efficiency of 48 percent.
Plasma behavior
Plasma is an ionized gas that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, which is a combination of the Navier–Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including:
Self-organizing plasma conducts electric and magnetic fields. Its motions generate fields that can in turn contain it.
Diamagnetic plasma can generate its own internal magnetic field. This can reject an externally applied magnetic field, making it diamagnetic.
Magnetic mirrors can reflect plasma when it moves from a region of low to high magnetic field strength.
Methods
Magnetic confinement
Tokamak: the most well-developed and well-funded approach. This method drives hot plasma around in a magnetically confined torus, with an internal current. When completed, ITER will become the world's largest tokamak. As of September 2018 an estimated 226 experimental tokamaks were planned, decommissioned or operating (50 of them operating) worldwide.
Spherical tokamak: also known as spherical torus. A variation on the tokamak with a spherical shape.
Stellarator: Twisted rings of hot plasma. The stellarator attempts to create a natural twisted plasma path, using external magnets. Stellarators were developed by Lyman Spitzer in 1950 and evolved into four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German device. It is the world's largest stellarator.
Internal rings: Stellarators create a twisted plasma using external magnets, while tokamaks do so using a current induced in the plasma. Several classes of designs provide this twist using conductors inside the plasma. Early calculations showed that collisions between the plasma and the supports for the conductors would remove energy faster than fusion reactions could replace it. Modern variations, including the Levitated Dipole Experiment (LDX), use a solid superconducting torus that is magnetically levitated inside the reactor chamber.
Magnetic mirror: Developed by Richard F. Post and teams at Lawrence Livermore National Laboratory (LLNL) in the 1960s. Magnetic mirrors reflect plasma back and forth in a line. Variations included the Tandem Mirror, magnetic bottle and the biconic cusp. A series of mirror machines were built by the US government in the 1970s and 1980s, principally at LLNL. However, calculations in the 1970s estimated it was unlikely these would ever be commercially useful.
Bumpy torus: A number of magnetic mirrors are arranged end-to-end in a toroidal ring. Any fuel ions that leak out of one are confined in a neighboring mirror, permitting the plasma pressure to be raised arbitrarily high without loss. An experimental facility, the ELMO Bumpy Torus or EBT was built and tested at Oak Ridge National Laboratory (ORNL) in the 1970s.
Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure; where the particle motion makes an internal magnetic field which then traps itself.
Spheromak: Similar to a field-reversed configuration, a semi-stable plasma structure made by using the plasmas' self-generated magnetic field. A spheromak has both toroidal and poloidal fields, while a field-reversed configuration has no toroidal field.
Dynomak is a spheromak that is formed and sustained using continuous magnetic flux injection.
Reversed field pinch: Here the plasma moves inside a ring. It has an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction.
Inertial confinement
Indirect drive: Lasers heat a structure known as a Hohlraum that becomes so hot it begins to radiate x-ray light. These x-rays heat a fuel pellet, causing it to collapse inward to compress the fuel. The largest system using this method is the National Ignition Facility, followed closely by Laser Mégajoule.
Direct drive: Lasers directly heat the fuel pellet. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics (LLE) and the GEKKO XII facilities. Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma.
Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second ignites it. This technique has since lost favor for energy production.
Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as magnetized liner inertial fusion while the ICF community refers to it as magneto-inertial fusion.
Ion Beams: Ion beams replace laser beams to heat the fuel. The main difference is that the beam has momentum due to mass, whereas lasers do not. As of 2019 it appears unlikely that ion beams can be sufficiently focused spatially and in time.
Z-machine: Sends an electric current through thin tungsten wires, heating them sufficiently to generate x-rays. Like the indirect drive approach, these x-rays then compress a fuel capsule.
Magnetic or electric pinches
Z-pinch: A current travels in the z-direction through the plasma. The current generates a magnetic field that compresses the plasma. Pinches were the first method for human-made controlled fusion. The z-pinch has inherent instabilities that limit its compression and heating to values too low for practical fusion. The largest such machine, the UK's ZETA, was the last major experiment of the sort. The problems in z-pinch led to the tokamak design. The dense plasma focus is a possibly superior variation.
Theta-pinch: A current circles around the outside of a plasma column, in the theta direction. This induces a magnetic field running down the center of the plasma, as opposed to around it. The early theta-pinch device Scylla was the first to conclusively demonstrate fusion, but later work demonstrated it had inherent limits that made it uninteresting for power production.
Sheared Flow Stabilized Z-Pinch: Research at the University of Washington under Uri Shumlak investigated the use of sheared-flow stabilization to smooth out the instabilities of Z-pinch reactors. This involves accelerating neutral gas along the axis of the pinch. Experimental machines included the FuZE and Zap Flow Z-Pinch experimental reactors. In 2017, British technology investor and entrepreneur Benj Conway, together with physicists Brian Nelson and Uri Shumlak, co-founded Zap Energy to attempt to commercialize the technology for power production.
Screw Pinch: This method combines a theta and z-pinch for improved stabilization.
Inertial electrostatic confinement
Fusor: An electric field heats ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them.
Polywell: Attempts to combine magnetic confinement with electrostatic fields, to avoid the conduction losses generated by the cage.
Other
Magnetized target fusion: Confines hot plasma using a magnetic field and squeezes it using inertia. Examples include LANL FRX-L machine, General Fusion (piston compression with liquid metal liner), HyperJet Fusion (plasma jet compression with plasma liner).
Uncontrolled: Fusion has been initiated by man, using uncontrolled fission explosions to stimulate fusion. Early proposals for fusion power included using bombs to initiate reactions. See Project PACER.
Beam fusion: A beam of high energy particles fired at another beam or target can initiate fusion. This was used in the 1970s and 1980s to study the cross sections of fusion reactions. However beam systems cannot be used for power because keeping a beam coherent takes more energy than comes from fusion.
Muon-catalyzed fusion: This approach replaces electrons in diatomic molecules of isotopes of hydrogen with muons—more massive particles with the same electric charge. Their greater mass compresses the nuclei enough such that the strong interaction can cause fusion. As of 2007 producing muons required more energy than can be obtained from muon-catalyzed fusion.
Lattice confinement fusion: Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion.
Common tools
Many approaches, equipment, and mechanisms are employed across multiple projects to address fusion heating, measurement, and power production.
Machine learning
A deep reinforcement learning system has been used to control a tokamak-based reactor. The system manipulated the magnetic coils to manage the plasma, continuously adjusting to maintain appropriate behavior (a more complex task than step-based control systems can handle). In 2014, Google began working with California-based fusion company TAE Technologies to control the Joint European Torus (JET) to predict plasma behavior. DeepMind has also developed a control scheme for TCV.
Heating
Electrostatic heating: an electric field can do work on charged ions or electrons, heating them.
Neutral beam injection: hydrogen is ionized and accelerated by an electric field to form a charged beam that is shone through a source of neutral hydrogen gas towards the plasma which itself is ionized and contained by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral: this neutral beam is thus unaffected by the magnetic field and so reaches the plasma. Once inside the plasma the neutral beam transmits energy to the plasma by collisions which ionize it and allow it to be contained by the magnetic field, thereby both heating and refueling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps.
Radio frequency heating: a radio wave causes the plasma to oscillate (as in a microwave oven). This is also known as electron cyclotron resonance heating, using for example gyrotrons, or dielectric heating.
Magnetic reconnection: when plasma gets dense, its electromagnetic properties can change, which can lead to magnetic reconnection. Reconnection helps fusion because it instantly dumps energy into a plasma, heating it quickly. Up to 45% of the magnetic field energy can heat the ions.
Magnetic oscillations: varying electric currents can be supplied to magnetic coils that heat plasma confined within a magnetic wall.
Antiproton annihilation: antiprotons injected into a mass of fusion fuel can induce thermonuclear reactions. This possibility as a method of spacecraft propulsion, known as antimatter-catalyzed nuclear pulse propulsion, was investigated at Pennsylvania State University in connection with the proposed AIMStar project.
Measurement
The diagnostics of a fusion scientific reactor are extremely complex and varied. The diagnostics required for a fusion power reactor will be various but less complicated than those of a scientific reactor as by the time of commercialization, many real-time feedback and control diagnostics will have been perfected. However, the operating environment of a commercial fusion reactor will be harsher for diagnostic systems than in a scientific reactor because continuous operations may involve higher plasma temperatures and higher levels of neutron irradiation. In many proposed approaches, commercialization will require the additional ability to measure and separate diverter gases, for example helium and impurities, and to monitor fuel breeding, for instance the state of a tritium breeding liquid lithium liner. The following are some basic techniques.
Flux loop: a loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is induced. The current measures the total magnetic flux through that loop. This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines. A Langmuir probe, a metal object placed in a plasma, can also be employed. A potential is applied to it, giving it a voltage against the surrounding plasma. The metal collects charged particles, drawing a current. As the voltage changes, the current changes, tracing out an I–V curve. The I–V curve can be used to determine the local plasma density, potential and temperature (see the probe sketch after this list).
Thomson scattering: "Light scatters" from plasma can be used to reconstruct plasma behavior, including density and temperature. It is common in Inertial confinement fusion, Tokamaks, and fusors. In ICF systems, firing a second beam into a gold foil adjacent to the target makes x-rays that traverse the plasma. In tokamaks, this can be done using mirrors and detectors to reflect light.
Neutron detectors: Several types of neutron detectors can record the rate at which neutrons are produced.
X-ray detectors: Visible, IR, UV, and X-rays are emitted anytime a particle changes velocity. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, the plasma radiates X-rays, known as Bremsstrahlung radiation.
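As an example of how one such diagnostic is reduced to a plasma parameter: in the electron-retardation region of a Langmuir-probe I–V curve the collected current grows roughly as exp(V/Tₑ), so the slope of ln I versus V gives the electron temperature. The Python sketch below fits synthetic data generated with an assumed temperature; it is an idealized illustration, not a full probe analysis.

    import numpy as np

    # Synthetic Langmuir-probe data in the electron-retardation region, where
    # the collected electron current grows as I ~ exp(V / Te)  (Te in eV).
    Te_true  = 5.0                     # assumed electron temperature, eV
    I_ref    = 1e-3                    # reference current, A (arbitrary)
    voltages = np.linspace(-20.0, -5.0, 30)          # probe bias below plasma potential
    currents = I_ref * np.exp(voltages / Te_true)
    currents *= 1 + 0.02 * np.random.default_rng(0).standard_normal(voltages.size)  # noise

    # Slope of ln(I) versus V gives 1/Te.
    slope, _ = np.polyfit(voltages, np.log(currents), 1)
    print(f"recovered electron temperature: {1.0 / slope:.2f} eV (true {Te_true} eV)")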
Power production
Neutron blankets absorb neutrons, which heats the blanket. Power can be extracted from the blanket in various ways:
Steam turbines can be driven by heat transferred into a working fluid that turns into steam, driving electric generators.
Neutron blankets: These neutrons can regenerate spent fission fuel. Tritium can be produced using a breeder blanket of liquid lithium or a helium cooled pebble bed made of lithium-bearing ceramic pebbles.
Direct conversion: The kinetic energy of a particle can be converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors, in the late 1960s. It has been proposed for Field-Reversed Configurations as well as Dense Plasma Focus devices. The process converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent.
Traveling-wave tubes pass charged helium ions, just coming off the fusion reaction at several megavolts, through a tube with a coil of wire around the outside. This passing charge at high voltage drives a current through the wire.
Confinement
Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. General principles:
Equilibrium: The forces acting on the plasma must be balanced. One exception is inertial confinement, where the fusion must occur faster than the dispersal time.
Stability: The plasma must be constructed so that disturbances will not lead to the plasma dispersing.
Transport or conduction: The loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt fusion. Material can be lost by transport into different regions or conduction through a solid or liquid.
To produce self-sustaining fusion, part of the energy released by the reaction must be used to heat new reactants and maintain the conditions for fusion.
Magnetic confinement
Magnetic Mirror
Magnetic mirror effect: if a particle follows a field line into a region of higher field strength, it can be reflected. Several devices apply this effect. The most famous were the magnetic mirror machines, a series of devices built at LLNL from the 1960s to the 1980s. Other examples include magnetic bottles and the biconic cusp. Because the mirror machines were straight, they had some advantages over ring-shaped designs: the mirrors were easier to construct and maintain, and direct-conversion energy capture was easier to implement. Poor confinement has led this approach to be abandoned, except in the polywell design.
Magnetic loops
Magnetic loops bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed systems of this type are the tokamak, the stellarator, and the reversed field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area.
Inertial confinement
Inertial confinement is the use of rapid implosion to heat and confine plasma. A shell surrounding the fuel is imploded using a direct laser blast (direct drive), a secondary x-ray blast (indirect drive), or heavy beams. The fuel must be compressed to about 30 times solid density with energetic beams. Direct drive can in principle be efficient, but insufficient uniformity has prevented success. Indirect drive uses beams to heat a shell, driving the shell to radiate x-rays, which then implode the pellet. The beams are commonly laser beams, but ion and electron beams have been investigated.
Electrostatic confinement
Electrostatic confinement fusion devices use electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitory high conduction losses. Fusion rates in fusors are low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded-grid, a penning trap, the polywell, and the F1 cathode driver concept.
Fuels
The fuels considered for fusion power have all been light elements like the isotopes of hydrogen—protium, deuterium, and tritium. The deuterium and helium-3 reaction requires helium-3, an isotope of helium so scarce on Earth that it would have to be mined extraterrestrially or produced by other nuclear reactions. Ultimately, researchers hope to adopt the protium–boron-11 reaction, because it does not directly produce neutrons, although side reactions can.
Deuterium, tritium
The easiest nuclear reaction, at the lowest energy, is D+T:
D + T → ⁴He (3.5 MeV) + n (14.1 MeV)
This reaction is common in research, industrial and military applications, usually as a neutron source. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find, store, produce, and is expensive. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:
n + ⁶Li → T + ⁴He
n + ⁷Li → T + ⁴He + n
The reactant neutron is supplied by the D-T fusion reaction shown above, which also has the greatest energy yield. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic, but does not consume the neutron. Neutron multiplication reactions are required to replace the neutrons lost to absorption by other elements. Leading candidate neutron multiplication materials are beryllium and lead, but the 7Li reaction helps to keep the neutron population high. Natural lithium is mainly 7Li, which has a low tritium production cross section compared to 6Li, so most reactor designs use breeding blankets with enriched 6Li.
Drawbacks commonly attributed to D-T fusion power include:
The supply of neutrons results in neutron activation of the reactor materials.
80% of the resultant energy is carried off by neutrons, which limits the use of direct energy conversion.
It requires the radioisotope tritium. Tritium may leak from reactors. Some estimates suggest that this would represent a substantial environmental radioactivity release.
The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that it required remote handling for the year following the tests.
In a production setting, the neutrons would react with lithium in the breeding blanket composed of lithium ceramic pebbles or liquid lithium, yielding tritium. The energy of the neutrons ends up in the lithium, which would then be transferred to drive electrical production. The lithium blanket protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, use lithium inside the reactor core as a design element. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment.
Deuterium
Fusing two deuterium nuclei is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability:
D + D → T + p
D + D → ³He + n
This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than that for the D-T reaction. The first branch produces tritium, so that a D-D reactor is not tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons are quickly removed, most of the tritium produced is burned in the reactor, which reduces the handling of tritium, with the disadvantage of producing more, and higher-energy, neutrons. The neutron from the second branch of the D-D reaction has an energy of only about 2.45 MeV, while the neutron from the D-T reaction has an energy of 14.1 MeV, resulting in greater isotope production and material damage. When the tritons are removed quickly while allowing the 3He to react, the fuel cycle is called "tritium suppressed fusion". The removed tritium decays to 3He with a 12.5 year half life. By recycling the 3He decay into the reactor, the fusion reactor does not require materials resistant to fast neutrons.
Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) is 68 times less.
Assuming complete removal of tritium and 3He recycling, only 6% of the fusion energy is carried by neutrons. The tritium-suppressed D-D fusion requires an energy confinement that is 10 times longer compared to D-T and double the plasma temperature.
Deuterium, helium-3
A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H):
D + ³He → ⁴He + p
This reaction produces 4He and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several pathways). In practice, D-D side reactions produce a significant number of neutrons, leaving p-11B as the preferred cycle for aneutronic fusion.
Proton, boron-11
Both material science problems and non-proliferation concerns are greatly diminished by aneutronic fusion. Theoretically, the most reactive aneutronic fuel is 3He. However, obtaining reasonable quantities of 3He implies large-scale extraterrestrial mining on the Moon or in the atmosphere of Uranus or Saturn. Therefore, the most promising candidate for such fusion is the reaction between readily available protium (i.e. a proton) and boron. Their fusion releases no neutrons, but produces energetic charged alpha (helium) particles whose energy can be converted directly to electrical power:
p + 11B → 3 × 4He + 8.7 MeV
Side reactions are likely to yield neutrons that carry only about 0.1% of the power, which means that neutrons are not used for energy transfer and material activation is reduced several thousand-fold. The optimum temperature for this reaction, 123 keV, is nearly ten times higher than that for pure hydrogen reactions, and energy confinement must be 500 times better than that required for the D-T reaction. In addition, the power density is 2500 times lower than for D-T, although per unit mass of fuel this is still considerably higher than for fission reactors.
Because the confinement properties of the tokamak and laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense Plasma Focus. In 2013, a research team led by Christine Labaune at École Polytechnique, reported a new fusion rate record for proton-boron fusion, with an estimated 80 million fusion reactions during a 1.5 nanosecond laser fire, 100 times greater than reported in previous experiments.
Material selection
Structural material stability is a critical issue. Materials that can survive the high temperatures and neutron bombardment experienced in a fusion reactor are considered key to success. The principal issues are the conditions generated by the plasma, neutron degradation of wall surfaces, and the related issue of plasma-wall surface conditions. Reducing hydrogen permeability is seen as crucial to controlling hydrogen recycling and the tritium inventory. Materials with the lowest bulk hydrogen solubility and diffusivity provide the optimal candidates for stable barriers. A few pure metals, including tungsten and beryllium, and compounds such as carbides, dense oxides, and nitrides have been investigated. Research has highlighted that coating techniques for preparing well-adhered and defect-free barriers are of equivalent importance. The most attractive techniques are those in which an ad-layer is formed by oxidation alone. Alternative methods utilize specific gas environments with strong magnetic and electric fields. Assessing barrier performance represents an additional challenge. Gas permeation through classical coated membranes continues to be the most reliable method of determining hydrogen permeation barrier (HPB) efficiency. In 2021, in response to the increasing number of designs for fusion power reactors targeting 2040, the United Kingdom Atomic Energy Authority published the UK Fusion Materials Roadmap 2021–2040, identifying five priority areas, with a focus on tokamak-family reactors:
Novel materials to minimize the amount of activation in the structure of the fusion power plant;
Compounds that can be used within the power plant to optimise breeding of tritium fuel to sustain the fusion process;
Magnets and insulators that are resistant to irradiation from fusion reactions—especially under cryogenic conditions;
Structural materials able to retain their strength under neutron bombardment at high operating temperatures (over 550 degrees C);
Engineering assurance for fusion materials—providing irradiated sample data and modelled predictions such that plant designers, operators and regulators have confidence that materials are suitable for use in future commercial power stations.
Superconducting materials
In a plasma that is embedded in a magnetic field (known as a magnetized plasma), the fusion rate scales as the fourth power of the magnetic field strength. For this reason, many fusion companies that rely on magnetic fields to control their plasma are trying to develop high-temperature superconducting devices. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making superconducting YBCO wire for fusion reactors. This new wire was shown to carry current densities of between 700 and 2,000 amperes per square millimeter (A/mm²). The company was able to produce 186 miles of wire in nine months.
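To put the quoted current density in perspective, the current a single tape can carry scales with its cross-sectional area. The sketch below assumes an illustrative 4 mm × 0.1 mm tape cross-section; this is a made-up figure for illustration, not a SuperOx specification.

```python
def tape_current(width_mm, thickness_mm, current_density_a_per_mm2):
    """Current carried by a rectangular superconducting tape cross-section."""
    return width_mm * thickness_mm * current_density_a_per_mm2

# Hypothetical 4 mm x 0.1 mm tape, evaluated at both ends of the quoted range
for j in (700, 2000):
    print(f"{j} A/mm^2 -> {tape_current(4.0, 0.1, j):.0f} A per tape")
```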
Containment considerations
Even on smaller production scales, the containment apparatus is blasted with matter and energy. Designs for plasma containment must consider:
A heating and cooling cycle, up to a 10 MW/m² thermal load.
Neutron radiation, which over time leads to neutron activation and embrittlement.
High energy ions leaving at tens to hundreds of electronvolts.
Alpha particles leaving at millions of electronvolts.
Electrons leaving at high energy.
Light radiation (IR, visible, UV, X-ray).
Depending on the approach, these effects may be higher or lower than fission reactors. One estimate put the radiation at 100 times that of a typical pressurized water reactor. Depending on the approach, other considerations such as electrical conductivity, magnetic permeability, and mechanical strength matter. Materials must also not end up as long-lived radioactive waste.
Plasma-wall surface conditions
For long term use, each atom in the wall is expected to be hit by a neutron and displaced about 100 times before the material is replaced. High-energy neutrons produce hydrogen and helium via nuclear reactions that tend to form bubbles at grain boundaries and result in swelling, blistering or embrittlement.
Selection of materials
Tungsten is widely regarded as the optimal material for plasma-facing components in next-generation fusion devices due to its unique properties and potential for enhancements. Its low sputtering rates and high melting point make it particularly suitable for the high-stress environments of fusion reactors, allowing it to withstand intense conditions without rapid degradation. Additionally, tungsten's low tritium retention through co-deposition and implantation is essential in fusion contexts, as it helps to minimize the accumulation of this radioactive isotope.
Liquid metals (lithium, gallium, tin) have been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates.
Graphite features a gross erosion rate due to physical and chemical sputtering amounting to many meters per year, requiring redeposition of the sputtered material. The redeposition site generally does not exactly match the sputter site, allowing net erosion that may be prohibitive. An even larger problem is that tritium is redeposited with the redeposited graphite. The tritium inventory in the wall and dust could build up to many kilograms, representing a waste of resources and a radiological hazard in case of an accident. Graphite found favor as material for short-lived experiments, but appears unlikely to become the primary plasma-facing material (PFM) in a commercial reactor.
Ceramic materials such as silicon carbide (SiC) have issues similar to those of graphite. Tritium retention in silicon carbide plasma-facing components is approximately 1.5–2 times higher than in graphite, resulting in reduced fuel efficiency and heightened safety risks in fusion reactors. SiC tends to trap more tritium, limiting its availability for fusion and increasing the risk of hazardous accumulation, complicating tritium management. Furthermore, the chemical and physical sputtering of SiC remains significant, contributing to tritium buildup through co-deposition over time and with increasing particle fluence. As a result, carbon-based materials have been excluded from ITER, DEMO, and similar devices.
Tungsten's sputtering rate is orders of magnitude smaller than carbon's, and tritium is much less incorporated into redeposited tungsten. However, tungsten plasma impurities are much more damaging than carbon impurities, and self-sputtering can be high, requiring the plasma in contact with the tungsten not be too hot (a few tens of eV rather than hundreds of eV). Tungsten also has issues around eddy currents and melting in off-normal events, as well as some radiological issues.
Safety and the environment
Accident potential
Accident potential and effect on the environment are critical to social acceptance of nuclear fusion, also known as a social license. Fusion reactors are not subject to catastrophic meltdown: producing net energy requires precise and controlled temperature, pressure, and magnetic field parameters, and any damage or loss of the required control would rapidly quench the reaction. Fusion reactors operate with only seconds' or even microseconds' worth of fuel at any moment. Without active refueling, the reactions immediately quench.
The same constraints prevent runaway reactions. Although the plasma in a power reactor is expected to be large in volume, it typically contains only a few grams of fuel. By comparison, a fission reactor is typically loaded with enough fuel for months or years, and no additional fuel is necessary to continue the reaction. This large fuel supply is what makes a meltdown possible.
In magnetic containment, strong fields develop in coils that are mechanically held in place by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to other industrial accidents or an MRI machine quench/explosion, and could be effectively contained within a containment building similar to those used in fission reactors.
In laser-driven inertial containment the larger size of the reaction chamber reduces the stress on materials. Although failure of the reaction chamber is possible, stopping fuel delivery prevents catastrophic failure.
Most reactor designs rely on liquid lithium as both a coolant and a means of converting stray neutrons into tritium, which is fed back into the reactor as fuel. Lithium is flammable, and it is possible that lithium stored on-site could ignite. In this case, the tritium contained in the lithium would enter the atmosphere, posing a radiation risk. Calculations suggest that the total amount of tritium and other radioactive gases in a typical power station would be small enough to dilute to legally acceptable limits by the time it reached the station's perimeter fence.
The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, is estimated to be minor compared to fission. Such accidents would include accidental releases of lithium or tritium or the mishandling of radioactive reactor components.
Magnet quench
A magnet quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil exits the superconducting state (becomes normal). This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two.
More rarely a magnet defect can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal over several seconds, depending on the size of the superconducting coil. This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and the cryogenic fluid boils away. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces.
In practice, magnets usually have safety devices to stop or limit the current when a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air.
A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, destroying multiple magnets. In order to prevent a recurrence, the LHC's superconducting magnets are equipped with fast-ramping heaters that are activated when a quench event is detected. The dipole bending magnets are connected in series. Each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal that heat up to several hundred degrees Celsius—because of resistive heating—in seconds. A magnet quench is a "fairly routine event" during the operation of a particle accelerator.
Effluents
The natural product of the fusion reaction is a small amount of helium, which is harmless to life. Hazardous tritium is difficult to retain completely.
Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~14.95 keV), and because it does not bioaccumulate (it cycles out of the body as water, with a biological half-life of 7 to 14 days). ITER incorporates total containment facilities for tritium.
Radioactive waste
Fusion reactors create far less radioactive material than fission reactors. Further, the material they create is less damaging biologically, and the radioactivity dissipates within a time period that is well within existing engineering capabilities for safe long-term waste storage. In specific terms, except in the case of aneutronic fusion, the neutron flux turns the structural materials radioactive. The amount of radioactive material at shut-down may be comparable to that of a fission reactor, with important differences. The half-lives of fusion and neutron-activation radioisotopes tend to be shorter than those from fission, so that the hazard decreases more rapidly. Whereas fission reactors produce waste that remains radioactive for thousands of years, the radioactive material in a fusion reactor (other than tritium) would be the reactor core itself, and most of this would be radioactive for about 50 years, with other low-level waste being radioactive for another 100 years or so thereafter. The fusion waste's short half-life eliminates the challenge of long-term storage. By 500 years, the material would have the same radiotoxicity as coal ash.
Nonetheless, classification as intermediate level waste rather than low-level waste may complicate safety discussions.
The choice of materials is less constrained than in conventional fission, where many materials are required for their specific neutron cross-sections. Fusion reactors can be designed using "low-activation" materials that do not easily become radioactive. Vanadium, for example, becomes much less radioactive than stainless steel. Carbon-fiber materials are also low-activation, as well as strong and light, and are promising for laser-inertial reactors where a magnetic field is not required.
Nuclear proliferation
In some scenarios, fusion power technology could be adapted to produce materials for military purposes. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can be produced in other ways. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example by transmutation of 238U to 239Pu, or of 232Th to 233U).
A study conducted in 2011 assessed three scenarios:
Small-scale fusion station: As a result of much higher power consumption, heat dissipation and a more recognizable design compared to enrichment gas centrifuges, this choice would be much easier to detect and therefore implausible.
Commercial facility: The production potential is significant, but no fertile or fissile substances necessary for the production of weapon-usable materials need to be present at a civil fusion system at all. If not shielded, these materials can be detected by their characteristic gamma radiation. The underlying redesign could be detected by regular design information verification. In the (technically more feasible) case of solid breeder blanket modules, incoming components would need to be inspected for the presence of fertile material, otherwise plutonium for several weapons could be produced each year.
Prioritizing weapon-grade material regardless of secrecy: The fastest way to produce weapon-usable material was judged to be modifying a civil fusion power station. No weapons-compatible material is required during civil use. Even without the need for covert action, such a modification would take about two months to start production and at least an additional week to generate a significant amount. This was considered to be enough time to detect a military use and to react with diplomatic or military means. To stop the production, military destruction of parts of the facility, while leaving out the reactor itself, would be sufficient.
Another study concluded "...large fusion reactors—even if not designed for fissile material breeding—could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements." It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at an early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with magnetic confinement fusion.
Fuel reserves
Fusion power commonly proposes the use of deuterium as fuel and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10²⁰ J/yr) and that this does not increase in the future, which is unlikely, then known current lithium reserves would last 3000 years. Lithium from sea water would last 60 million years, however, and a more complicated fusion process using only deuterium would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the Sun, and more than 10 times the estimated age of the universe.
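As a rough check of the closing comparison, using round figures of about 5 billion years for the Sun's remaining lifespan and about 13.8 billion years for the age of the universe (approximate values, not taken from this article):

```python
deuterium_fuel_years = 150e9     # deuterium-only fuel supply estimated above
sun_remaining_years = 5e9        # approximate remaining lifespan of the Sun
universe_age_years = 13.8e9      # approximate age of the universe

print(f"{deuterium_fuel_years / sun_remaining_years:.0f} x the Sun's remaining lifespan")
print(f"{deuterium_fuel_years / universe_age_years:.1f} x the age of the universe")
```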
Economics
The EU spent billions of euros on fusion research through the 1990s. ITER represents an investment of over twenty billion dollars, and possibly tens of billions more, including in-kind contributions. Under the European Union's Sixth Framework Programme, nuclear fusion research received substantial dedicated funding in addition to ITER funding, putting research into fusion power well ahead of that of any single rival technology. The United States Department of Energy has allocated $367M–$671M every year since 2010, peaking in 2020, with plans to reduce investment to $425M in its FY2021 Budget Request. About a quarter of this budget is directed to support ITER.
The size of the investments and time lines meant that fusion research was traditionally almost exclusively publicly funded. However, starting in the 2010s, the promise of commercializing a paradigm-changing low-carbon energy source began to attract a raft of companies and investors. Over two dozen start-up companies attracted over one billion dollars from roughly 2000 to 2020, mainly from 2015, and a further three billion in funding and milestone related commitments in 2021, with investors including Jeff Bezos, Peter Thiel and Bill Gates, as well as institutional investors including Legal & General, and energy companies including Equinor, Eni, Chevron, and the Chinese ENN Group. In 2021, Commonwealth Fusion Systems (CFS) obtained $1.8 billion in scale-up funding, and Helion Energy obtained a half-billion dollars with an additional $1.7 billion contingent on meeting milestones.
Scenarios developed in the 2000s and early 2010s discussed the effects of the commercialization of fusion power on the future of human civilization. Using nuclear fission as a guide, these saw ITER and later DEMO as bringing online the first commercial reactors around 2050 and a rapid expansion after mid-century. Some scenarios emphasized "fusion nuclear science facilities" as a step beyond ITER. However, the economic obstacles to tokamak-based fusion power remain immense, requiring investment to fund prototype tokamak reactors and development of new supply chains, a problem which will affect any kind of fusion reactor. Tokamak designs appear to be labour-intensive, while the commercialization risk of alternatives like inertial fusion energy is high due to the lack of government resources.
Scenarios since 2010 note computing and material science advances enabling multi-phase national or cost-sharing "Fusion Pilot Plants" (FPPs) along various technology pathways, such as the UK Spherical Tokamak for Energy Production, within the 2030–2040 time frame. Notably, in June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant will be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2020s. The plant will be 70% of full scale and is expected to attain a stable plasma temperature of 150 million degrees. In the United States, cost-sharing public-private partnership FPPs appear likely, and in 2022 the DOE announced a new Milestone-Based Fusion Development Program as the centerpiece of its Bold Decadal Vision for Commercial Fusion Energy, which envisages private sector-led teams delivering FPP pre-conceptual designs, defining technology roadmaps, and pursuing the R&D necessary to resolve critical-path scientific and technical issues towards an FPP design. Compact reactor technology based on such demonstration plants may enable commercialization via a fleet approach from the 2030s if early markets can be located.
The widespread adoption of non-nuclear renewable energy has transformed the energy landscape. Such renewables are projected to supply 74% of global energy by 2050. The steady fall of renewable energy prices challenges the economic competitiveness of fusion power.
Some economists suggest fusion power is unlikely to match the costs of other renewable energy sources. Fusion plants are expected to face large start-up and capital costs. Moreover, operation and maintenance are likely to be costly. While the costs of the China Fusion Engineering Test Reactor are not well known, an EU DEMO fusion concept was projected to have a levelized cost of energy (LCOE) of $121/MWh.
Fuel costs are low, but economists suggest that the energy cost for a one-gigawatt plant would increase by $16.5 per MWh for every $1 billion increase in the capital investment in construction. There is also the risk that easily obtained lithium will be used up making batteries. Obtaining it from seawater would be very costly and might require more energy than the energy that would be generated.
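The quoted sensitivity can be reproduced in outline by annualizing the extra capital over a plant's electricity output. The capacity factor, lifetime, and discount rate below are illustrative assumptions rather than values from the cited analysis, so the sketch only shows that a figure of this order is plausible.

```python
def added_lcoe_per_billion(capacity_gw=1.0, capacity_factor=0.85,
                           lifetime_years=30, discount_rate=0.12):
    """Extra LCOE (USD/MWh) from an extra $1 billion of capital cost,
    using a simple capital-recovery-factor annualization."""
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)      # capital recovery factor
    annual_capital_usd = 1e9 * crf
    annual_mwh = capacity_gw * 1e3 * 8760 * capacity_factor
    return annual_capital_usd / annual_mwh

print(f"{added_lcoe_per_billion():.1f} USD/MWh per extra $1 billion of capital")
# ~16.7 USD/MWh with these assumptions, the same order as the figure quoted above
```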
In contrast, renewable levelized cost of energy estimates are substantially lower. For instance, the 2019 levelized cost of energy of solar energy was estimated to be $40–$46/MWh, onshore wind at $29–$56/MWh, and offshore wind at approximately $92/MWh.
However, fusion power may still have a role filling energy gaps left by renewables, depending on how administration priorities for energy and environmental justice influence the market. In the 2020s, socioeconomic studies of fusion that began to consider these factors emerged, and in 2022 EUROFusion launched its Socio-Economic Studies and Prospective Research and Development strands to investigate how such factors might affect commercialization pathways and timetables. Similarly, in April 2023 Japan announced a national strategy to industrialise fusion. Thus, fusion power may work in tandem with other renewable energy sources rather than becoming the primary energy source. In some applications, fusion power could provide the base load, especially if including integrated thermal storage and cogeneration and considering the potential for retrofitting coal plants.
Regulation
As fusion pilot plants move within reach, legal and regulatory issues must be addressed. In September 2020, the United States National Academy of Sciences consulted with private fusion companies to consider a national pilot plant. The following month, the United States Department of Energy, the Nuclear Regulatory Commission (NRC) and the Fusion Industry Association co-hosted a public forum to begin the process. In November 2020, the International Atomic Energy Agency (IAEA) began working with various nations to create safety standards such as dose regulations and radioactive waste handling. In January and March 2021, NRC hosted two public meetings on regulatory frameworks. A public-private cost-sharing approach was endorsed in the 27 December H.R.133 Consolidated Appropriations Act, 2021, which authorized $325 million over five years for a partnership program to build fusion demonstration facilities, with a 100% match from private industry.
Subsequently, the UK Regulatory Horizons Council published a report calling for a fusion regulatory framework by early 2022 in order to position the UK as a global leader in commercializing fusion power. This call was met by the UK government publishing in October 2021 both its Fusion Green Paper and its Fusion Strategy, to regulate and commercialize fusion, respectively. Then, in April 2023, in a decision likely to influence other nuclear regulators, the NRC announced in a unanimous vote that fusion energy would be regulated not as fission but under the same regulatory regime as particle accelerators.
Then, in October 2023 the UK government, in enacting the Energy Act 2023, made the UK the first country to legislate for fusion separately from fission, to support planning and investment, including the UK's planned prototype fusion power plant, STEP, targeted for 2040; the UK is working with Canada and Japan in this regard. Meanwhile, in February 2024 the US House of Representatives passed the Atomic Energy Advancement Act, which includes the Fusion Energy Act, establishing a regulatory framework for fusion energy systems.
Geopolitics
Given the potential of fusion to transform the world's energy industry and to mitigate climate change, fusion science has traditionally been seen as an integral part of peace-building science diplomacy. However, technological developments and private sector involvement have raised concerns over intellectual property, regulatory administration, global leadership, equity, and potential weaponization. These concerns challenge ITER's peace-building role and have led to calls for a global commission. A significant contribution from fusion power to climate change mitigation by 2050 seems unlikely without substantial breakthroughs and the emergence of a space-race mentality, but a contribution by 2100 appears possible, with the extent depending on the type and particularly the cost of technology pathways.
Developments from late 2020 onwards have led to talk of a "new space race" with multiple entrants, pitting the US against China and the UK's STEP FPP, with China now outspending the US and threatening to leapfrog US technology. On 24 September 2020, the United States House of Representatives approved a research and commercialization program. The Fusion Energy Research section incorporated a milestone-based, cost-sharing, public-private partnership program modeled on NASA's COTS program, which launched the commercial space industry. In February 2021, the National Academies published Bringing Fusion to the U.S. Grid, recommending a market-driven, cost-sharing plant for 2035–2040, and the launch of the Congressional Bipartisan Fusion Caucus followed.
In December 2020, an independent expert panel reviewed EUROfusion's design and R&D work on DEMO, and EUROfusion confirmed it was proceeding with its Roadmap to Fusion Energy, beginning the conceptual design of DEMO in partnership with the European fusion community, suggesting an EU-backed machine had entered the race.
In October 2023, the UK-oriented Agile Nations group announced a fusion working group. One month later, the UK and the US announced a bilateral partnership to accelerate fusion energy. Then, in December 2023 at COP28 the US announced a global strategy to commercialize fusion energy. Then, in April 2024, Japan and the US announced a similar partnership, and in May of the same year the G7 announced a G7 Working Group on Fusion Energy to promote international collaborations to accelerate the development of commercial fusion energy, to promote R&D between countries, and to rationalize fusion regulation. Later the same year, the US partnered with the IAEA to launch the Fusion Energy Solutions Taskforce, to collaboratively crowdsource ideas to accelerate commercial fusion energy, in line with the US COP28 statement.
Specifically to resolve the tritium supply problem, in February 2024 the UK (UKAEA) and Canada (Canadian Nuclear Laboratories) announced an agreement by which Canada could refurbish its CANDU (Canada deuterium-uranium) tritium-generating heavy-water nuclear plants and even build new ones, guaranteeing a supply of tritium into the 2070s, while the UKAEA would test breeder materials and simulate how tritium could be captured, purified, and injected back into the fusion reaction.
In 2024, both South Korea and Japan announced major initiatives to accelerate their national fusion strategies by building electricity-generating public-private fusion plants, aiming to begin operations in the 2040s and 2030s respectively.
Advantages
Fusion power promises to provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel (primarily deuterium) exists abundantly in the ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this is only about 0.015%, seawater is plentiful and easy to access, implying that fusion could supply the world's energy needs for millions of years.
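The abundance figure can be checked directly, along with the absolute number of deuterium atoms it implies per litre of water (treating seawater as ordinary water for the purpose of the estimate):

```python
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.015      # g/mol
D_PER_H = 1 / 6500.0             # deuterium fraction of hydrogen atoms

print(f"atom fraction: {D_PER_H * 100:.4f} %")                      # ~0.0154 %

water_molecules_per_litre = 1000.0 / WATER_MOLAR_MASS_G * AVOGADRO  # ~3.3e25
hydrogen_atoms_per_litre = 2 * water_molecules_per_litre
deuterium_atoms_per_litre = hydrogen_atoms_per_litre * D_PER_H
print(f"deuterium atoms per litre: {deuterium_atoms_per_litre:.1e}")  # ~1e22
```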
First generation fusion plants are expected to use the deuterium-tritium fuel cycle. This will require the use of lithium for breeding of the tritium. It is not known for how long global lithium supplies will suffice to supply this need as well as those of the battery and metallurgical industries. It is expected that second generation plants will move on to the more formidable deuterium-deuterium reaction. The deuterium-helium-3 reaction is also of interest, but the light helium isotope is practically non-existent on Earth. It is thought to exist in useful quantities in the lunar regolith, and is abundant in the atmospheres of the gas giant planets.
Fusion power could be used for so-called "deep space" propulsion within the solar system and for interstellar space exploration where solar energy is not available, including via antimatter-fusion hybrid drives.
Disadvantages
Fusion power has a number of disadvantages. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, such reactors share many of the drawbacks of fission reactors. This includes the production of large quantities of radioactive waste and serious radiation damage to reactor components. Additionally, naturally occurring tritium is extremely rare. While the hope is that fusion reactors can breed their own tritium, tritium self-sufficiency is extremely challenging, not least because tritium is difficult to contain (tritium has leaked from 48 of 65 nuclear sites in the US). In any case the reserve and start-up tritium inventory requirements are likely to be unacceptably large.
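The 80 percent figure follows directly from the energy split of the D-T reaction, in which the 14.1 MeV neutron carries most of the roughly 17.6 MeV released:

```python
NEUTRON_MEV = 14.1   # kinetic energy of the D-T neutron
ALPHA_MEV = 3.5      # kinetic energy of the alpha particle
total_mev = NEUTRON_MEV + ALPHA_MEV

print(f"neutron share of D-T fusion energy: {NEUTRON_MEV / total_mev:.0%}")  # ~80%
```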
If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue is eliminated and neutron radiation damage may be reduced. However, the probabilities of deuterium-deuterium reactions are about 20 times lower than for deuterium-tritium. Additionally, the temperature needed is about 3 times higher than for deuterium-tritium (see cross section). The higher temperatures and lower reaction rates thus significantly complicate the engineering challenges. In any case, other drawbacks remain, for instance reactors requiring only deuterium fueling will have greatly enhanced nuclear weapons proliferation potential.
History
Early experiments
The first machine to achieve controlled thermonuclear fusion was a pinch machine at Los Alamos National Laboratory called Scylla I at the start of 1958. The team that achieved it was led by a British scientist named James Tuck and included a young Marshall Rosenbluth. Tuck had been involved in the Manhattan Project, but had switched to working on fusion in the early 1950s. Along with Lyman Spitzer, he applied for funding for the project as part of a White House-sponsored contest to develop a fusion reactor. The previous year, 1957, the British had claimed that they had achieved thermonuclear fusion reactions on the ZETA pinch machine. However, it turned out that the neutrons they had detected were from beam-target interactions, not fusion, and they withdrew the claim.
Scylla I was a classified machine at the time, so the achievement was hidden from the public. A traditional Z-pinch passes a current down the center of a plasma, which makes a magnetic force around the outside which squeezes the plasma to fusion conditions. Scylla I was a θ-pinch, which used deuterium to pass a current around the outside of its cylinder to create a magnetic force in the center. After the success of Scylla I, Los Alamos went on to build multiple pinch machines over the next few years.
Spitzer continued his stellarator research at Princeton. While fusion did not immediately transpire, the effort led to the creation of the Princeton Plasma Physics Laboratory.
First tokamak
In the early 1950s, Soviet physicists I.E. Tamm and A.D. Sakharov developed the concept of the tokamak, combining a low-power pinch device with a low-power stellarator.
A.D. Sakharov's group constructed the first tokamaks, achieving the first quasistationary fusion reaction.
Over time, the "advanced tokamak" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, operation in the "H-mode" island of increased stability, and the compact tokamak, with the magnets on the inside of the vacuum chamber.
First inertial confinement experiments
Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory (LLNL), shortly after the invention of the laser in 1960. Inertial confinement fusion experiments using lasers began as early as 1965. Several laser systems were built at LLNL, including the Argus, the Cyclops, the Janus, the long path, the Shiva laser, and the Nova.
Laser advances included frequency-tripling crystals that transformed infrared laser beams into ultraviolet beams and "chirping", which changed a single wavelength into a full spectrum that could be amplified and then reconstituted into one frequency. Laser research cost over one billion dollars in the 1980s.
1980s
The Tore Supra, JET, T-15, and JT-60 tokamaks were built in the 1980s. In 1984, Martin Peng of ORNL proposed the spherical tokamak with a much smaller radius. It used a single large conductor in the center, with magnets as half-rings off of this conductor. The aspect ratio fell to as low as 1.2. Peng's advocacy caught the interest of Derek Robinson, who built the Small Tight Aspect Ratio Tokamak (START).
1990s
In 1991, the Preliminary Tritium Experiment at the Joint European Torus achieved the world's first controlled release of fusion power.
In 1996, Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, totaling 280 MJ of injected and extracted energy.
In 1997, JET produced a peak of 16.1 MW of fusion power (65% of heat to plasma), with fusion power of over 10 MW sustained for over 0.5 sec.
2000s
"Fast ignition" saved power and moved ICF into the race for energy production.
In 2006, China's Experimental Advanced Superconducting Tokamak (EAST) test reactor was completed. It was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields.
In March 2009, the laser-driven ICF NIF became operational.
In the 2000s, privately backed fusion companies entered the race, including TAE Technologies, General Fusion, and Tokamak Energy.
2010s
Private and public research accelerated in the 2010s. General Fusion developed plasma injector technology and Tri Alpha Energy tested its C-2U device. The French Laser Mégajoule began operation. In 2013, NIF achieved net energy gain in the very limited sense that the energy released exceeded the energy absorbed by the hot spot at the core of the collapsed target, rather than by the target as a whole.
In 2014, Phoenix Nuclear Labs sold a high-yield neutron generator that could sustain 5×10¹¹ deuterium fusion reactions per second over a 24-hour period.
In 2015, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic field coils that it claimed could produce comparable magnetic field strength in a smaller configuration than other designs.
In October, researchers at the Max Planck Institute of Plasma Physics in Greifswald, Germany, completed building the largest stellarator to date, the Wendelstein 7-X (W7-X). The W7-X stellarator began Operational phase 1 (OP1.1) on 10 December 2015, successfully producing helium plasma. The objective was to test vital systems and understand the machine's physics. By February 2016, hydrogen plasma was achieved, with temperatures reaching up to 100 million Kelvin. The initial tests used five graphite limiters. After over 2,000 pulses and achieving significant milestones, OP1.1 concluded on 10 March 2016. An upgrade followed, and OP1.2 in 2017 aimed to test an uncooled divertor. By June 2018, record temperatures were reached. W7-X concluded its first campaigns with limiter and island divertor tests, achieving notable advancements by the end of 2018. It soon produced helium and hydrogen plasmas lasting up to 30 minutes.
In 2017, Helion Energy's fifth-generation plasma machine went into operation. The UK's Tokamak Energy's ST40 generated "first plasma". The next year, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize MIT's ARC technology.
2020s
In January 2021, SuperOx announced the commercialization of a new superconducting wire with more than 700 A/mm² current capability.
TAE Technologies announced results for its Norman device, holding a temperature of about 60 MK for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices.
In October, Oxford-based First Light Fusion revealed its projectile fusion project, which fires an aluminum disc at a fusion target, accelerated by a 9 mega-amp electrical pulse to hypersonic speeds. The resulting fusion generates neutrons whose energy is captured as heat.
On November 8, in an invited talk to the 63rd Annual Meeting of the APS Division of Plasma Physics, the National Ignition Facility claimed to have triggered fusion ignition in the laboratory on August 8, 2021, for the first time in the 60+ year history of the ICF program. The shot yielded 1.3 MJ of fusion energy, more than eight times the yield of tests done in the spring of 2021. NIF estimated that 230 kJ of energy reached the fuel capsule, which resulted in an almost six-fold energy output from the capsule. A researcher from Imperial College London stated that the majority of the field agreed that ignition had been demonstrated.
In November 2021, Helion Energy reported receiving $500 million in Series E funding for its seventh-generation Polaris device, designed to demonstrate net electricity production, with an additional $1.7 billion of commitments tied to specific milestones, while Commonwealth Fusion Systems raised an additional $1.8 billion in Series B funding to construct and operate its SPARC tokamak, the single largest investment in any private fusion company.
In April 2022, First Light announced that their hypersonic projectile fusion prototype had produced neutrons compatible with fusion. Their technique electromagnetically fires projectiles at Mach 19 at a caged fuel pellet. The deuterium fuel is compressed at Mach 204, reaching pressure levels of 100 TPa.
On December 13, 2022, the US Department of Energy reported that researchers at the National Ignition Facility had achieved a net energy gain from a fusion reaction. The reaction of hydrogen fuel at the facility produced about 3.15 MJ of energy while consuming 2.05 MJ of input. However, while the fusion reactions may have produced more than 3 megajoules of energy—more than was delivered to the target—NIF's 192 lasers consumed 322 MJ of grid energy in the conversion process.
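The difference between the gain at the target and the facility's overall energy balance can be made explicit with the figures quoted above:

```python
fusion_output_mj = 3.15     # energy released by the fusion reactions
laser_on_target_mj = 2.05   # laser energy delivered to the target
grid_energy_mj = 322.0      # electricity drawn from the grid to fire the 192 beams

print(f"target gain:    {fusion_output_mj / laser_on_target_mj:.2f}")  # ~1.54
print(f"wall-plug gain: {fusion_output_mj / grid_energy_mj:.3f}")      # ~0.010
```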
In May 2023, the United States Department of Energy (DOE) provided a grant of $46 million to eight companies across seven states to support fusion power plant design and research efforts. This funding, under the Milestone-Based Fusion Development Program, aligns with objectives to demonstrate pilot-scale fusion within a decade and to develop fusion as a carbon-neutral energy source by 2050. The granted companies are tasked with addressing the scientific and technical challenges to create viable fusion pilot plant designs in the next 5–10 years. The recipient firms include Commonwealth Fusion Systems, Focused Energy Inc., Princeton Stellarators Inc., Realta Fusion Inc., Tokamak Energy Inc., Type One Energy Group, Xcimer Energy Inc., and Zap Energy Inc.
In December 2023, the largest and most advanced tokamak JT-60SA was inaugurated in Naka, Japan. The reactor is a joint project between Japan and the European Union. The reactor had achieved its first plasma in October 2023. Subsequently, South Korea's fusion reactor project, the Korean Superconducting Tokamak Advanced Research, successfully operated for 102 seconds in a high-containment mode (H-mode) containing high ion temperatures of more than 100 million degrees in plasma tests conducted from December 2023 to February 2024.
Records
Fusion records continue to advance.
See also
COLEX process, for production of Li-6
Fusion ignition
High beta fusion reactor
Inertial electrostatic confinement
Levitated dipole
List of fusion experiments
Magnetic mirror
Starship
References
Bibliography
Nuttall, William J., Konishi, Satoshi, Takeda, Shutaro, and Webbe-Wood, David (2020). Commercialising Fusion Energy: How Small Businesses are Transforming Big Science. IOP Publishing.
Further reading
Oreskes, Naomi, "Fusion's False Promise: Despite a recent advance, nuclear fusion is not the solution to the climate crisis", Scientific American, vol. 328, no. 6 (June 2023), p. 86.
External links
Fusion Device Information System
Fusion Energy Base
Fusion Industry Association
Princeton Satellite Systems News
U.S. Fusion Energy Science Program
Sustainable energy | 0.765057 | 0.998775 | 0.76412 |
Atmospheric circulation | Atmospheric circulation is the large-scale movement of air and together with ocean circulation is the means by which thermal energy is redistributed on the surface of the Earth. The Earth's atmospheric circulation varies from year to year, but the large-scale structure of its circulation remains fairly constant. The smaller-scale weather systems – mid-latitude depressions, or tropical convective cells – occur chaotically, and long-range weather predictions of those cannot be made beyond ten days in practice, or a month in theory (see chaos theory and the butterfly effect).
The Earth's weather is a consequence of its illumination by the Sun and the laws of thermodynamics. The atmospheric circulation can be viewed as a heat engine driven by the Sun's energy and whose energy sink, ultimately, is the blackness of space. The work produced by that engine causes the motion of the masses of air, and in that process it redistributes the energy absorbed by the Earth's surface near the tropics to the latitudes nearer the poles, and thence to space.
The large-scale atmospheric circulation "cells" shift polewards in warmer periods (for example, interglacials compared to glacials), but remain largely constant as they are, fundamentally, a property of the Earth's size, rotation rate, heating and atmospheric depth, all of which change little. Over very long time periods (hundreds of millions of years), a tectonic uplift can significantly alter their major elements, such as the jet stream, and plate tectonics may shift ocean currents. During the extremely hot climates of the Mesozoic, a third desert belt may have existed at the Equator.
Latitudinal circulation features
The wind belts girdling the planet are organised into three cells in each hemisphere—the Hadley cell, the Ferrel cell, and the polar cell. Those cells exist in both the northern and southern hemispheres. The vast bulk of the atmospheric motion occurs in the Hadley cell. High-pressure systems acting on the Earth's surface are balanced by low-pressure systems elsewhere, so that, overall, the forces acting on the surface are in balance.
The horse latitudes are an area of high pressure at about 30° to 35° latitude (north or south) where winds diverge into the adjacent zones of Hadley or Ferrel cells, and which typically have light winds, sunny skies, and little precipitation.
Hadley cell
The atmospheric circulation pattern that George Hadley described was an attempt to explain the trade winds. The Hadley cell is a closed circulation loop which begins at the equator. There, moist air is warmed by the Earth's surface, decreases in density and rises. A similar air mass rising on the other side of the equator forces those rising air masses to move poleward. The rising air creates a low pressure zone near the equator. As the air moves poleward, it cools, becomes denser, and descends at about the 30th parallel, creating a high-pressure area. Once it has descended, the air travels toward the equator along the surface, replacing the air that rose from the equatorial zone and closing the loop of the Hadley cell. The poleward movement of the air in the upper part of the troposphere deviates toward the east, caused by the Coriolis acceleration. At the ground level, however, the movement of the air toward the equator in the lower troposphere deviates toward the west, producing a wind from the east. The winds that flow to the west (from the east, easterly wind) at the ground level in the Hadley cell are called the trade winds.
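The strength of this eastward deflection can be estimated from conservation of angular momentum: an air parcel that leaves the equator at rest relative to the surface must acquire an eastward velocity as it moves poleward. The sketch below neglects friction and pressure-gradient forces, so the speeds it gives are an idealized upper bound rather than observed winds.

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate, rad/s
RADIUS = 6.371e6   # Earth's mean radius, m

def zonal_wind(lat_deg):
    """Eastward wind of a parcel that left the equator at rest,
    conserving absolute angular momentum: u = Omega*a*sin^2(lat)/cos(lat)."""
    lat = math.radians(lat_deg)
    return OMEGA * RADIUS * math.sin(lat) ** 2 / math.cos(lat)

for lat in (10, 20, 30):
    print(f"{lat:2d} deg: {zonal_wind(lat):6.1f} m/s eastward")
# ~134 m/s at 30 deg: much stronger than real winds, which is why friction and
# eddies matter, but it shows the sense and scale of the deflection.
```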
Though the Hadley cell is described as located at the equator, it shifts northerly (to higher latitudes) in June and July and southerly (toward lower latitudes) in December and January, as a result of the Sun's heating of the surface. The zone where the greatest heating takes place is called the "thermal equator". As the southern hemisphere's summer is in December to March, the movement of the thermal equator to higher southern latitudes takes place then.
The Hadley system provides an example of a thermally direct circulation. The power of the Hadley system, considered as a heat engine, is estimated at 200 terawatts.
Ferrel cell
Part of the air rising at 60° latitude diverges at high altitude toward the poles and creates the polar cell. The rest moves toward the equator, where it collides at 30° latitude with the high-level air of the Hadley cell. There it subsides and strengthens the high pressure ridges beneath. A large part of the energy that drives the Ferrel cell is provided by the polar and Hadley cells circulating on either side, which drag the air of the Ferrel cell with them.
The Ferrel cell, theorized by William Ferrel (1817–1891), is, therefore, a secondary circulation feature, whose existence depends upon the Hadley and polar cells on either side of it. It might be thought of as an eddy created by the Hadley and polar cells.
The air of the Ferrel cell that descends at 30° latitude returns poleward at the ground level, and as it does so it deviates toward the east. In the upper atmosphere of the Ferrel cell, the air moving toward the equator deviates toward the west. Both of those deviations, as in the case of the Hadley and polar cells, are driven by conservation of angular momentum. As a result, just as the easterly Trade Winds are found below the Hadley cell, the Westerlies are found beneath the Ferrel cell.
The Ferrel cell is weak, because it has neither a strong source of heat nor a strong sink, so the airflow and temperatures within it are variable. For this reason, the mid-latitudes are sometimes known as the "zone of mixing." The Hadley and polar cells are truly closed loops; the Ferrel cell is not, and the telling point is in the Westerlies, which are more formally known as "the Prevailing Westerlies." The easterly Trade Winds and the polar easterlies have nothing over which to prevail, as their parent circulation cells are strong enough and face few obstacles in the form of massive terrain features or high-pressure zones. The weaker Westerlies of the Ferrel cell, however, can be disrupted. The local passage of a cold front may change the wind direction in a matter of minutes, and frequently does. As a result, at the surface, winds can vary abruptly in direction. But the winds above the surface, where they are less disrupted by terrain, are essentially westerly. A low-pressure zone at 60° latitude that moves toward the equator, or a high-pressure zone at 30° latitude that moves poleward, will accelerate the Westerlies of the Ferrel cell. A strong high moving poleward may bring westerly winds for days.
The Ferrel system acts as a heat pump with a coefficient of performance of 12.1, consuming kinetic energy from the Hadley and polar systems at an approximate rate of 275 terawatts.
Polar cell
The polar cell is a simple system with strong convection drivers. Though cool and dry relative to equatorial air, the air masses at the 60th parallel are still sufficiently warm and moist to undergo convection and drive a thermal loop. At the 60th parallel, the air rises to the tropopause (about 8 km at this latitude) and moves poleward. As it does so, the upper-level air mass deviates toward the east. When the air reaches the polar areas, it has cooled by radiation to space and is considerably denser than the underlying air. It descends, creating a cold, dry high-pressure area. At the polar surface level, the mass of air is driven away from the pole toward the 60th parallel, replacing the air that rose there, and the polar circulation cell is complete. As the air at the surface moves toward the equator, it deviates westwards, again as a result of the Coriolis effect. The surface airflows are called the polar easterlies, flowing from northeast to southwest near the north pole and from southeast to northwest near the south pole.
The outflow of air mass from the cell creates harmonic waves in the atmosphere known as Rossby waves. These ultra-long waves determine the path of the polar jet stream, which travels within the transitional zone between the tropopause and the Ferrel cell. Acting as a heat sink, the polar cell absorbs the abundant heat transported from the equator toward the polar regions.
The polar cell, terrain, and katabatic winds in Antarctica can create very cold conditions at the surface, for instance the lowest temperature recorded on Earth: −89.2 °C at Vostok Station in Antarctica, measured in 1983.
Contrast between cells
The Hadley cell and the polar cell are similar in that they are thermally direct; in other words, they exist as a direct consequence of surface temperatures. Their thermal characteristics drive the weather in their domain. The sheer volume of energy that the Hadley cell transports, and the depth of the heat sink contained within the polar cell, ensures that transient weather phenomena not only have negligible effect on the systems as a whole, but — except under unusual circumstances — they do not form. The endless chain of passing highs and lows which is part of everyday life for mid-latitude dwellers, under the Ferrel cell at latitudes between 30 and 60° latitude, is unknown above the 60th and below the 30th parallels. There are some notable exceptions to this rule; over Europe, unstable weather extends to at least the 70th parallel north.
Longitudinal circulation features
While the Hadley, Ferrel, and polar cells (whose axes are oriented along parallels or latitudes) are the major features of global heat transport, they do not act alone. Temperature differences also drive a set of circulation cells, whose axes of circulation are longitudinally oriented. This atmospheric motion is known as zonal overturning circulation.
Latitudinal circulation is a result of the highest solar radiation per unit area (solar intensity) falling on the tropics. The solar intensity decreases as the latitude increases, reaching essentially zero at the poles. Longitudinal circulation, however, is a result of the heat capacity of water, its absorptivity, and its mixing. Water absorbs more heat than does the land, but its temperature does not rise as greatly as does the land. As a result, temperature variations on land are greater than on water.
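The latitude dependence of solar intensity can be illustrated with the simple geometric relation that, at equinox, the noon flux on a horizontal surface falls off with the cosine of latitude; the sketch below ignores atmospheric absorption, day length, and axial tilt.

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, flux on a surface facing the Sun directly

def noon_equinox_insolation(lat_deg):
    """Top-of-atmosphere noon flux on a horizontal surface at equinox."""
    return SOLAR_CONSTANT * max(0.0, math.cos(math.radians(lat_deg)))

for lat in (0, 30, 60, 90):
    print(f"{lat:2d} deg: {noon_equinox_insolation(lat):7.1f} W/m^2")
# 1361 at the equator, about half that at 60 degrees, zero at the pole.
```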
The Hadley, Ferrel, and polar cells operate at the largest scale of thousands of kilometers (synoptic scale). The longitudinal circulation can also act on this scale of oceans and continents, and this effect is seasonal or even decadal. Warm air rises over the equatorial, continental, and western Pacific Ocean regions. When it reaches the tropopause, it cools and subsides in a region of relatively cooler water mass.
The Pacific Ocean cell plays a particularly important role in Earth's weather. This entirely ocean-based cell comes about as the result of a marked difference in the surface temperatures of the western and eastern Pacific. Under ordinary circumstances, the western Pacific waters are warm, and the eastern waters are cool. The process begins when strong convective activity over equatorial East Asia and subsiding cool air off South America's west coast create a wind pattern which pushes Pacific water westward and piles it up in the western Pacific. (Water levels in the western Pacific are about 60 cm higher than in the eastern Pacific.)
The daily (diurnal) longitudinal effects are at the mesoscale (a horizontal range of 5 to several hundred kilometres). During the day, air warmed by the relatively hotter land rises, and as it does so it draws a cool breeze from the sea that replaces the risen air. At night, the relatively warmer water and cooler land reverses the process, and a breeze from the land, of air cooled by the land, is carried offshore by night.
Walker circulation
The Pacific cell is of such importance that it has been named the Walker circulation after Sir Gilbert Walker, an early-20th-century director of British observatories in India, who sought a means of predicting when the monsoon winds of India would fail. While he was never successful in doing so, his work led him to the discovery of a link between the periodic pressure variations in the Indian Ocean, and those between the eastern and western Pacific, which he termed the "Southern Oscillation".
The movement of air in the Walker circulation affects the loops on either side. Under normal circumstances, the weather behaves as expected. But every few years, the winters become unusually warm or unusually cold, or the frequency of hurricanes increases or decreases, and the pattern sets in for an indeterminate period.
The Walker Cell plays a key role in this and in the El Niño phenomenon. If convective activity slows in the Western Pacific for some reason (this reason is not currently known), the climates of areas adjacent to the Western Pacific are affected. First, the upper-level westerly winds fail. This cuts off the source of returning, cool air that would normally subside at about 30° south latitude, and therefore the air returning as surface easterlies ceases. There are two consequences. Warm water that had been piled up in the western Pacific by the past easterly winds surges back into the eastern Pacific, since there is no longer a surface wind to hold it in the west. This and the corresponding effects of the Southern Oscillation result in long-term unseasonable temperatures and precipitation patterns in North and South America, Australia, and Southeast Africa, and the disruption of ocean currents.
Meanwhile, in the Atlantic, fast-blowing upper level Westerlies of the Hadley cell form, which would ordinarily be blocked by the Walker circulation and unable to reach such intensities. These winds disrupt the tops of nascent hurricanes and greatly diminish the number which are able to reach full strength.
El Niño – Southern Oscillation
El Niño and La Niña are opposite surface temperature anomalies of the Southern Pacific, which heavily influence the weather on a large scale. In the case of El Niño, warm surface water approaches the coasts of South America which results in blocking the upwelling of nutrient-rich deep water. This has serious impacts on the fish populations.
In the La Niña case, the convective cell over the western Pacific strengthens inordinately, resulting in colder than normal winters in North America and a more robust cyclone season in South-East Asia and Eastern Australia. There is also an increased upwelling of deep cold ocean waters and more intense uprising of surface air near South America, resulting in increasing numbers of drought occurrences, although fishermen reap benefits from the more nutrient-filled eastern Pacific waters.
See also
Brewer–Dobson circulation
Prevailing winds
References
External links
Animation showing global cloud circulation for one month based on weather satellite images
Air-sea interactions and Ocean Circulation patterns on Thailand's Government weather department
Contact force
A contact force is any force that occurs as a result of two objects making contact with each other. Contact forces are very common and are responsible for most visible interactions between macroscopic collections of matter. Pushing a car or kicking a ball are some of the everyday examples where contact forces are at work. In the first case the force is continuously applied to the car by a person, while in the second case the force is delivered in a short impulse.
Contact forces are often decomposed into orthogonal components, one perpendicular to the surface(s) in contact called the normal force, and one parallel to the surface(s) in contact, called the friction force.
Not all forces are contact forces; for example, the weight of an object is the force between the object and the Earth, even though the two do not need to make contact. Gravitational forces, electrical forces and magnetic forces are body forces and can exist without contact occurring.
Origin of contact forces
The microscopic origin of contact forces is diverse. Normal force is directly a result of the Pauli exclusion principle and not a true force per se: Everyday objects do not actually touch each other; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration. On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of the Pauli exclusion principle, but also of the fundamental forces of nature: Cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate due to the nuclear forces.
As for friction, it is a result of both microscopic adhesion and chemical bond formation due to the electromagnetic force, and of microscopic structures stressing into each other; in the latter phenomena, in order to allow motion, the microscopic structures must either slide one above the other, or must acquire enough energy to break one another. Thus the force acting against motion is a combination of the normal force and of the force required to widen microscopic cracks within matter; the latter force is again due to electromagnetic interaction. Additionally, strain is created inside matter, and this strain is due to a combination of electromagnetic interactions (as electrons are attracted to nuclei and repelled from each other) and of the Pauli exclusion principle, the latter working similarly to the case of the normal force.
See also
Non-contact force
Body force
Surface force
Action at a distance (physics)
Spring force
References
Force
Entropic force
In physics, an entropic force acting in a system is an emergent phenomenon resulting from the entire system's statistical tendency to increase its entropy, rather than from a particular underlying force on the atomic scale.
Mathematical formulation
In the canonical ensemble, the entropic force F associated with a macrostate partition {X} is given by
F(X₀) = T ∇_X S(X) |_{X₀}
where T is the temperature, S(X) is the entropy associated with the macrostate X, and X₀ is the present macrostate.
Examples
Pressure of an ideal gas
The internal energy of an ideal gas depends only on its temperature, and not on the volume of its containing box, so it is not an energy effect that tends to increase the volume of the box as gas pressure does. This implies that the pressure of an ideal gas has an entropic origin.
What is the origin of such an entropic force? The most general answer is that the effect of thermal fluctuations tends to bring a thermodynamic system toward a macroscopic state that corresponds to a maximum in the number of microscopic states (or micro-states) that are compatible with this macroscopic state. In other words, thermal fluctuations tend to bring a system toward its macroscopic state of maximum entropy.
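As a worked illustration of this statement (a sketch based on the standard textbook argument, not tied to any particular cited study), consider an ideal gas of N particles at temperature T held behind a piston of area A at position x, so that V = Ax. Using the volume dependence of the entropy, S = N k_B ln V + const, the general entropic-force expression yields the ideal-gas pressure directly:

```latex
F = T\,\frac{\partial S}{\partial x}
  = T\,\frac{\partial}{\partial x}\left(N k_B \ln (A x)\right)
  = \frac{N k_B T}{x},
\qquad
P = \frac{F}{A} = \frac{N k_B T}{V}.
```

The pressure thus emerges entirely from the entropy's dependence on volume, with no potential energy involved.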
Brownian motion
The entropic approach to Brownian movement was initially proposed by R. M. Neumann. Neumann derived the entropic force for a particle undergoing three-dimensional Brownian motion using the Boltzmann equation, denoting this force as a diffusional driving force or radial force. In the paper, three example systems are shown to exhibit such a force:
electrostatic system of molten salt,
surface tension, and
elasticity of rubber.
Polymers
A standard example of an entropic force is the elasticity of a freely jointed polymer molecule. For an ideal chain, maximizing its entropy means reducing the distance between its two free ends. Consequently, a force that tends to collapse the chain is exerted by the ideal chain between its two free ends. This entropic force is proportional to the distance between the two ends. The entropic force by a freely jointed chain has a clear mechanical origin and can be computed using constrained Lagrangian dynamics. With regards to biological polymers, there appears to be an intricate link between the entropic force and function. For example, disordered polypeptide segments in the context of the folded regions of the same polypeptide chain have been shown to generate an entropic force that has functional implications.
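The proportionality between force and end-to-end distance can be made explicit with the standard Gaussian (freely jointed) chain estimate; here N is the number of segments and b the segment length, symbols introduced only for this sketch:

```latex
\Omega(\mathbf{r}) \propto \exp\!\left(-\frac{3 r^{2}}{2 N b^{2}}\right),
\qquad
S(\mathbf{r}) = k_B \ln \Omega(\mathbf{r}) = -\frac{3 k_B r^{2}}{2 N b^{2}} + \text{const},
\qquad
\mathbf{F} = T\,\nabla_{\mathbf r} S = -\frac{3 k_B T}{N b^{2}}\,\mathbf{r}.
```

The chain therefore behaves as a Hookean spring of purely entropic origin, with spring constant 3k_BT/(Nb²).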
Hydrophobic force
Another example of an entropic force is the hydrophobic force. At room temperature, it partly originates from the loss of entropy by the 3D network of water molecules when they interact with molecules of dissolved substance. Each water molecule is capable of
donating two hydrogen bonds through the two protons,
accepting two more hydrogen bonds through the two sp3-hybridized lone pairs.
Therefore, water molecules can form an extended three-dimensional network. Introduction of a non-hydrogen-bonding surface disrupts this network. The water molecules rearrange themselves around the surface, so as to minimize the number of disrupted hydrogen bonds. This is in contrast to hydrogen fluoride (which can accept 3 but donate only 1) or ammonia (which can donate 3 but accept only 1), which mainly form linear chains.
If the introduced surface had an ionic or polar nature, there would be water molecules standing upright on 1 (along the axis of an orbital for ionic bond) or 2 (along a resultant polarity axis) of the four sp3 orbitals. These orientations allow easy movement, i.e. degrees of freedom, and thus lower entropy only minimally. But a non-hydrogen-bonding surface with a moderate curvature forces the water molecule to sit tight on the surface, spreading 3 hydrogen bonds tangential to the surface, which then become locked in a clathrate-like basket shape. Water molecules involved in this clathrate-like basket around the non-hydrogen-bonding surface are constrained in their orientation. Thus, any event that would minimize such a surface is entropically favored. For example, when two such hydrophobic particles come very close, the clathrate-like baskets surrounding them merge. This releases some of the water molecules into the bulk of the water, leading to an increase in entropy.
Another related and counter-intuitive example of entropic force is protein folding, which is a spontaneous process and where hydrophobic effect also plays a role. Structures of water-soluble proteins typically have a core in which hydrophobic side chains are buried from water, which stabilizes the folded state. Charged and polar side chains are situated on the solvent-exposed surface where they interact with surrounding water molecules. Minimizing the number of hydrophobic side chains exposed to water is the principal driving force behind the folding process, although formation of hydrogen bonds within the protein also stabilizes protein structure.
Colloids
Entropic forces are important and widespread in the physics of colloids, where they are responsible for the depletion force, and the ordering of hard particles, such as the crystallization of hard spheres, the isotropic-nematic transition in liquid crystal phases of hard rods, and the ordering of hard polyhedra. Because of this, entropic forces can be an important driver of self-assembly.
Entropic forces arise in colloidal systems due to the osmotic pressure that comes from particle crowding. This was first discovered in, and is most intuitive for, colloid-polymer mixtures described by the Asakura–Oosawa model. In this model, polymers are approximated as finite-sized spheres that can penetrate one another, but cannot penetrate the colloidal particles. The inability of the polymers to penetrate the colloids leads to a region around the colloids in which the polymer density is reduced. If the regions of reduced polymer density around two colloids overlap with one another, by means of the colloids approaching one another, the polymers in the system gain an additional free volume that is equal to the volume of the intersection of the reduced density regions. The additional free volume causes an increase in the entropy of the polymers, and drives them to form locally dense-packed aggregates. A similar effect occurs in sufficiently dense colloidal systems without polymers, where osmotic pressure also drives the local dense packing of colloids into a diverse array of structures that can be rationally designed by modifying the shape of the particles. These effects are for anisotropic particles referred to as directional entropic forces.
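The overlap-volume argument above can be evaluated numerically with the Asakura–Oosawa expression, in which the depletion potential equals minus the ideal osmotic pressure of the depletants times the overlap of the two exclusion shells. The sketch below is illustrative only; the function names and the particle sizes and densities are assumptions, not values from any cited study.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def lens_volume(a, d):
    """Overlap volume of two spheres of radius a with centres a distance d apart."""
    d = np.clip(d, 0.0, 2 * a)
    return np.pi * (4 * a + d) * (2 * a - d) ** 2 / 12.0

def ao_depletion_potential(d, R, r_p, n_p, T):
    """Asakura-Oosawa depletion potential (J) between two hard spheres of radius R
    in an ideal-depletant bath (depletant radius r_p, number density n_p)."""
    a = R + r_p                      # radius of the excluded-volume ("halo") sphere
    U = -n_p * kB * T * lens_volume(a, d)
    return np.where(d < 2 * R, np.inf, np.where(d > 2 * a, 0.0, U))

# Illustrative (assumed) numbers: 500 nm colloids, 50 nm depletants, dilute bath.
d = np.linspace(1.0e-6, 1.2e-6, 5)
print(ao_depletion_potential(d, R=0.5e-6, r_p=0.05e-6, n_p=1e20, T=298) / (kB * 298))
```

For the parameters chosen here, the attraction at contact is of order one k_BT, the scale at which depletion effects begin to compete with thermal motion.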
Cytoskeleton
Contractile forces in biological cells are typically driven by molecular motors associated with the cytoskeleton. However, a growing body of evidence shows that contractile forces may also be of entropic origin. The foundational example is the action of microtubule crosslinker Ase1, which localizes to microtubule overlaps in the mitotic spindle. Molecules of Ase1 are confined to the microtubule overlap, where they are free to diffuse one-dimensionally. Analogically to an ideal gas in a container, molecules of Ase1 generate pressure on the overlap ends. This pressure drives the overlap expansion, which results in the contractile sliding of the microtubules. An analogous example was found in the actin cytoskeleton. Here, the actin-bundling protein anillin drives actin contractility in cytokinetic rings.
Controversial examples
Some forces that are generally regarded as conventional forces have been argued to be actually entropic in nature. These theories remain controversial and are the subject of ongoing work. Matt Visser, professor of mathematics at Victoria University of Wellington, NZ in "Conservative Entropic Forces" criticizes selected approaches but generally concludes:
Gravity
In 2009, Erik Verlinde argued that gravity can be explained as an entropic force. He claimed (similarly to Jacobson's result) that gravity is a consequence of the "information associated with the positions of material bodies". This model combines the thermodynamic approach to gravity with Gerard 't Hooft's holographic principle. It implies that gravity is not a fundamental interaction, but an emergent phenomenon.
Other forces
In the wake of the discussion started by Verlinde, entropic explanations for other fundamental forces have been suggested, including Coulomb's law. The same approach was argued to explain dark matter, dark energy and the Pioneer effect.
Links to adaptive behavior
It was argued that causal entropic forces lead to spontaneous emergence of tool use and social cooperation. Causal entropic forces by definition maximize entropy production between the present and future time horizon, rather than just greedily maximizing instantaneous entropy production like typical entropic forces.
A formal simultaneous connection between the mathematical structure of the discovered laws of nature, intelligence and the entropy-like measures of complexity was previously noted in 2000 by Andrei Soklakov in the context of Occam's razor principle.
See also
Colloids
Nanomechanics
Thermodynamics
Abraham–Lorentz force
Entropic gravity
Entropy
Introduction to entropy
Entropic elasticity of an ideal chain
Hawking radiation
Data clustering
Depletion force
Maximal entropy random walk
References
Materials science
Thermodynamic entropy
Soft matter
Exothermic process
In thermodynamics, an exothermic process is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in a form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. explosion heard when burning hydrogen). The term exothermic was first coined by 19th-century French chemist Marcellin Berthelot.
The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat).
Two types of chemical reactions
Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows:
Exothermic
An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change ΔH° is negative". Some examples of exothermic processes are fuel combustion, condensation and nuclear fission, which is used in nuclear power plants to release large amounts of energy.
Endothermic
In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them.
Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy).
Energy release
Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by Q < 0, where Q denotes the heat added to the system.
When the transformation occurs at constant pressure and without exchange of electrical energy, the heat is equal to the enthalpy change, i.e. Q = ΔH,
while at constant volume, according to the first law of thermodynamics, it equals the internal energy change, i.e. Q = ΔU.
In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system.
In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This light that is released can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy.
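As a rough worked example of this net release of energy, the sketch below estimates the heat released by burning a small amount of methane (standard enthalpy of combustion of roughly −890 kJ/mol) and the temperature rise it would produce in a water bath, assuming all the heat is absorbed by the water. The quantities chosen are illustrative assumptions.

```python
# Heat released at constant pressure equals -ΔH of the reaction.
DELTA_H_COMBUSTION_CH4 = -890e3   # J/mol, standard enthalpy of combustion of methane (approx.)
C_WATER = 4.18e3                  # J/(kg*K), specific heat capacity of liquid water

def heat_released(n_mol, delta_h):
    """Heat released to the surroundings (J) for n_mol of an exothermic reaction."""
    return -n_mol * delta_h       # ΔH < 0, so this is positive

def water_temperature_rise(q_joules, water_mass_kg):
    """Temperature rise (K) of a water bath absorbing q_joules of heat."""
    return q_joules / (water_mass_kg * C_WATER)

q = heat_released(0.1, DELTA_H_COMBUSTION_CH4)          # burn 0.1 mol (~1.6 g) of CH4
print(f"Heat released: {q/1e3:.0f} kJ")                  # about 89 kJ
print(f"ΔT of 1 kg water: {water_temperature_rise(q, 1.0):.1f} K")  # about 21 K
```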
Examples
Some examples of exothermic processes are:
Combustion of fuels such as wood, coal and oil/petroleum
The thermite reaction
The reaction of alkali metals and other highly electropositive metals with water
Condensation of rain from water vapor
Mixing water and strong acids or strong bases
The reaction of acids and bases
Dehydration of carbohydrates by sulfuric acid
The setting of cement and concrete
Some polymerization reactions such as the setting of epoxy resin
The reaction of most metals with halogens or oxygen
Nuclear fusion in hydrogen bombs and in stellar cores (to iron)
Nuclear fission of heavy elements
The reaction between zinc and hydrochloric acid
Respiration (breaking down of glucose to release energy in cells)
Implications for chemical reactions
Exothermic chemical reactions are generally more spontaneous than their endothermic counterparts.
In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction.
See also
Calorimetry
Chemical thermodynamics
Differential scanning calorimetry
Endergonic
Endergonic reaction
Exergonic
Exergonic reaction
Endothermic reaction
References
External links
Observe exothermic reactions in a simple experiment
Thermodynamic processes
Chemical thermodynamics
Flywheel
A flywheel is a mechanical device that uses the conservation of angular momentum to store rotational energy, a form of kinetic energy proportional to the product of its moment of inertia and the square of its rotational speed. In particular, assuming the flywheel's moment of inertia is constant (i.e., a flywheel with fixed mass and second moment of area revolving about some fixed axis) then the stored (rotational) energy is directly associated with the square of its rotational speed.
Since a flywheel serves to store mechanical energy for later use, it is natural to consider it as a kinetic energy analogue of an electrical capacitor. Once suitably abstracted, this shared principle of energy storage is described in the generalized concept of an accumulator. As with other types of accumulators, a flywheel inherently smooths sufficiently small deviations in the power output of a system, thereby effectively playing the role of a low-pass filter with respect to the mechanical velocity (angular, or otherwise) of the system. More precisely, a flywheel's stored energy will donate a surge in power output upon a drop in power input and will conversely absorb any excess power input (system-generated power) in the form of rotational energy.
Common uses of a flywheel include smoothing a power output in reciprocating engines, energy storage, delivering energy at higher rates than the source, controlling the orientation of a mechanical system using gyroscopes and reaction wheels, etc. Flywheels are typically made of steel and rotate on conventional bearings; these are generally limited to a maximum revolution rate of a few thousand RPM. High energy density flywheels can be made of carbon fiber composites and employ magnetic bearings, enabling them to revolve at speeds up to 60,000 RPM (1 kHz).
History
The principle of the flywheel is found in the Neolithic spindle and the potter's wheel, as well as circular sharpening stones in antiquity. In the early 11th century, Ibn Bassal pioneered the use of the flywheel in the noria and saqiyah. The use of the flywheel as a general mechanical device to equalize the speed of rotation is, according to the American medievalist Lynn White, recorded in the De diversibus artibus (On various arts) of the German artisan Theophilus Presbyter (ca. 1070–1125) who records applying the device in several of his machines.
In the Industrial Revolution, James Watt contributed to the development of the flywheel in the steam engine, and his contemporary James Pickard used a flywheel combined with a crank to transform reciprocating motion into rotary motion.
Physics
The kinetic energy (or more specifically rotational energy) stored by the flywheel's rotor can be calculated by E = ½Iω², where ω is the angular velocity and I is the moment of inertia of the flywheel about its axis of symmetry. The moment of inertia is a measure of resistance to torque applied on a spinning object (i.e. the higher the moment of inertia, the slower it will accelerate when a given torque is applied). The moment of inertia can be calculated for cylindrical shapes using mass m and radius r. For a solid cylinder it is I = ½mr², for a thin-walled empty cylinder it is approximately I = mr², and for a thick-walled empty cylinder with constant density it is I = ½m(r_outer² + r_inner²).
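A brief numerical sketch of these relations, using a solid steel disc as an example; the dimensions, density and rotational speed are round-number assumptions chosen only for illustration.

```python
import math

def solid_cylinder_inertia(mass_kg, radius_m):
    """Moment of inertia of a solid cylinder about its axis: I = 1/2 m r^2."""
    return 0.5 * mass_kg * radius_m ** 2

def flywheel_energy(inertia, rpm):
    """Rotational kinetic energy E = 1/2 I omega^2, with omega in rad/s."""
    omega = rpm * 2 * math.pi / 60.0
    return 0.5 * inertia * omega ** 2

# Illustrative steel flywheel: 0.5 m radius, 0.2 m thick, density 7850 kg/m^3.
radius, thickness, rho_steel = 0.5, 0.2, 7850.0
mass = rho_steel * math.pi * radius ** 2 * thickness          # ~1233 kg
I = solid_cylinder_inertia(mass, radius)                      # ~154 kg*m^2
print(f"Stored energy at 5000 rpm: {flywheel_energy(I, 5000)/1e6:.1f} MJ")  # ~21 MJ
```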
For a given flywheel design, the kinetic energy is proportional to the ratio of the hoop stress to the material density and to the mass. The specific tensile strength of a flywheel can be defined as σ_t/ρ, the ratio of the material's tensile strength to its density. The flywheel material with the highest specific tensile strength will yield the highest energy storage per unit mass. This is one reason why carbon fiber is a material of interest. For a given design the stored energy is proportional to the hoop stress and the volume.
An electric motor-powered flywheel is common in practice. The output power of the electric motor is approximately equal to the output power of the flywheel and depends on the voltage of the rotor winding, the stator voltage, and the phase angle between the two voltages. Increasing amounts of rotational energy can be stored in the flywheel until the rotor shatters. This happens when the hoop stress within the rotor exceeds the ultimate tensile strength of the rotor material. The tensile (hoop) stress in a thin rotating cylinder can be calculated by σ_t = ρ r² ω², where ρ is the density of the cylinder, r is the radius of the cylinder, and ω is the angular velocity of the cylinder.
Design
A rimmed flywheel has a rim, a hub, and spokes. Calculation of the flywheel's moment of inertia can be more easily analysed by applying various simplifications. One method is to assume the spokes, shaft and hub have zero moments of inertia, and the flywheel's moment of inertia is from the rim alone. Another is to lump the moments of inertia of the spokes, hub and shaft together, estimated as a percentage of the flywheel's moment of inertia, with the majority from the rim, so that I_rim = K·I_flywheel. For example, if the moments of inertia of hub, spokes and shaft are deemed negligible, and the rim's thickness is very small compared to its mean radius R, the radius of rotation of the rim is equal to its mean radius and thus I_rim = m_rim·R².
A shaftless flywheel eliminates the annulus holes, shaft or hub. It has higher energy density than conventional design but requires a specialized magnetic bearing and control system. The specific energy of a flywheel is determined by E/m = K·σ/ρ, in which K is the shape factor, σ the material's tensile strength and ρ the density. While a typical flywheel has a shape factor of 0.3, the shaftless flywheel has a shape factor close to 0.6, out of a theoretical limit of about 1.
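To put the shape-factor relation in perspective, the sketch below evaluates E/m = Kσ/ρ for a high-strength steel and a carbon-fibre composite at the two shape factors mentioned above; the material properties are rounded, assumed values for illustration only.

```python
def specific_energy_wh_per_kg(shape_factor, tensile_strength_pa, density_kg_m3):
    """Theoretical specific energy E/m = K * sigma / rho, converted from J/kg to Wh/kg."""
    return shape_factor * tensile_strength_pa / density_kg_m3 / 3600.0

materials = {
    # name: (tensile strength in Pa, density in kg/m^3) -- illustrative round numbers
    "steel (high strength)": (1.5e9, 7800.0),
    "carbon fibre composite": (3.5e9, 1600.0),
}

for name, (sigma, rho) in materials.items():
    for K, label in [(0.3, "typical rotor"), (0.6, "shaftless rotor")]:
        e = specific_energy_wh_per_kg(K, sigma, rho)
        print(f"{name:24s} K={K:.1f} ({label}): {e:6.1f} Wh/kg")
```

With these assumed properties the composite rotor stores an order of magnitude more energy per unit mass than the steel one, which is the point made above about material selection.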
A superflywheel consists of a solid core (hub) and multiple thin layers of high-strength flexible materials (such as special steels, carbon fiber composites, glass fiber, or graphene) wound around it. Compared to conventional flywheels, superflywheels can store more energy and are safer to operate. In case of failure, a superflywheel does not explode or burst into large shards like a regular flywheel, but instead splits into layers. The separated layers then slow a superflywheel down by sliding against the inner walls of the enclosure, thus preventing any further destruction. Although the exact value of energy density of a superflywheel would depend on the material used, it could theoretically be as high as 1200 Wh (4.4 MJ) per kg of mass for graphene superflywheels. The first superflywheel was patented in 1964 by the Soviet-Russian scientist Nurbei Guilia.
Materials
Flywheels are made from many different materials; the application determines the choice of material. Small flywheels made of lead are found in children's toys. Cast iron flywheels are used in old steam engines. Flywheels used in car engines are made of cast or nodular iron, steel or aluminum. Flywheels made from high-strength steel or composites have been proposed for use in vehicle energy storage and braking systems.
The efficiency of a flywheel is determined by the maximum amount of energy it can store per unit weight. As the flywheel's rotational speed or angular velocity is increased, the stored energy increases; however, the stresses also increase. If the hoop stress surpass the tensile strength of the material, the flywheel will break apart. Thus, the tensile strength limits the amount of energy that a flywheel can store.
In this context, using lead for a flywheel in a child's toy is not efficient; however, the flywheel velocity never approaches its burst velocity because the limit in this case is the pulling-power of the child. In other applications, such as an automobile, the flywheel operates at a specified angular velocity and is constrained by the space it must fit in, so the goal is to maximize the stored energy per unit volume. The material selection therefore depends on the application.
Applications
Flywheels are often used to provide continuous power output in systems where the energy source is not continuous. For example, a flywheel is used to smooth the fast angular velocity fluctuations of the crankshaft in a reciprocating engine. In this case, a crankshaft flywheel stores energy when torque is exerted on it by a firing piston and then returns that energy to the piston to compress a fresh charge of air and fuel. Another example is the friction motor which powers devices such as toy cars. In unstressed and inexpensive cases, to save on cost, the bulk of the mass of the flywheel is toward the rim of the wheel. Pushing the mass away from the axis of rotation heightens rotational inertia for a given total mass.
A flywheel may also be used to supply intermittent pulses of energy at power levels that exceed the abilities of its energy source. This is achieved by accumulating energy in the flywheel over a period of time, at a rate that is compatible with the energy source, and then releasing energy at a much higher rate over a relatively short time when it is needed. For example, flywheels are used in power hammers and riveting machines.
Flywheels can be used to control direction and oppose unwanted motions. Flywheels in this context have a wide range of applications: gyroscopes for instrumentation, ship stability, satellite stabilization (reaction wheel), keeping a toy top spinning (friction motor), stabilizing magnetically-levitated objects (Spin-stabilized magnetic levitation).
Flywheels may also be used as an electric compensator, like a synchronous compensator, that can either produce or sink reactive power without affecting the real power. The purposes of that application are to improve the power factor of the system or to adjust the grid voltage. Typically, the flywheels used in this field are similar in structure and installation to the synchronous motor (but it is called a synchronous compensator or synchronous condenser in this context). There are also some other kinds of compensator using flywheels, such as the single-phase induction machine. The basic ideas are the same: the flywheels are controlled to spin at exactly the frequency to be compensated. For a synchronous compensator, the rotor and stator voltages must also be kept in phase, which is the same as keeping the magnetic field of the rotor and the total magnetic field in phase (in the rotating reference frame).
See also
Accumulator (energy)
Clutch
Diesel rotary uninterruptible power supply
Dual-mass flywheel
Fidget spinner
Flywheel training
List of moments of inertia
References
Further reading
https://pserc.wisc.edu/documents/general_information/presentations/presentations_by_pserc_university_members/heydt_synchronous_mach_sep03.pdf
External links
Flywheel batteries on Interesting Thing of the Day
"Darwin-made, outback-tested energy storage system to be used in remote Africa", Renew Economy—Flywheel-based microgrid stabilization technology
Articles containing video clips
Mathematical descriptions of the electromagnetic field
There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. In this article, several approaches are discussed, although the equations are in terms of electric and magnetic fields, potentials, and charges with currents, generally speaking.
Vector field approach
The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as (electric field) and (magnetic field).
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
Maxwell's equations in the vector field approach
The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by the Maxwell–Heaviside equations:
Maxwell's equations (vector fields)
Gauss's law: ∇ · E = ρ/ε0
Gauss's law for magnetism: ∇ · B = 0
Faraday's law of induction: ∇ × E = −∂B/∂t
Ampère–Maxwell law: ∇ × B = μ0J + μ0ε0 ∂E/∂t
where ρ is the charge density, which can (and often does) depend on time and position, ε0 is the electric constant, μ0 is the magnetic constant, and J is the current per unit area, also a function of time and position. The equations take this form with the International System of Quantities.
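A standard consistency check that follows from the equations as listed above (included here as a sketch): taking the divergence of the Ampère–Maxwell law and substituting Gauss's law yields the continuity equation for electric charge.

```latex
\nabla\cdot(\nabla\times\mathbf{B}) = 0
\;\Longrightarrow\;
0 = \mu_0\,\nabla\cdot\mathbf{J} + \mu_0\varepsilon_0\,\frac{\partial}{\partial t}(\nabla\cdot\mathbf{E})
  = \mu_0\left(\nabla\cdot\mathbf{J} + \frac{\partial \rho}{\partial t}\right)
\;\Longrightarrow\;
\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{J} = 0 .
```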
When dealing with only nondispersive isotropic linear materials, Maxwell's equations are often modified to ignore bound charges by replacing the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. For some materials that have more complex responses to electromagnetic fields, these properties can be represented by tensors, with time-dependence related to the material's ability to respond to rapid field changes (dispersion (optics), Green–Kubo relations), and possibly also field dependencies representing nonlinear and/or nonlocal material responses to large amplitude fields (nonlinear optics).
Potential field approach
Many times in the use and calculation of electric and magnetic fields, the approach used first computes an associated potential: the electric potential, φ, for the electric field, and the magnetic vector potential, A, for the magnetic field. The electric potential is a scalar field, while the magnetic potential is a vector field. This is why sometimes the electric potential is called the scalar potential and the magnetic potential is called the vector potential. These potentials can be used to find their associated fields as follows:
E = −∇φ − ∂A/∂t
B = ∇ × A
Maxwell's equations in potential formulation
These relations can be substituted into Maxwell's equations to express the latter in terms of the potentials. Faraday's law and Gauss's law for magnetism (the homogeneous equations) turn out to be identically true for any potentials. This is because of the way the fields are expressed as gradients and curls of the scalar and vector potentials. The homogeneous equations in terms of these potentials involve the divergence of the curl and the curl of the gradient , which are always zero. The other two of Maxwell's equations (the inhomogeneous equations) are the ones that describe the dynamics in the potential formulation.
These equations taken together are as powerful and complete as Maxwell's equations. Moreover, the problem has been reduced somewhat, as the electric and magnetic fields together had six components to solve for. In the potential formulation, there are only four components: the electric potential and the three components of the vector potential. However, the equations are messier than Maxwell's equations using the electric and magnetic fields.
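For reference, substituting E = −∇φ − ∂A/∂t and B = ∇ × A into Gauss's law and the Ampère–Maxwell law gives the two inhomogeneous equations in the potential form referred to above (standard SI form, shown here as a sketch):

```latex
\nabla^{2}\varphi + \frac{\partial}{\partial t}\left(\nabla\cdot\mathbf{A}\right) = -\frac{\rho}{\varepsilon_0},
\qquad
\left(\nabla^{2}\mathbf{A} - \mu_0\varepsilon_0\frac{\partial^{2}\mathbf{A}}{\partial t^{2}}\right)
 - \nabla\!\left(\nabla\cdot\mathbf{A} + \mu_0\varepsilon_0\frac{\partial\varphi}{\partial t}\right)
 = -\mu_0\,\mathbf{J}.
```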
Gauge freedom
These equations can be simplified by taking advantage of the fact that the electric and magnetic fields are physically meaningful quantities that can be measured; the potentials are not. There is a freedom to constrain the form of the potentials provided that this does not affect the resultant electric and magnetic fields, called gauge freedom. Specifically for these equations, for any choice of a twice-differentiable scalar function of position and time λ, if (φ, A) is a solution for a given system, then so is another pair of potentials given by:
φ′ = φ − ∂λ/∂t
A′ = A + ∇λ
This freedom can be used to simplify the potential formulation. Either of two such scalar functions is typically chosen: the Coulomb gauge and the Lorenz gauge.
Coulomb gauge
The Coulomb gauge is chosen in such a way that ∇ · A = 0, which corresponds to the case of magnetostatics. In terms of λ, this means that it must satisfy the equation ∇²λ = −∇ · A.
This choice of function results in the following formulation of Maxwell's equations:
∇²φ = −ρ/ε0
∇²A − μ0ε0 ∂²A/∂t² = −μ0J + μ0ε0 ∇(∂φ/∂t)
Several features about Maxwell's equations in the Coulomb gauge are as follows. Firstly, solving for the electric potential is very easy, as the equation is a version of Poisson's equation. Secondly, solving for the magnetic vector potential is particularly difficult. This is the big disadvantage of this gauge. The third thing to note, and something that is not immediately obvious, is that the electric potential changes instantly everywhere in response to a change in conditions in one locality.
For instance, if a charge is moved in New York at 1 pm local time, then a hypothetical observer in Australia who could measure the electric potential directly would measure a change in the potential at 1 pm New York time. This seemingly violates causality in special relativity, i.e. the impossibility of information, signals, or anything travelling faster than the speed of light. The resolution to this apparent problem lies in the fact that, as previously stated, no observers can measure the potentials; they measure the electric and magnetic fields. So, the combination of ∇φ and ∂A/∂t used in determining the electric field restores the speed limit imposed by special relativity for the electric field, making all observable quantities consistent with relativity.
Lorenz gauge condition
A gauge that is often used is the Lorenz gauge condition. In this, the scalar function λ is chosen such that the potentials satisfy ∇ · A + μ0ε0 ∂φ/∂t = 0,
meaning that λ must satisfy the equation ∇²λ − μ0ε0 ∂²λ/∂t² = −(∇ · A + μ0ε0 ∂φ/∂t).
The Lorenz gauge results in the following form of Maxwell's equations:
∇²φ − μ0ε0 ∂²φ/∂t² = −ρ/ε0
∇²A − μ0ε0 ∂²A/∂t² = −μ0J
The operator ∇² − μ0ε0 ∂²/∂t² is called the d'Alembertian and is commonly written □ (some authors write □²). These equations are inhomogeneous versions of the wave equation, with the terms on the right side of the equation serving as the source functions for the wave. As with any wave equation, these equations lead to two types of solution: advanced potentials (which are related to the configuration of the sources at future points in time), and retarded potentials (which are related to the past configurations of the sources); the former are usually disregarded where the field is to be analyzed from a causality perspective.
As pointed out above, the Lorenz gauge is no more valid than any other gauge since the potentials cannot be directly measured; however, the Lorenz gauge has the advantage of the equations being Lorentz invariant.
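In the Lorenz gauge the wave equations above admit the familiar retarded-potential solutions (standard SI form, given here as a sketch), where t_r = t − |r − r′|/c is the retarded time:

```latex
\varphi(\mathbf{r},t) = \frac{1}{4\pi\varepsilon_0}\int \frac{\rho(\mathbf{r}',\,t_r)}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^{3}r',
\qquad
\mathbf{A}(\mathbf{r},t) = \frac{\mu_0}{4\pi}\int \frac{\mathbf{J}(\mathbf{r}',\,t_r)}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^{3}r',
\qquad
t_r = t - \frac{|\mathbf{r}-\mathbf{r}'|}{c}.
```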
Extension to quantum electrodynamics
Canonical quantization of the electromagnetic fields proceeds by elevating the scalar and vector potentials; φ(x), A(x), from fields to field operators. Substituting into the previous Lorenz gauge equations gives:
Here, J and ρ are the current and charge density of the matter field. If the matter field is taken so as to describe the interaction of electromagnetic fields with the Dirac electron given by the four-component Dirac spinor field ψ, the current and charge densities have form:
where α are the first three Dirac matrices. Using this, we can re-write Maxwell's equations as:
which is the form used in quantum electrodynamics.
Geometric algebra formulations
Analogous to the tensor formulation, two objects, one for the electromagnetic field and one for the current density, are introduced. In geometric algebra (GA) these are multivectors, which sometimes follow Ricci calculus.
Algebra of physical space
In the Algebra of physical space (APS), also known as the Clifford algebra , the field and current are represented by multivectors.
The field multivector, known as the Riemann–Silberstein vector, is
and the four-current multivector is
using an orthonormal basis . Similarly, the unit pseudoscalar is , due to the fact that the basis used is orthonormal. These basis vectors share the algebra of the Pauli matrices, but are usually not equated with them, as they are different objects with different interpretations.
After defining the derivative
Maxwell's equations are reduced to the single equation
In three dimensions, the derivative has a special structure allowing the introduction of a cross product:
from which it is easily seen that Gauss's law is the scalar part, the Ampère–Maxwell law is the vector part, Faraday's law is the pseudovector part, and Gauss's law for magnetism is the pseudoscalar part of the equation. After expanding and rearranging, this can be written as
Spacetime algebra
We can identify APS as a subalgebra of the spacetime algebra (STA) , defining and . The s have the same algebraic properties of the gamma matrices but their matrix representation is not needed. The derivative is now
The Riemann–Silberstein becomes a bivector
and the charge and current density become a vector
Owing to the identity
Maxwell's equations reduce to the single equation
Differential forms approach
In what follows, cgs-Gaussian units, not SI units are used. (To convert to SI, see here.) By Einstein notation, we implicitly take the sum over all values of the indices that can vary within the dimension.
Field 2-form
In free space, where and are constant everywhere, Maxwell's equations simplify considerably once the language of differential geometry and differential forms is used. The electric and magnetic fields are now jointly described by a 2-form F in a 4-dimensional spacetime manifold. The Faraday tensor (electromagnetic tensor) can be written as a 2-form in Minkowski space with metric signature as
which is the exterior derivative of the electromagnetic four-potential
The source free equations can be written by the action of the exterior derivative on this 2-form. But for the equations with source terms (Gauss's law and the Ampère-Maxwell equation), the Hodge dual of this 2-form is needed. The Hodge star operator takes a p-form to an (n − p)-form, where n is the number of dimensions. Here, it takes the 2-form (F) and gives another 2-form (in four dimensions, n − p = 4 − 2 = 2). For the basis cotangent vectors, the Hodge dual is given as (see )
and so on. Using these relations, the dual of the Faraday 2-form is the Maxwell tensor,
Current 3-form, dual current 1-form
Here, the 3-form J is called the electric current form or current 3-form:
That F is a closed form, and the exterior derivative of its Hodge dual is the current 3-form, express Maxwell's equations:
Here d denotes the exterior derivative – a natural coordinate- and metric-independent differential operator acting on forms, and the (dual) Hodge star operator is a linear transformation from the space of 2-forms to the space of (4 − 2)-forms defined by the metric in Minkowski space (in four dimensions even by any metric conformal to this metric). The fields are in natural units where .
Since d2 = 0, the 3-form J satisfies the conservation of current (continuity equation):
The current 3-form can be integrated over a 3-dimensional space-time region. The physical interpretation of this integral is the charge in that region if it is spacelike, or the amount of charge that flows through a surface in a certain amount of time if that region is a spacelike surface cross a timelike interval.
As the exterior derivative is defined on any manifold, the differential form version of the Bianchi identity makes sense for any 4-dimensional manifold, whereas the source equation is defined if the manifold is oriented and has a Lorentz metric. In particular the differential form version of the Maxwell equations are a convenient and intuitive formulation of the Maxwell equations in general relativity.
Note: In much of the literature, the notations and are switched, so that is a 1-form called the current and is a 3-form called the dual current.
Linear macroscopic influence of matter
In a linear, macroscopic theory, the influence of matter on the electromagnetic field is described through more general linear transformation in the space of 2-forms. We call
the constitutive transformation. The role of this transformation is comparable to the Hodge duality transformation. The Maxwell equations in the presence of matter then become:
where the current 3-form J still satisfies the continuity equation .
When the fields are expressed as linear combinations (of exterior products) of basis forms θi,
the constitutive relation takes the form
where the field coefficient functions and the constitutive coefficients are anticommutative for swapping of each one's indices. In particular, the Hodge star operator that was used in the above case is obtained by taking
in terms of tensor index notation with respect to a (not necessarily orthonormal) basis in a tangent space and its dual basis in , having the gram metric matrix and its inverse matrix , and is the Levi-Civita symbol with . Up to scaling, this is the only invariant tensor of this type that can be defined with the metric.
In this formulation, electromagnetism generalises immediately to any 4-dimensional oriented manifold or with small adaptations any manifold.
Alternative metric signature
In the particle physicist's sign convention for the metric signature , the potential 1-form is
The Faraday curvature 2-form becomes
and the Maxwell tensor becomes
The current 3-form J is
and the corresponding dual 1-form is
The current norm is now positive and equals
with the canonical volume form .
Curved spacetime
Traditional formulation
Matter and energy generate curvature of spacetime. This is the subject of general relativity. Curvature of spacetime affects electrodynamics. An electromagnetic field having energy and momentum also generates curvature in spacetime. Maxwell's equations in curved spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with covariant derivatives. (Whether this is the appropriate generalization requires separate investigation.) The sourced and source-free equations become (cgs-Gaussian units):
and
Here,
is a Christoffel symbol that characterizes the curvature of spacetime and ∇α is the covariant derivative.
Formulation in terms of differential forms
The formulation of the Maxwell equations in terms of differential forms can be used without change in general relativity. The equivalence of the more traditional general relativistic formulation using the covariant derivative with the differential form formulation can be seen as follows. Choose local coordinates xα that gives a basis of 1-forms dxα in every point of the open set where the coordinates are defined. Using this basis and cgs-Gaussian units we define
The antisymmetric field tensor Fαβ, corresponding to the field 2-form F
The current-vector infinitesimal 3-form J
The epsilon tensor contracted with the differential 3-form produces 6 times the number of terms required.
Here g is as usual the determinant of the matrix representing the metric tensor, gαβ. A small computation that uses the symmetry of the Christoffel symbols (i.e., the torsion-freeness of the Levi-Civita connection) and the covariant constantness of the Hodge star operator then shows that in this coordinate neighborhood we have:
the Bianchi identity
the source equation
the continuity equation
Classical electrodynamics as the curvature of a line bundle
An elegant and intuitive way to formulate Maxwell's equations is to use complex line bundles or a principal U(1)-bundle, on the fibers of which U(1) acts regularly. The principal U(1)-connection ∇ on the line bundle has a curvature F = ∇2, which is a two-form that automatically satisfies and can be interpreted as a field strength. If the line bundle is trivial with flat reference connection d we can write and with A the 1-form composed of the electric potential and the magnetic vector potential.
In quantum mechanics, the connection itself is used to define the dynamics of the system. This formulation allows a natural description of the Aharonov–Bohm effect. In this experiment, a static magnetic field runs through a long magnetic wire (e.g., an iron wire magnetized longitudinally). Outside of this wire the magnetic induction is zero, in contrast to the vector potential, which essentially depends on the magnetic flux through the cross-section of the wire and does not vanish outside. Since there is no electric field either, the Maxwell tensor vanishes (F = 0) throughout the space-time region outside the tube during the experiment. This means by definition that the connection ∇ is flat there.
In mentioned Aharonov–Bohm effect, however, the connection depends on the magnetic field through the tube since the holonomy along a non-contractible curve encircling the tube is the magnetic flux through the tube in the proper units. This can be detected quantum-mechanically with a double-slit electron diffraction experiment on an electron wave traveling around the tube. The holonomy corresponds to an extra phase shift, which leads to a shift in the diffraction pattern.
Discussion and other approaches
Following are the reasons for using each of such formulations.
Potential formulation
In advanced classical mechanics it is often useful, and in quantum mechanics frequently essential, to express Maxwell's equations in a potential formulation involving the electric potential (also called scalar potential) φ, and the magnetic potential (a vector potential) A. For example, the analysis of radio antennas makes full use of Maxwell's vector and scalar potentials to separate the variables, a common technique used in formulating the solutions of differential equations. The potentials can be introduced by using the Poincaré lemma on the homogeneous equations to solve them in a universal way (this assumes that we consider a topologically simple, e.g. contractible space). The potentials are defined as in the table above. Alternatively, these equations define E and B in terms of the electric and magnetic potentials that then satisfy the homogeneous equations for E and B as identities. Substitution gives the non-homogeneous Maxwell equations in potential form.
Many different choices of A and φ are consistent with given observable electric and magnetic fields E and B, so the potentials seem to contain more, (classically) unobservable information. The non uniqueness of the potentials is well understood, however. For every scalar function of position and time , the potentials can be changed by a gauge transformation as
without changing the electric and magnetic field. Two pairs of gauge transformed potentials and are called gauge equivalent, and the freedom to select any pair of potentials in its gauge equivalence class is called gauge freedom. Again by the Poincaré lemma (and under its assumptions), gauge freedom is the only source of indeterminacy, so the field formulation is equivalent to the potential formulation if we consider the potential equations as equations for gauge equivalence classes.
The potential equations can be simplified using a procedure called gauge fixing. Since the potentials are only defined up to gauge equivalence, we are free to impose additional equations on the potentials, as long as for every pair of potentials there is a gauge equivalent pair that satisfies the additional equations (i.e. if the gauge fixing equations define a slice to the gauge action). The gauge-fixed potentials still have a gauge freedom under all gauge transformations that leave the gauge fixing equations invariant. Inspection of the potential equations suggests two natural choices. In the Coulomb gauge, we impose ∇ · A = 0, which is mostly used in the case of magnetostatics when we can neglect the μ0ε0 ∂²A/∂t² term. In the Lorenz gauge (named after the Dane Ludvig Lorenz), we impose ∇ · A + μ0ε0 ∂φ/∂t = 0.
The Lorenz gauge condition has the advantage of being Lorentz invariant and leading to Lorentz-invariant equations for the potentials.
Manifestly covariant (tensor) approach
Maxwell's equations are exactly consistent with special relativity—i.e., if they are valid in one inertial reference frame, then they are automatically valid in every other inertial reference frame. In fact, Maxwell's equations were crucial in the historical development of special relativity. However, in the usual formulation of Maxwell's equations, their consistency with special relativity is not obvious; it can only be proven by a laborious calculation.
For example, consider a conductor moving in the field of a magnet. In the frame of the magnet, that conductor experiences a magnetic force. But in the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The motion is exactly consistent in these two different reference frames, but it mathematically arises in quite different ways.
For this reason and others, it is often useful to rewrite Maxwell's equations in a way that is "manifestly covariant"—i.e. obviously consistent with special relativity, even with just a glance at the equations—using covariant and contravariant four-vectors and tensors. This can be done using the EM tensor F, or the 4-potential A, with the 4-current J.
Differential forms approach
Gauss's law for magnetism and the Faraday–Maxwell law can be grouped together since the equations are homogeneous, and be seen as geometric identities expressing the field F (a 2-form), which can be derived from the 4-potential A. Gauss's law for electricity and the Ampere–Maxwell law could be seen as the dynamical equations of motion of the fields, obtained via the Lagrangian principle of least action, from the "interaction term" AJ (introduced through gauge covariant derivatives), coupling the field to matter. For the field formulation of Maxwell's equations in terms of a principle of extremal action, see electromagnetic tensor.
Often, the time derivative in the Faraday–Maxwell equation motivates calling this equation "dynamical", which is somewhat misleading in the sense of the preceding analysis. This is rather an artifact of breaking relativistic covariance by choosing a preferred time direction. To have physical degrees of freedom propagated by these field equations, one must include a kinetic term for A, and take into account the non-physical degrees of freedom that can be removed by gauge transformation . See also gauge fixing and Faddeev–Popov ghosts.
Geometric calculus approach
This formulation uses the algebra that spacetime generates through the introduction of a distributive, associative (but not commutative) product called the geometric product. Elements and operations of the algebra can generally be associated with geometric meaning. The members of the algebra may be decomposed by grade (as in the formalism of differential forms) and the (geometric) product of a vector with a k-vector decomposes into a -vector and a -vector. The -vector component can be identified with the inner product and the -vector component with the outer product. It is of algebraic convenience that the geometric product is invertible, while the inner and outer products are not. As such, powerful techniques such as Green's functions can be used. The derivatives that appear in Maxwell's equations are vectors and electromagnetic fields are represented by the Faraday bivector F. This formulation is as general as that of differential forms for manifolds with a metric tensor, as then these are naturally identified with r-forms and there are corresponding operations. Maxwell's equations reduce to one equation in this formalism. This equation can be separated into parts as is done above for comparative reasons.
See also
Ricci calculus
Electromagnetic wave equation
Speed of light
Electric constant
Magnetic constant
Free space
Near and far field
Electromagnetic field
Electromagnetic radiation
Quantum electrodynamics
List of electromagnetism equations
Notes
References
(with worked problems in Warnick, Russer 2006 )
Electromagnetism
Mathematical physics
Project Daedalus
Project Daedalus (named after Daedalus, the Greek mythological designer who crafted wings for human flight) was a study conducted between 1973 and 1978 by the British Interplanetary Society to design a plausible uncrewed interstellar probe. Intended mainly as a scientific probe, the design criteria specified that the spacecraft had to use existing or near-future technology and had to be able to reach its destination within a human lifetime. Alan Bond led a team of scientists and engineers who proposed using a fusion rocket to reach Barnard's Star 5.9 light years away. The trip was estimated to take 50 years, but the design was required to be flexible enough that it could be sent to any other target star.
All the papers produced by the study are available in a BIS book, Project Daedalus: Demonstrating the Engineering Feasibility of Interstellar Travel.
Concept
Daedalus would be constructed in Earth orbit and have an initial mass of 54,000 tonnes including 50,000 tonnes of fuel and 500 tonnes of scientific payload. Daedalus was to be a two-stage spacecraft. The first stage would operate for two years, taking the spacecraft to 7.1% of light speed (0.071 c), and then after it was jettisoned, the second stage would fire for 1.8 years, taking the spacecraft up to about 12% of light speed (0.12 c), before being shut down for a 46-year cruise period. Due to the extreme temperature range of operation required, from near absolute zero to 1600 K, the engine bells and support structure would be made of molybdenum alloyed with titanium, zirconium, and carbon, which retains strength even at cryogenic temperatures. A major stimulus for the project was Friedwardt Winterberg's inertial confinement fusion drive concept, for which he received the Hermann Oberth gold medal award.
This velocity is well beyond the capabilities of chemical rockets or even the type of nuclear pulse propulsion studied during Project Orion. According to Dr. Tony Martin, controlled-fusion engines and nuclear–electric systems have very low thrust, and the equipment needed to convert nuclear energy into electricity has a large mass, resulting in small accelerations that would take a century to reach the desired speed; thermodynamic nuclear engines of the NERVA type require a great quantity of fuel; photon rockets would have to generate power at a rate of 3 W per kg of vehicle mass and require mirrors with an absorptivity of less than 1 part in 10⁶; and the problems of an interstellar ramjet are the tenuous interstellar medium, with a density of about 1 atom/cm³, the large-diameter funnel required, and the high power needed for its electric field. Thus the only suitable propulsion method for the project was thermonuclear pulse propulsion.
Daedalus would be propelled by a fusion rocket using pellets of a deuterium/helium-3 mix that would be ignited in the reaction chamber by inertial confinement using electron beams. The electron beam system would be powered by a set of induction coils trapping energy from the plasma exhaust stream. 250 pellets would be detonated per second, and the resulting plasma would be directed by a magnetic nozzle. The computed burn-up fractions for the fusion fuels were 0.175 and 0.133, producing exhaust velocities of 10,600 km/s and 9,210 km/s respectively. Due to the scarcity of helium-3 on Earth, it was to be mined from the atmosphere of Jupiter by robotic factories supported by large hot-air balloons over a 20-year period, or from a less distant source such as the Moon.
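As a rough consistency check on the figures above (a minimal sketch: the 46,000/4,000 tonne propellant split between the stages and the second-stage masses are illustrative assumptions, not values stated in this text), the ideal rocket equation reproduces stage velocities of roughly 7% and 12% of light speed, and the quoted 5.9 light-year trip at about 12% of c takes on the order of 50 years:

```python
import math

C = 299_792.458  # speed of light, km/s

def delta_v(v_exhaust_km_s, m_initial_t, m_final_t):
    """Ideal (Tsiolkovsky) rocket equation."""
    return v_exhaust_km_s * math.log(m_initial_t / m_final_t)

# Figures quoted above: 54,000 t initial mass, 50,000 t propellant,
# exhaust velocities of 10,600 km/s (stage 1) and 9,210 km/s (stage 2).
# The propellant split (46,000 t / 4,000 t) and the second-stage masses
# are illustrative assumptions, not figures from the study text above.
dv1 = delta_v(10_600, 54_000, 54_000 - 46_000)   # first-stage burn
dv2 = delta_v(9_210, 5_000, 1_000)               # second-stage burn (assumed masses)

print(f"stage 1: {dv1:,.0f} km/s (~{dv1 / C:.3f} c)")            # roughly 0.07 c
print(f"total:   {dv1 + dv2:,.0f} km/s (~{(dv1 + dv2) / C:.3f} c)")  # roughly 0.12 c

# Cruise-time check: 5.9 light years at ~12% of c takes about 49 years.
print(f"cruise time: {5.9 / 0.12:.0f} years")
```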
The second stage would have two 5-metre optical telescopes and two 20-metre radio telescopes. About 25 years after launch, these telescopes would begin examining the area around Barnard's Star to learn more about any accompanying planets. This information would be sent back to Earth, using the 40-metre diameter second stage engine bell as a communications dish, and targets of interest would be selected. Since the spacecraft would not decelerate upon reaching Barnard's Star, Daedalus would carry 18 autonomous sub-probes that would be launched between 7.2 and 1.8 years before the main craft entered the target system. These sub-probes would be propelled by nuclear-powered ion drives and would carry cameras, spectrometers, and other sensory equipment. The sub-probes would fly past their targets, still travelling at 12% of the speed of light, and transmit their findings back to the Daedalus second stage, acting as a mothership, for relay back to Earth.
The ship's payload bay containing its sub-probes, telescopes, and other equipment would be protected from the interstellar medium during transit by a beryllium disc, up to 7 mm thick, weighing up to 50 tonnes. This erosion shield would be made from beryllium due to its lightness and high latent heat of vaporisation. Larger obstacles that might be encountered while passing through the target system would be dispersed by an artificially generated cloud of particles, ejected by support vehicles called dust bugs about 200 km ahead of the vehicle. The spacecraft would carry a number of robot wardens capable of autonomously repairing damage or malfunctions.
Specifications
Overall length: 190 metres
Payload mass: 450 tonnes
Variants
A quantitative engineering analysis of a self-replicating variation on Project Daedalus was published in 1980 by Robert Freitas. The non-replicating design was modified to include all subsystems necessary for self-replication: the probe would deliver a seed factory, with a mass of about 443 metric tons, to a distant site; the seed factory would replicate many copies of itself on-site to increase its total manufacturing capacity; and the resulting automated industrial complex would then construct further probes, each with a seed factory on board, over a 1,000-year period. Each REPRO would weigh over 10 million tons due to the extra fuel needed to decelerate from 12% of lightspeed.
Another possibility is to equip Daedalus with a magnetic sail, similar to the magnetic scoop of a Bussard ramjet, to use the destination star's heliosphere as a brake. This would make carrying deceleration fuel unnecessary and allow a much more in-depth study of the chosen star system.
See also
Breakthrough Starshot
Project Icarus
Project Longshot
Enzmann starship
Further reading
References
External links
Project Daedalus, The Encyclopedia of Astrobiology, Astronomy and Spaceflight
Starship Daedalus
Project Daedalus – Origins
The Daedalus Starship
Renderings of the Daedalus Starship to scale
Project Daedalus
Project Daedalus: The Propulsion System Part 1; Theoretical considerations and calculations. 2. Review of Advanced Propulsion Systems
Bond, A.; Martin, A. R. (1978). "Project Daedalus". Journal of the British Interplanetary Society Supplement, pp. S5–S7. Bibliographic code: 1978JBIS...31S...5B
British Interplanetary Society: Project Daedalus, video rendering by Hazegrayart
Hypothetical spacecraft
Interstellar travel
Nuclear spacecraft propulsion | 0.770913 | 0.991009 | 0.763981 |
Inertial electrostatic confinement | Inertial electrostatic confinement, or IEC, is a class of fusion power devices that use electric fields to confine the plasma rather than the more common approach using magnetic fields found in magnetic confinement fusion (MCF) designs. Most IEC devices directly accelerate their fuel to fusion conditions, thereby avoiding energy losses seen during the longer heating stages of MCF devices. In theory, this makes them more suitable for using alternative aneutronic fusion fuels, which offer a number of major practical benefits, and it has made IEC one of the more widely studied approaches to fusion.
As the negatively charged electrons and positively charged ions in the plasma move in different directions in an electric field, the field has to be arranged in some fashion so that the two particles remain close together. Most IEC designs achieve this by pulling the electrons or ions across a potential well, beyond which the potential drops and the particles continue to move due to their inertia. Fusion occurs in this lower-potential area when ions moving in different directions collide. Because the motion provided by the field creates the energy levels needed for fusion, not random collisions with the rest of the fuel, the bulk of the plasma does not have to be hot and the systems as a whole work at much lower temperatures and energy levels than MCF devices.
One of the simpler IEC devices is the fusor, which consists of two concentric metal wire spherical grids. When the grids are charged to a high voltage, the fuel gas ionizes. The field between the two then accelerates the fuel inward, and when it passes the inner grid the field drops and the ions continue inward toward the center. If they impact with another ion they may undergo fusion. If they do not, they travel out of the reaction area into the charged area again, where they are re-accelerated inward. Overall the physical process is similar to the colliding beam fusion, although beam devices are linear instead of spherical. Other IEC designs, like the polywell, differ largely in the arrangement of the fields used to create the potential well.
A number of detailed theoretical studies have pointed out that the IEC approach is subject to a number of energy loss mechanisms that are not present if the fuel is evenly heated, or "Maxwellian". These loss mechanisms appear to be greater than the rate of fusion in such devices, meaning they can never reach fusion breakeven and thus cannot be used for power production. These mechanisms are more powerful when the atomic mass of the fuel increases, which suggests that IEC also has no advantage with aneutronic fuels. Whether these critiques apply to specific IEC devices remains highly contentious.
Mechanism
For every volt that an ion is accelerated across, its kinetic energy gain corresponds to an increase of temperature of 11,604 kelvins (K). For example, a typical magnetic confinement fusion plasma is 15 keV, which corresponds to 170 megakelvin (MK). An ion with a charge of one can reach this temperature by being accelerated across a 15,000 V drop. This sort of voltage is easily achieved in common electrical devices; a typical cathode-ray tube operates in this range.
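A minimal sketch of the conversion quoted above, using the standard temperature equivalent T = E/k_B (the specific energies are just the examples from the paragraph):

```python
K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, i.e. joules per electronvolt

def ev_to_kelvin(energy_ev):
    """Temperature equivalent of a particle energy: T = E / k_B."""
    return energy_ev * E_CHARGE / K_B

print(ev_to_kelvin(1))        # ~11,604 K per volt of acceleration for a singly charged ion
print(ev_to_kelvin(15_000))   # ~1.74e8 K, i.e. roughly 170 megakelvin for a 15 keV plasma
```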
In fusors, the voltage drop is made with a wire cage. However, high conduction losses occur in fusors because most ions fall into the cage before fusion can occur. This prevents current fusors from ever producing net power.
History
1930s
Mark Oliphant adapts Cockcroft and Walton's particle accelerator at the Cavendish Laboratory to create tritium and helium-3 by nuclear fusion.
1950s
Three researchers at LANL, including Jim Tuck, first explored the idea theoretically in a 1959 paper. The idea had been proposed by a colleague. The concept was to capture electrons inside a positive cage. The electrons would accelerate the ions to fusion conditions.
Other concepts were being developed which would later merge into the IEC field. These include the publication of the Lawson criterion by John D. Lawson in 1957 in England, which set minimum criteria for power plant designs that perform fusion using hot Maxwellian plasma clouds. Another was work exploring how electrons behave inside the biconic cusp, done by Harold Grad's group at the Courant Institute in 1957. A biconic cusp is a device with two alike magnetic poles facing one another (i.e. north-north). Electrons and ions can be trapped between these.
1960s
In his work with vacuum tubes, Philo Farnsworth observed that electric charge would accumulate in regions of the tube. Today, this effect is known as the multipactor effect. Farnsworth reasoned that if ions were concentrated high enough they could collide, and fuse. In 1962, he filed a patent on a design using a positive inner cage to concentrate plasma, in order to achieve nuclear fusion. During this time, Robert L. Hirsch joined the Farnsworth Television labs and began work on what became the fusor. Hirsch patented the design in 1966 and published the design in 1967. The Hirsch machine was a 17.8 cm diameter machine with 150 kV voltage drop across it and used ion beams to help inject material.
Simultaneously, a key plasma physics text was published by Lyman Spitzer at Princeton in 1963. Spitzer took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma. Meanwhile, magnetic mirror theory and direct energy conversion were developed by Richard F. Post's group at LLNL. A magnetic mirror or magnetic bottle is similar to a biconic cusp except that the poles are reversed.
1980s
In 1980 Robert W. Bussard developed a cross between a fusor and magnetic mirror, the polywell. The idea was to confine a non-neutral plasma using magnetic fields. This would, in turn, attract ions. This idea had been published previously, notably by Oleg Lavrentiev in Russia. Bussard patented the design and received funding from Defense Threat Reduction Agency, DARPA and the US Navy to develop the idea.
1990s
Bussard and Nicholas Krall published theory and experimental results in the early nineties. In response, Todd Rider at MIT, under Lawrence Lidsky developed general models of the device. Rider argued that the device was fundamentally limited. That same year, 1995, William Nevins at LLNL published a criticism of the polywell. Nevins argued that the particles would build up angular momentum, causing the dense core to degrade.
In the mid-nineties, Bussard's publications prompted the development of fusors at the University of Wisconsin–Madison and at the University of Illinois at Urbana–Champaign. Madison's machine was first built in 1995. George H. Miley's team at Illinois built a 25 cm fusor which produced 10⁷ neutrons using deuterium gas and discovered the "star mode" of fusor operation in 1994. The following year, the first "US-Japan Workshop on IEC Fusion" was conducted. This is now the premier conference for IEC researchers. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler Aerospace under the name FusionStar. In the late nineties, hobbyist Richard Hull began building amateur fusors in his home. In March 1999, he achieved a neutron rate of 10⁵ neutrons per second. Hull and Paul Schatzkin started fusor.net in 1998. Through this open forum, a community of amateur fusioneers have done nuclear fusion using homemade fusors.
2000s
Despite the demonstration in 2000 of 7,200 hours of operation without degradation at high input power, as a sealed reaction chamber with automated control, the FusionStar project was canceled and the company NSD Ltd was founded. The spherical FusionStar technology was then further developed as a linear geometry system with improved efficiency and higher neutron output by NSD Ltd, which became NSD-Fusion GmbH in 2005.
In early 2000, Alex Klein developed a cross between a polywell and ion beams. Using Gabor lensing, Dr. Klein attempted to focus plasma into non-neutral clouds for fusion. He founded FP generation, which in April 2009 raised $3 million in financing from two venture funds. The company developed the MIX and Marble machine, but ran into technical challenges and closed.
In response to Rider's criticisms, researchers at LANL reasoned that an oscillating plasma could be at local thermodynamic equilibrium; this prompted the POPS and Penning trap machines. At this time, MIT researchers became interested in fusors for space propulsion and powering space vehicles. Specifically, researchers developed fusors with multiple inner cages. In 2005, Greg Piefer founded Phoenix Nuclear Labs to develop the fusor into a neutron source for the mass production of medical isotopes.
Robert Bussard began speaking openly about the Polywell in 2006. He attempted to generate interest in the research, before passing away from multiple myeloma in 2007. His company was able to raise over ten million in funding from the US Navy in 2008 and 2009.
2010s
Bussard's publications prompted the University of Sydney to start research into electron trapping in polywells in 2010. The group has explored theory, modeled devices, built devices, measured trapping and simulated trapping. These machines were all low-power, low-cost, and all had a small beta ratio. In 2010, Carl Greninger founded the Northwest Nuclear Consortium, an organization which teaches nuclear engineering principles to high school students using a 60 kV fusor. In 2012, Mark Suppes received attention for a fusor. Suppes also measured electron trapping inside a polywell. In 2013, the first IEC textbook was published by George H. Miley.
2020s
Avalanche Energy is a start-up with about $51 million in venture/DOD funding that is working on small (tens of centimetres), modular fusion batteries producing 5 kWe. They are targeting 600 kV for their device to achieve certain design goals. Their Orbitron concept electrostatically (magnetron-augmented) confines ions orbiting around a high-voltage (hundreds of kV) cathode in a high-vacuum environment (p < 10⁻⁸ Torr) surrounded by one or two anode shells separated by a dielectric. Concerns include breakdown of the vacuum/dielectric and insulator surface flashover. Permanent-magnet or electromagnet field generators are arranged coaxially around the anode. The magnetic field strength is targeted to exceed the Hull cut-off condition for voltages ranging from 50 to 4,000 kV. Candidate ions include protons (m/z = 1), deuterium (m/z = 2), tritium (m/z = 3), lithium-6 (m/z = 6), and boron-11 (m/z = 11). Recent progress includes successful testing of a 300 kV bushing.
Designs with cage
Fusor
The best known IEC device is the fusor. This device typically consists of two wire cages inside a vacuum chamber. These cages are referred to as grids. The inner cage is held at a negative voltage against the outer cage. A small amount of fusion fuel is introduced (deuterium gas being the most common). The voltage between the grids causes the fuel to ionize. The positive ions fall down the voltage drop toward the negative inner cage. As they accelerate, the electric field does work on the ions, accelerating them to fusion conditions. If these ions collide, they can fuse. Fusors can also use ion guns rather than electric grids. Fusors are popular with amateurs, because they can easily be constructed, can regularly produce fusion and are a practical way to study nuclear physics. Fusors have also been used as a commercial neutron generator for industrial applications.
No fusor has come close to producing a significant amount of fusion power. They can be dangerous if proper care is not taken because they require high voltages and can produce harmful radiation (neutrons and X-rays). Often, ions collide with the cages or wall. This conducts energy away from the device limiting its performance. In addition, collisions heat the grids, which limits high-power devices. Collisions also spray high-mass ions into the reaction chamber, pollute the plasma, and cool the fuel.
POPS
In examining nonthermal plasma, workers at LANL realized that scattering was more likely than fusion, because the Coulomb scattering cross section is larger than the fusion cross section. In response they built POPS, a machine with a wire cage in which the ions oscillate about the centre in steady state. Such a plasma can be at local thermodynamic equilibrium. The ion oscillation is predicted to maintain the equilibrium distribution of the ions at all times, which would eliminate any power loss due to Coulomb scattering, resulting in a net energy gain. Working from this design, researchers in Russia simulated the POPS design using particle-in-cell code in 2009. This reactor concept becomes increasingly efficient as the size of the device shrinks. However, very high transparencies (>99.999%) are required for successful operation of the POPS concept. To this end, S. Krupakar Murali et al. suggested that carbon nanotubes could be used to construct the cathode grids. This is also the first (suggested) application of carbon nanotubes directly in any fusion reactor.
Designs with fields
Several schemes attempt to combine magnetic confinement and electrostatic fields with IEC. The goal is to eliminate the inner wire cage of the fusor, and the resulting problems.
Polywell
The polywell uses a magnetic field to trap electrons. When electrons or ions move into a dense field, they can be reflected by the magnetic mirror effect. A polywell is designed to trap electrons in the center, with a dense magnetic field surrounding them. This is typically done using six electromagnets in a box. Each magnet is positioned so that its poles face inward, creating a null point in the center. The electrons trapped in the center form a "virtual electrode". Ideally, this electron cloud accelerates ions to fusion conditions.
Penning trap
A Penning trap uses both an electric and a magnetic field to trap particles, a magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially.
In a Penning trap fusion reactor, first the magnetic and electric fields are turned on. Then, electrons are emitted into the trap, caught and measured. The electrons form a virtual electrode similar to that in a polywell, described above. These electrons are intended to then attract ions, accelerating them to fusion conditions.
In the 1990s, researchers at LANL built a Penning trap to do fusion experiments. Their device (PFX) was a small (millimeters) and low power (one fifth of a tesla, less than ten thousand volts) machine.
Marble
MARBLE (multiple ambipolar recirculating beam line experiment) was a device which moved electrons and ions back and forth in a line. Particle beams were reflected using electrostatic optics. These optics made static voltage surfaces in free space. Such surfaces reflect only particles with a specific kinetic energy, while higher-energy particles can traverse these surfaces unimpeded, although not unaffected. Electron trapping and plasma behavior was measured by Langmuir probe. Marble kept ions on orbits that do not intersect grid wires—the latter also improves the space charge limitations by multiple nesting of ion beams at several energies. Researchers encountered problems with ion losses at the reflection points. Ions slowed down when turning, spending much time there, leading to high conduction losses.
MIX
The multipole ion-beam experiment (MIX) accelerated ions and electrons into a negatively charged electromagnet. Ions were focused using Gabor lensing. Researchers had problems with a very thin ion-turning region very close to a solid surface, where ions could be conducted away.
Magnetically insulated
Devices have been proposed where the negative cage is magnetically insulated from the incoming plasmas.
General criticism
In 1995, Todd Rider critiqued all fusion power schemes using plasma systems not at thermodynamic equilibrium. Rider assumed that plasma clouds at equilibrium had the following properties:
They were quasineutral, where the positives and negatives are equally mixed together.
They had evenly mixed fuel.
They were isotropic, meaning that its behavior was the same in any given direction.
The plasma had a uniform energy and temperature throughout the cloud.
The plasma was an unstructured Gaussian sphere.
Rider argued that if such a system were sufficiently heated, it could not be expected to produce net power, due to high X-ray losses.
Other fusion researchers such as Nicholas Krall, Robert W. Bussard, Norman Rostoker, and Monkhorst disagreed with this assessment. They argue that the plasma conditions inside IEC machines are not quasineutral and have non-thermal energy distributions. Because the electron has a mass and diameter much smaller than the ion's, the electron temperature can be several orders of magnitude different from that of the ions. This may allow the plasma to be optimized, whereby cold electrons would reduce radiation losses and hot ions would raise fusion rates.
Thermalization
The primary problem that Rider has raised is the thermalization of ions. Rider argued that, in a quasineutral plasma where all the positives and negatives are distributed equally, the ions will interact. As they do, they exchange energy, causing their energy to spread out (in a Wiener process) heading to a bell curve (or Gaussian function) of energy. Rider focused his arguments within the ion population and did not address electron-to-ion energy exchange or non-thermal plasmas.
This spreading of energy causes several problems. One problem is making more and more cold ions, which are too cold to fuse. This would lower output power. Another problem is higher energy ions which have so much energy that they can escape the machine. This lowers fusion rates while raising conduction losses, because as the ions leave, energy is carried away with them.
Radiation
Rider estimated that once the plasma is thermalized, the radiation losses would outpace any amount of fusion energy generated. He focused on a specific type of radiation: X-ray radiation. A particle in a plasma will radiate light any time it speeds up or slows down. This can be estimated using the Larmor formula. Rider estimated this for D–T (deuterium–tritium fusion), D–D (deuterium fusion), and D–He3 (deuterium–helium-3 fusion), and concluded that breakeven operation with any fuel except D–T is difficult.
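For reference, a minimal sketch of the non-relativistic Larmor formula mentioned above, P = q²a²/(6πε₀c³) in SI units; the electron charge and the acceleration value used here are illustrative inputs, not figures from Rider's analysis:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 2.99792458e8          # speed of light, m/s
Q_E = 1.602176634e-19     # elementary charge, C

def larmor_power(charge_c, acceleration_m_s2):
    """Non-relativistic Larmor formula: P = q^2 a^2 / (6 * pi * eps0 * c^3), in watts."""
    return charge_c**2 * acceleration_m_s2**2 / (6 * math.pi * EPS0 * C**3)

# Illustrative only: power radiated by a single electron at an arbitrary acceleration.
# Rider's argument sums radiation losses of this kind over the whole plasma.
print(larmor_power(Q_E, 1e20))
```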
Core focus
In 1995, Nevins argued that such machines would need to expend a great deal of energy maintaining ion focus in the center. The ions need to be focused so that they can find one another, collide, and fuse. Over time the positive ions and negative electrons would naturally intermix because of electrostatic attraction. This causes the focus to be lost. This is core degradation. Nevins argued mathematically, that the fusion gain (ratio of fusion power produced to the power required to maintain the non-equilibrium ion distribution function) is limited to 0.1 assuming that the device is fueled with a mixture of deuterium and tritium.
The core focus problem was also identified in fusors by Tim Thorson at the University of Wisconsin–Madison during his 1996 doctoral work. Charged ions would have some motion before they started accelerating in the center. This motion could be a twisting motion, where the ion had angular momentum, or simply a tangential velocity. This initial motion causes the cloud in the center of the fusor to be unfocused.
Brillouin limit
In 1945, Columbia University professor Léon Brillouin suggested that there is a limit to how many electrons one can pack into a given volume. This limit is commonly referred to as the Brillouin limit or Brillouin density and is given by:

n_B = B² / (2 μ₀ m c²)

where B is the magnetic field, μ₀ the permeability of free space, m the mass of the confined particles, and c the speed of light. This may limit the charge density inside IEC devices.
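As an illustration of the limit (a minimal sketch; the 1 tesla field is an arbitrary example value, not a figure from any IEC device above):

```python
import math

MU0 = 4 * math.pi * 1e-7     # permeability of free space, H/m
C = 2.99792458e8             # speed of light, m/s
M_E = 9.1093837015e-31       # electron mass, kg

def brillouin_density(b_tesla, mass_kg):
    """Brillouin limit: n_B = B^2 / (2 * mu0 * m * c^2), in particles per m^3."""
    return b_tesla**2 / (2 * MU0 * mass_kg * C**2)

print(f"{brillouin_density(1.0, M_E):.2e}")  # ~4.9e18 electrons/m^3 for B = 1 T
```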
Commercial applications
Since fusion reactions generate neutrons, the fusor has been developed into a family of compact, sealed reaction chamber neutron generators for a wide range of applications that need moderate neutron output rates at a moderate price. Very high output neutron sources may be used to make products such as molybdenum-99 and nitrogen-13, medical isotopes used for PET scans.
Devices
Government and commercial
Los Alamos National Laboratory – Researchers there developed the POPS and Penning trap machines.
Turkish Atomic Energy Authority – In 2013 this team built a fusor at the Saraykoy Nuclear Research and Training Center in Turkey. This fusor can perform deuterium fusion, producing neutrons.
ITT Corporation – Hirsch's original machine was a 17.8 cm diameter machine with a 150 kV voltage drop across it. This machine used ion beams.
Phoenix Nuclear Labs – Has developed a commercial neutron source based on a fusor, sustaining the deuterium–deuterium fusion reaction for 132 hours of continuous operation.
Energy Matter Conversion Inc – A company in Santa Fe which has developed large, high-powered polywell devices for the US Navy.
NSD-Gradel-Fusion – Sealed IEC neutron generators for DD (2.5 MeV) or DT (14 MeV) reactions with a range of maximum outputs, manufactured by Gradel sárl in Luxembourg.
Atomic Energy Organization of Iran – Researchers at Shahid Beheshti University in Iran have built a fusor which can produce neutrons at 80 kilovolts using deuterium gas.
Avalanche Energy – Has received $5 million in venture capital to build their prototype.
CPP-IPR – This institute in India has achieved a significant milestone by developing India's first Inertial Electrostatic Confinement Fusion (IECF) neutron source. The device is capable of reaching a potential of −92 kV and can generate a neutron yield of up to 10⁷ neutrons per second by deuterium fusion. The primary objective of the program is to advance portable and handheld neutron sources with both linear and spherical geometries.
Universities
Tokyo Institute of Technology has four IEC devices of different shapes: a spherical machine, a cylindrical device, a co-axial double cylinder and a magnetically assisted device.
University of Wisconsin–Madison – A group at Wisconsin–Madison has several large devices, since 1995.
University of Illinois at Urbana–Champaign – The fusion studies laboratory has built a ~25 cm fusor which has produced neutrons using deuterium gas.
Massachusetts Institute of Technology – For his doctoral thesis in 2007, Carl Dietrich built a fusor and studied its potential use in spacecraft propulsion. Also, Thomas McGuire studied multiple well fusors for applications in spaceflight.
University of Sydney has built several IEC devices and also low power, low beta ratio polywells. The first was constructed of Teflon rings and was about the size of a coffee cup. The second has ~12" diameter full casing, metal rings.
Eindhoven Technical University
Amirkabir University of Technology and the Atomic Energy Organization of Iran have investigated the effect of strong pulsed magnetic fields on the neutron production rate of an IEC device. Their study showed that with a 1–2 tesla magnetic field it is possible to increase the discharge current and neutron production rate more than tenfold compared with ordinary operation.
The Institute of Space Systems at the University of Stuttgart is developing IEC devices for plasma physics research, and as an electric propulsion device, the IECT (Inertial Electrostatic Confinement Thruster).
See also
Fusor
List of fusion experiments
Northwest Nuclear Consortium
Philo Farnsworth
Phoenix Nuclear labs
Polywell
Robert Bussard
Taylor Wilson
Patents
P.T. Farnsworth, , June 1966 (Electric discharge — Nuclear interaction)
P.T. Farnsworth, . June 1968 (Method and apparatus)
Hirsch, Robert, . September 1970 (Apparatus)
Hirsch, Robert, . September 1970 (Generating apparatus — Hirsch/Meeks)
Hirsch, Robert, . October 1970 (Lithium-Ion source)
Hirsch, Robert, . April 1972 (Reduce plasma leakage)
Hirsch, Robert, . May 1972 (Electrostatic containment)
R.W. Bussard, "Method and apparatus for controlling charged particles", , May 1989 (Method and apparatus — Magnetic grid fields)
R.W. Bussard, "Method and apparatus for creating and controlling nuclear fusion reactions", , November 1992 (Method and apparatus — Ion acoustic waves)
S.T. Brookes, "Nuclear fusion reactor", UK patent GB2461267, May 2012
T.V. Stanko, "Nuclear fusion device", UK patent GB2545882, July 2017
References
External links
Polywell Fusion: Electrostatic Fusion in a Magnetic Cusp, talk at Microsoft Research
University of Wisconsin-Madison IEC homepage
IEC Overview
From Proceedings of the 1999 Fusion Summer Study (Snowmass, Colorado):
Summary of Physics Aspects of Some Emerging Concepts
Inertial-Electrostatic Confinement (IEC) of a Fusion Plasma with Grids
Fusion from Television? (American Scientist Magazine, July-August 1999)
Should Google Go Nuclear? Clean, cheap, nuclear power (no, really)
NSD-Gradel-Fusion, NSD-Gradel-Fusion (Luxembourg)
Fusion power
Fad | A fad, trend, or craze is any form of collective behavior that develops within a culture, a generation or social group in which a group of people enthusiastically follow an impulse for a short time period.
Fads are objects or behaviors that achieve short-lived popularity but fade away. Fads are often seen as sudden, quick-spreading, and short-lived events. Fads include diets, clothing, hairstyles, toys, and more. Some popular fads throughout history are toys such as yo-yos, hula hoops, and fad dances such as the Macarena, floss and the twist.
Similar to habits or customs but less durable, fads often result from an activity or behavior being perceived as popular or exciting within a peer group, or being deemed "cool" as often promoted by social networks. A fad is said to "catch on" when the number of people adopting it begins to increase to the point of being noteworthy or going viral. Fads often fade quickly when the perception of novelty is gone.
Overview
The specific nature of the behavior associated with a fad can be of any type including unusual language usage, distinctive clothing, fad diets or frauds such as pyramid schemes. Apart from general novelty, fads may be driven by mass marketing, emotional blackmail, peer pressure, or the desire to conform. Popular celebrities can also drive fads, for example the highly popularizing effect of Oprah's Book Club.
Though some consider the term trend equivalent to fad, a fad is generally considered a quick and short behavior whereas a trend is one that evolves into a long term or even permanent change.
Economics
In economics, the term is used in a similar way. Fads are mean-reverting deviations from intrinsic value caused by social or psychological forces similar to those that cause fashions in political philosophies or consumerisation.
Formation
Many contemporary fads share similar patterns of social organization. Several different models serve to examine fads and how they spread.
One way of looking at the spread of fads is through the top-down model, which argues that fashion is created for the elite, and from the elite, fashion spreads to lower classes. Early adopters might not necessarily be those of a high status, but they have sufficient resources that allow them to experiment with new innovations. When looking at the top-down model, sociologists like to highlight the role of selection. The elite might be the ones that introduce certain fads, but other people must choose to adopt those fads.
Others may argue that not all fads begin with their adopters. Social life already provides people with ideas that can help create a basis for new and innovative fads. Companies can look at what people are already interested in and create something from that information. The ideas behind fads are not always original; they might stem from what is already popular at the time. Recreation and style faddists may try out variations of a basic pattern or idea already in existence.
Another way of looking at the spread of fads is through a symbolic interaction view. People learn their behaviors from the people around them. When it comes to collective behavior, the emergence of these shared rules, meanings, and emotions are more dependent on the cues of the situation, rather than physiological arousal. This connection to symbolic interactionism, a theory that explains people's actions as being directed by shared meanings and assumptions, explains that fads are spread because people attach meaning and emotion to objects, and not because the object has practical use, for instance. People might adopt a fad because of the meanings and assumptions they share with the other people who have adopted that fad. People may join other adopters of the fad because they enjoy being a part of a group and what that symbolizes. Some people may join because they want to feel like an insider. When multiple people adopt the same fad, they may feel like they have made the right choice because other people have made that same choice.
Termination
Primarily, fads end because all innovative possibilities have been exhausted. Fads begin to fade when people no longer see them as new and unique. As more people follow the fad, some might start to see it as "overcrowded", and it no longer holds the same appeal. Many times, those who first adopt the fad also abandon it first. They begin to recognize that their preoccupation with the fad leads them to neglect some of their routine activities, and they realize the negative aspects of their behavior. Once the faddists are no longer producing new variations of the fad, people begin to realize their neglect of other activities, and the dangers of the fad. Not everyone completely abandons the fad, however, and parts may remain.
A study examined why certain fads die out quicker than others. A marketing professor at the University of Pennsylvania's Wharton School of Business, Jonah Berger and his colleague, Gael Le Mens, studied baby names in the United States and France to help explore the termination of fads. According to their results, the faster the names became popular, the faster they lost their popularity. They also found that the least successful names overall were those that caught on most quickly. Fads, like baby names, often lose their appeal just as quickly as they gained it.
Collective behavior
Fads can fit under the broad umbrella of collective behavior, which are behaviors engaged in by a large but loosely connected group of people. Other than fads, collective behavior includes the activities of people in crowds, panics, fashions, crazes, and more.
Robert E. Park, the man who created the term collective behavior, defined it as "the behavior of individuals under the influence of an impulse that is common and collective, an impulse, in other words, that is the result of social interaction". Fads are seen as impulsive, driven by emotions; however, they can bring together groups of people who may not have much in common other than their investment in the fad.
Collective obsession
Fads can also fit under the umbrella of "collective obsessions". Collective obsessions have three main features in common. The first, and most obvious, sign is an increase in the frequency and intensity of a specific belief or behavior. A fad's popularity increases quickly in frequency and intensity, whereas a trend grows more slowly. The second is that the behavior is seen as ridiculous, irrational, or evil to the people who are not a part of the obsession. Some people might see those who follow certain fads as unreasonable and irrational. To these people, the fad is ridiculous, and people's obsession with it is just as ridiculous. The third is that, after it has reached a peak, it drops off abruptly and is then followed by a counter-obsession. A counter-obsession means that once the fad is over, those who still engage in it will be ridiculed. A fad's popularity often decreases at a rapid rate once its novelty wears off. Some people might start to criticize the fad after pointing out that it is no longer popular, so it must not have been "worth the hype".
See also
Bandwagon effect
:Category:Fads (notable fads through history)
Coolhunting
Crowd psychology
Google Trends
Hype
List of Internet phenomena
Market trend
Memetics
Peer pressure
Retro style
Social contagion
Social mania
Viral phenomenon
15 minutes of fame
Bellwether (1996 novel)
Notes
References
Best, Joel (2006). Flavor of the Month: Why Smart People Fall for Fads. University of California Press. .
Burke, Sarah. "5 Marketing Strategies, 1 Question: Fad or Trend?". Spokal.
Conley, Dalton (2015). You may ask yourself: An introduction to thinking like a sociologist. New York: W.W. Norton & Co. .
Griffith, Benjamin (2013). "College Fads". St. James Encyclopedia of Popular Culture – via Gale Virtual Reference Library.
Heussner, Ki Mae. "7 Fads You Won't Forget". ABC News.
Killian, Lewis M.; Smelser, Neil J.; Turner, Ralph H. "Collective behavior". Encyclopædia Britannica.
External links
Popular culture
Crowd psychology
Reverse engineering | Reverse engineering (also known as backwards engineering or back engineering) is a process or method through which one attempts to understand through deductive reasoning how a previously made device, process, system, or piece of software accomplishes a task with very little (if any) insight into exactly how it does so. Depending on the system under consideration and the technologies employed, the knowledge gained during reverse engineering can help with repurposing obsolete objects, doing security analysis, or learning how something works.
Although the process is specific to the object on which it is being performed, all reverse engineering processes consist of three basic steps: information extraction, modeling, and review. Information extraction is the practice of gathering all relevant information for performing the operation. Modeling is the practice of combining the gathered information into an abstract model, which can be used as a guide for designing the new object or system. Review is the testing of the model to ensure the validity of the chosen abstract. Reverse engineering is applicable in the fields of computer engineering, mechanical engineering, design, electronic engineering, software engineering, chemical engineering, and systems biology.
Overview
There are many reasons for performing reverse engineering in various fields. Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. However, the reverse engineering process may not always be concerned with creating a copy or changing the artifact in some way. It may be used as part of an analysis to deduce design features from products with little or no additional knowledge about the procedures involved in their original production.
In some cases, the goal of the reverse engineering process can simply be a redocumentation of legacy systems. Even when the reverse-engineered product is that of a competitor, the goal may not be to copy it but to perform competitor analysis. Reverse engineering may also be used to create interoperable products and despite some narrowly-tailored United States and European Union legislation, the legality of using specific reverse engineering techniques for that purpose has been hotly contested in courts worldwide for more than two decades.
Software reverse engineering can help to improve the understanding of the underlying source code for the maintenance and improvement of the software, relevant information can be extracted to make a decision for software development and graphical representations of the code can provide alternate views regarding the source code, which can help to detect and fix a software bug or vulnerability. Frequently, as some software develops, its design information and improvements are often lost over time, but that lost information can usually be recovered with reverse engineering. The process can also help to cut down the time required to understand the source code, thus reducing the overall cost of the software development. Reverse engineering can also help to detect and to eliminate a malicious code written to the software with better code detectors. Reversing a source code can be used to find alternate uses of the source code, such as detecting the unauthorized replication of the source code where it was not intended to be used, or revealing how a competitor's product was built. That process is commonly used for "cracking" software and media to remove their copy protection, or to create a possibly-improved copy or even a knockoff, which is usually the goal of a competitor or a hacker.
Malware developers often use reverse engineering techniques to find vulnerabilities in an operating system to build a computer virus that can exploit the system vulnerabilities. Reverse engineering is also being used in cryptanalysis to find vulnerabilities in substitution cipher, symmetric-key algorithm or public-key cryptography.
There are other uses to reverse engineering:
Interfacing. Reverse engineering can be used when a system is required to interface to another system and how both systems would negotiate is to be established. Such requirements typically exist for interoperability.
Military or commercial espionage. Learning about an enemy's or competitor's latest research by stealing or capturing a prototype and dismantling it may result in the development of a similar product or a better countermeasure against it.
Obsolescence. Integrated circuits are often designed on proprietary systems and built on production lines, which become obsolete in only a few years. When systems using those parts can no longer be maintained since the parts are no longer made, the only way to incorporate the functionality into new technology is to reverse-engineer the existing chip and then to redesign it using newer tools by using the understanding gained as a guide. Another obsolescence originated problem that can be solved by reverse engineering is the need to support (maintenance and supply for continuous operation) existing legacy devices that are no longer supported by their original equipment manufacturer. The problem is particularly critical in military operations.
Product security analysis. That examines how a product works by determining the specifications of its components, estimating costs, and identifying potential patent infringement. Also part of product security analysis is acquiring sensitive data by disassembling and analyzing the design of a system component. Another intent may be to remove copy protection or to circumvent access restrictions.
Competitive technical intelligence. That is to understand what one's competitor is actually doing, rather than what it says that it is doing.
Saving money. Finding out what a piece of electronics can do may spare a user from purchasing a separate product.
Repurposing. Obsolete objects are then reused in a different-but-useful manner.
Design. Production and design companies have applied reverse engineering to practical, craft-based manufacturing processes. Companies can work on "historical" manufacturing collections through 3D scanning, 3D re-modeling, and re-design. In 2013, the Italian manufacturers Baldi and Savio Firmino, together with the University of Florence, optimized their innovation, design, and production processes.
Common uses
Machines
As computer-aided design (CAD) has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD, CAM, CAE, or other software. The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. The physical object can be measured using 3D scanning technologies like CMMs, laser scanners, structured light digitizers, or industrial CT scanning (computed tomography). The measured data alone, usually represented as a point cloud, lacks topological information and design intent. The former may be recovered by converting the point cloud to a triangular-faced mesh. Reverse engineering aims to go beyond producing such a mesh and to recover the design intent in terms of simple analytical surfaces where appropriate (planes, cylinders, etc.) as well as possibly NURBS surfaces to produce a boundary-representation CAD model. Recovery of such a model allows a design to be modified to meet new requirements, a manufacturing plan to be generated, etc.
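As a small illustration of recovering a simple analytical surface from measured point data (a minimal sketch, not the workflow of any particular scanning or CAD package; the synthetic point cloud stands in for scanner output), a best-fit plane can be extracted with a least-squares/SVD fit:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud.

    Returns (centroid, unit normal): the plane passes through the centroid,
    and the normal is the singular vector of the centred points with the
    smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

# Synthetic example: noisy samples of the plane z = 0, as a scanner might produce.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
z = rng.normal(scale=0.01, size=500)
cloud = np.column_stack([xy, z])

centroid, normal = fit_plane(cloud)
print(centroid, normal)   # the normal should be close to (0, 0, ±1)
```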
Hybrid modeling is a commonly used term when NURBS and parametric modeling are implemented together. Using a combination of geometric and freeform surfaces can provide a powerful method of 3D modeling. Areas of freeform data can be combined with exact geometric surfaces to create a hybrid model. A typical example of this would be the reverse engineering of a cylinder head, which includes freeform cast features, such as water jackets and high-tolerance machined areas.
Reverse engineering is also used by businesses to bring existing physical geometry into digital product development environments, to make a digital 3D record of their own products, or to assess competitors' products. It is used to analyze how a product works, what it does, what components it has; estimate costs; identify potential patent infringement; etc.
Value engineering, a related activity that is also used by businesses, involves deconstructing and analyzing products. However, the objective is to find opportunities for cost-cutting.
Printed circuit boards
Reverse engineering of printed circuit boards involves recreating fabrication data for a particular circuit board. This is done primarily to identify a design, and learn the functional and structural characteristics of a design. It also allows for the discovery of the design principles behind a product, especially if this design information is not easily available.
Outdated PCBs are often subject to reverse engineering, especially when they perform highly critical functions such as powering machinery, or other electronic components. Reverse engineering these old parts can allow the reconstruction of the PCB if it performs some crucial task, as well as finding alternatives which provide the same function, or in upgrading the old PCB.
Reverse engineering PCBs largely follows the same series of steps. First, images are created by drawing, scanning, or taking photographs of the PCB. Then, these images are ported to suitable reverse engineering software in order to create a rudimentary design for the new PCB. The quality of the images necessary for suitable reverse engineering is proportional to the complexity of the PCB itself. More complicated PCBs require well-lit photos on dark backgrounds, while fairly simple PCBs can be recreated with just basic dimensioning. Each layer of the PCB is carefully recreated in the software with the intent of producing a final design as close to the original as possible. Then, the schematics for the circuit are finally generated using an appropriate tool.
Software
In 1990, the Institute of Electrical and Electronics Engineers (IEEE) defined (software) reverse engineering (SRE) as "the process of analyzing a subject system to identify the system's components and their interrelationships and to create representations of the system in another form or at a higher level of abstraction", in which the "subject system" is the end product of software development. Reverse engineering is a process of examination only, and the software system under consideration is not modified, which would otherwise be re-engineering or restructuring. Reverse engineering can be performed from any stage of the product cycle, not necessarily from the functional end product.
There are two components in reverse engineering: redocumentation and design recovery. Redocumentation is the creation of new representation of the computer code so that it is easier to understand. Meanwhile, design recovery is the use of deduction or reasoning from general knowledge or personal experience of the product to understand the product's functionality fully. It can also be seen as "going backwards through the development cycle". In this model, the output of the implementation phase (in source code form) is reverse-engineered back to the analysis phase, in an inversion of the traditional waterfall model. Another term for this technique is program comprehension. The Working Conference on Reverse Engineering (WCRE) has been held yearly to explore and expand the techniques of reverse engineering. Computer-aided software engineering (CASE) and automated code generation have contributed greatly in the field of reverse engineering.
Software anti-tamper technology like obfuscation is used to deter both reverse engineering and re-engineering of proprietary software and software-powered systems. In practice, two main types of reverse engineering emerge. In the first case, source code is already available for the software, but higher-level aspects of the program, which are perhaps poorly documented or documented but no longer valid, are discovered. In the second case, there is no source code available for the software, and any efforts towards discovering one possible source code for the software are regarded as reverse engineering. The second usage of the term is more familiar to most people. Reverse engineering of software can make use of the clean room design technique to avoid copyright infringement.
On a related note, black box testing in software engineering has a lot in common with reverse engineering. The tester usually has the API but has the goal of finding bugs and undocumented features by bashing the product from outside.
Other purposes of reverse engineering include security auditing, removal of copy protection ("cracking"), circumvention of access restrictions often present in consumer electronics, customization of embedded systems (such as engine management systems), in-house repairs or retrofits, enabling of additional features on low-cost "crippled" hardware (such as some graphics card chip-sets), or even mere satisfaction of curiosity.
Binary software
Binary reverse engineering is performed if source code for a software is unavailable. This process is sometimes termed reverse code engineering, or RCE. For example, decompilation of binaries for the Java platform can be accomplished by using Jad. One famous case of reverse engineering was the first non-IBM implementation of the PC BIOS, which launched the historic IBM PC compatible industry that has been the overwhelmingly-dominant computer hardware platform for many years. Reverse engineering of software is protected in the US by the fair use exception in copyright law. The Samba software, which allows systems that do not run Microsoft Windows systems to share files with systems that run it, is a classic example of software reverse engineering since the Samba project had to reverse-engineer unpublished information about how Windows file sharing worked so that non-Windows computers could emulate it. The Wine project does the same thing for the Windows API, and OpenOffice.org is one party doing that for the Microsoft Office file formats. The ReactOS project is even more ambitious in its goals by striving to provide binary (ABI and API) compatibility with the current Windows operating systems of the NT branch, which allows software and drivers written for Windows to run on a clean-room reverse-engineered free software (GPL) counterpart. WindowsSCOPE allows for reverse-engineering the full contents of a Windows system's live memory including a binary-level, graphical reverse engineering of all running processes.
Another classic, if not well-known, example is that in 1987 Bell Laboratories reverse-engineered the Mac OS System 4.1, originally running on the Apple Macintosh SE, so that it could run it on RISC machines of their own.
Binary software techniques
Reverse engineering of software can be accomplished by various methods.
The three main groups of software reverse engineering are
Analysis through observation of information exchange, most prevalent in protocol reverse engineering, which involves using bus analyzers and packet sniffers, such as for accessing a computer bus or computer network connection and revealing the traffic data thereon. Bus or network behavior can then be analyzed to produce a standalone implementation that mimics that behavior. That is especially useful for reverse engineering device drivers. Sometimes, reverse engineering on embedded systems is greatly assisted by tools deliberately introduced by the manufacturer, such as JTAG ports or other debugging means. In Microsoft Windows, low-level debuggers such as SoftICE are popular.
Disassembly using a disassembler, meaning the raw machine language of the program is read and understood in its own terms, only with the aid of machine-language mnemonics. It works on any computer program but can take quite some time, especially for those who are not used to machine code. The Interactive Disassembler is a particularly popular tool; a small disassembly sketch is shown after this list.
Decompilation using a decompiler, a process that tries, with varying results, to recreate the source code in some high-level language for a program only available in machine code or bytecode.
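As a small illustration of the disassembly approach in the list above (a minimal sketch; it assumes the third-party Capstone disassembly library is installed, and the byte string and load address are arbitrary x86-64 examples, not taken from any particular program):

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# A few raw x86-64 machine-code bytes (an arbitrary example, not from a real binary):
#   55            push rbp
#   48 89 e5      mov  rbp, rsp
#   5d            pop  rbp
#   c3            ret
code = b"\x55\x48\x89\xe5\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):   # 0x1000 is an arbitrary load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```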
Software classification
Software classification is the process of identifying similarities between different software binaries (such as two different versions of the same binary) used to detect code relations between software samples. The task was traditionally done manually for several reasons (such as patch analysis for vulnerability detection and copyright infringement), but it can now be done somewhat automatically for large numbers of samples.
This method is used mostly for long and thorough reverse engineering tasks (complete analysis of a complex algorithm or big piece of software). In general, statistical classification is considered to be a hard problem, which is also true for software classification, and so there are few solutions/tools that handle this task well.
Source code
A number of UML tools refer to the process of importing and analysing source code to generate UML diagrams as "reverse engineering". See List of UML tools.
Although UML is one approach in providing "reverse engineering" more recent advances in international standards activities have resulted in the development of the Knowledge Discovery Metamodel (KDM). The standard delivers an ontology for the intermediate (or abstracted) representation of programming language constructs and their interrelationships. An Object Management Group standard (on its way to becoming an ISO standard as well), KDM has started to take hold in industry with the development of tools and analysis environments that can deliver the extraction and analysis of source, binary, and byte code. For source code analysis, KDM's granular standards' architecture enables the extraction of software system flows (data, control, and call maps), architectures, and business layer knowledge (rules, terms, and process). The standard enables the use of a common data format (XMI) enabling the correlation of the various layers of system knowledge for either detailed analysis (such as root cause, impact) or derived analysis (such as business process extraction). Although efforts to represent language constructs can be never-ending because of the number of languages, the continuous evolution of software languages, and the development of new languages, the standard does allow for the use of extensions to support the broad language set as well as evolution. KDM is compatible with UML, BPMN, RDF, and other standards enabling migration into other environments and thus leverage system knowledge for efforts such as software system transformation and enterprise business layer analysis.
Protocols
Protocols are sets of rules that describe message formats and how messages are exchanged: the protocol state machine. Accordingly, the problem of protocol reverse-engineering can be partitioned into two subproblems: message format and state-machine reverse-engineering.
The message formats have traditionally been reverse-engineered by a tedious manual process, which involved analysis of how protocol implementations process messages, but recent research has proposed a number of automatic solutions. Typically, the automatic approaches either group observed messages into clusters using various clustering analyses, or they emulate the protocol implementation tracing the message processing.
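A minimal sketch of one simple message-format inference step (not the clustering algorithm of any published tool; the captured messages are hypothetical): byte offsets whose value never changes across observed messages are candidates for fixed header fields, while varying offsets suggest counters or payload.

```python
def infer_fixed_offsets(messages):
    """Return the byte offsets that hold the same value in every captured message.

    Such offsets are candidates for fixed protocol fields (magic numbers,
    version bytes); varying offsets are candidates for counters or payload.
    """
    length = min(len(m) for m in messages)
    return [i for i in range(length)
            if len({m[i] for m in messages}) == 1]

# Hypothetical captures: 2-byte magic, 1-byte version, 1-byte sequence number.
captures = [
    b"\xca\xfe\x01\x00",
    b"\xca\xfe\x01\x07",
    b"\xca\xfe\x01\x2a",
]
print(infer_fixed_offsets(captures))  # [0, 1, 2] -> offsets 0-2 look like a fixed header
```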
There has been less work on reverse-engineering of state-machines of protocols. In general, the protocol state-machines can be learned either through a process of offline learning, which passively observes communication and attempts to build the most general state-machine accepting all observed sequences of messages, or through online learning, which allows interactive generation of probing sequences of messages and listening to responses to those probing sequences. In general, offline learning of small state-machines is known to be NP-complete, but online learning can be done in polynomial time. An automatic offline approach has been demonstrated by Comparetti et al. and an online approach by Cho et al.
Other components of typical protocols, like encryption and hash functions, can be reverse-engineered automatically as well. Typically, the automatic approaches trace the execution of protocol implementations and try to detect buffers in memory holding unencrypted packets.
Integrated circuits/smart cards
Reverse engineering is an invasive and destructive form of analyzing a smart card. The attacker uses chemicals to etch away layer after layer of the smart card and takes pictures with a scanning electron microscope (SEM). That technique can reveal the complete hardware and software part of the smart card. The major problem for the attacker is to bring everything into the right order to find out how everything works. The makers of the card try to hide keys and operations by mixing up memory positions, such as by bus scrambling.
In some cases, it is even possible to attach a probe to measure voltages while the smart card is still operational. The makers of the card employ sensors to detect and prevent that attack. That attack is not very common because it requires both a large investment in effort and special equipment that is generally available only to large chip manufacturers. Furthermore, the payoff from this attack is low since other security techniques are often used such as shadow accounts. It is still uncertain whether attacks against chip-and-PIN cards to replicate encryption data and then to crack PINs would provide a cost-effective attack on multifactor authentication.
Full reverse engineering proceeds in several major steps.
The first step after images have been taken with a SEM is stitching the images together, which is necessary because each layer cannot be captured by a single shot. A SEM needs to sweep across the area of the circuit and take several hundred images to cover the entire layer. Image stitching takes as input several hundred pictures and outputs a single properly-overlapped picture of the complete layer.
Next, the stitched layers need to be aligned because the sample, after etching, cannot be put into the exact same position relative to the SEM each time. Therefore, the stitched versions will not overlap in the correct fashion, as on the real circuit. Usually, three corresponding points are selected, and a transformation is applied on the basis of them.
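A minimal sketch of that alignment step is shown below; the coordinates are invented, and a real workflow would use many more correspondences and a robust estimator, but three point pairs are exactly enough to determine a 2-D affine transformation.

```python
import numpy as np

# Three hand-picked corresponding points in two stitched layers (invented coordinates).
src = np.array([[10.0, 12.0], [200.0, 15.0], [30.0, 180.0]])   # layer A
dst = np.array([[12.5, 11.0], [202.0, 18.5], [31.0, 183.0]])   # same features in layer B

# Affine model: [x, y, 1] @ P = [x', y'], where P is a 3x2 matrix of the six unknowns.
A = np.hstack([src, np.ones((3, 1))])
P = np.linalg.solve(A, dst)              # exact solution for three non-collinear points

aligned = A @ P
print(np.allclose(aligned, dst))         # True: the anchor points now coincide
```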
To extract the circuit structure, the aligned, stitched images need to be segmented, which highlights the important circuitry and separates it from the uninteresting background and insulating materials.
Finally, the wires can be traced from one layer to the next, and the netlist of the circuit, which contains all of the circuit's information, can be reconstructed.
Military applications
Reverse engineering is often used by militaries to copy other nations' technologies, devices, or information that have been obtained by regular troops in the field or by intelligence operations. It was often used during the Second World War and the Cold War. Here are well-known examples from the Second World War and later:
Jerry can: British and American forces in WW2 noticed that the Germans had gasoline cans with an excellent design. They reverse-engineered copies of those cans, which became popularly known as "Jerry cans".
Nakajima G5N: In 1939, the U.S. Douglas Aircraft Company sold its DC-4E airliner prototype to Imperial Japanese Airways, which was secretly acting as a front for the Imperial Japanese Navy, which wanted a long-range strategic bomber but had been hindered by the Japanese aircraft industry's inexperience with heavy long-range aircraft. The DC-4E was transferred to the Nakajima Aircraft Company and dismantled for study; as a cover story, the Japanese press reported that it had crashed in Tokyo Bay. The wings, engines, and landing gear of the G5N were copied directly from the DC-4E.
Panzerschreck: The Germans captured an American bazooka during the Second World War and reverse engineered it to create the larger Panzerschreck.
Tupolev Tu-4: In 1944, three American B-29 bombers on missions over Japan were forced to land in the Soviet Union. The Soviets, who did not have a similar strategic bomber, decided to copy the B-29. Within three years, they had developed the Tu-4, a nearly-perfect copy.
SCR-584 radar: copied by the Soviet Union after the Second World War, it is known in a few modified versions, the СЦР-584 and Бинокль-Д.
V-2 rocket: Technical documents for the V-2 and related technologies were captured by the Western Allies at the end of the war. The Americans focused their reverse engineering efforts via Operation Paperclip, which led to the development of the PGM-11 Redstone rocket. The Soviets used captured German engineers to reproduce technical documents and plans and worked from captured hardware to make their clone of the rocket, the R-1. Thus began the postwar Soviet rocket program, which led to the R-7 and the beginning of the space race.
K-13/R-3S missile (NATO reporting name AA-2 Atoll), a Soviet reverse-engineered copy of the AIM-9 Sidewinder, was made possible after a Taiwanese (ROCAF) AIM-9B hit a Chinese PLA MiG-17 without exploding in September 1958. The missile became lodged within the airframe, and the pilot returned to base with what Soviet scientists would describe as a university course in missile development.
Toophan missile: In May 1975, negotiations between Iran and Hughes Missile Systems on co-production of the BGM-71 TOW and Maverick missiles stalled over disagreements in the pricing structure, the subsequent 1979 revolution ending all plans for such co-production. Iran was later successful in reverse-engineering the missile and now produces its own copy, the Toophan.
China has reverse engineered many examples of Western and Russian hardware, from fighter aircraft to missiles and HMMWV cars, such as the MiG-15, MiG-17, MiG-19 and MiG-21 (which became the J-2, J-5, J-6 and J-7) and the Su-33 (which became the J-15).
During the Second World War, Polish and British cryptographers studied captured German "Enigma" message encryption machines for weaknesses. Their operation was then simulated on electromechanical devices, "bombes", which tried all the possible scrambler settings of the "Enigma" machines and so helped break the coded messages that had been sent by the Germans.
Also during the Second World War, British scientists analyzed and defeated a series of increasingly-sophisticated radio navigation systems used by the Luftwaffe to perform guided bombing missions at night. The British countermeasures to the system were so effective that in some cases, German aircraft were led by signals to land at RAF bases since they believed that they had returned to German territory.
Gene networks
Reverse engineering concepts have been applied to biology as well, specifically to the task of understanding the structure and function of gene regulatory networks. They regulate almost every aspect of biological behavior and allow cells to carry out physiological processes and responses to perturbations. Understanding the structure and the dynamic behavior of gene networks is therefore one of the paramount challenges of systems biology, with immediate practical repercussions in several applications that are beyond basic research.
There are several methods for reverse engineering gene regulatory networks by using molecular biology and data science methods. They have been generally divided into six classes:
Coexpression methods are based on the notion that if two genes exhibit a similar expression profile, they may be related, although no causation can be simply inferred from coexpression (a simple measure of coexpression is illustrated in the sketch after this list).
Sequence motif methods analyze gene promoters to find specific transcription factor binding domains. If a transcription factor is predicted to bind a promoter of a specific gene, a regulatory connection can be hypothesized.
Chromatin ImmunoPrecipitation (ChIP) methods investigate the genome-wide profile of DNA binding of chosen transcription factors to infer their downstream gene networks.
Orthology methods transfer gene network knowledge from one species to another.
Literature methods implement text mining and manual research to identify putative or experimentally-proven gene network connections.
Transcriptional complexes methods leverage information on protein-protein interactions between transcription factors, thus extending the concept of gene networks to include transcriptional regulatory complexes.
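As a toy illustration of the coexpression idea referenced above (the expression values are invented), the similarity of two genes' expression profiles across the same samples can be scored with a Pearson correlation.

```python
import numpy as np

# Made-up expression levels of two genes measured across the same six samples.
gene_a = np.array([2.1, 3.8, 1.2, 5.5, 4.9, 0.8])
gene_b = np.array([2.0, 4.1, 1.0, 5.2, 5.1, 1.1])

r = np.corrcoef(gene_a, gene_b)[0, 1]
print(round(r, 3))   # close to 1: the genes are co-expressed (correlation, not causation)
```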
Often, gene network reliability is tested by genetic perturbation experiments followed by dynamic modelling, based on the principle that removing one network node has predictable effects on the functioning of the remaining nodes of the network.
Applications of the reverse engineering of gene networks range from understanding mechanisms of plant physiology to the highlighting of new targets for anticancer therapy.
Overlap with patent law
Reverse engineering applies primarily to gaining understanding of a process or artifact in which the manner of its construction, use, or internal processes has not been made clear by its creator.
Patented items do not of themselves have to be reverse-engineered to be studied, for the essence of a patent is that inventors provide a detailed public disclosure themselves, and in return receive legal protection of the invention that is involved. However, an item produced under one or more patents could also include other technology that is not patented and not disclosed. Indeed, one common motivation of reverse engineering is to determine whether a competitor's product contains patent infringement or copyright infringement.
Legality
United States
In the United States, even if an artifact or process is protected by trade secrets, reverse-engineering the artifact or process is often lawful if it has been legitimately obtained.
Reverse engineering of computer software often falls under contract law as a breach of contract, as well as under any other relevant laws. That is because most end-user license agreements specifically prohibit it, and US courts have ruled that if such terms are present, they override the copyright law that expressly permits it (see Bowers v. Baystate Technologies). According to Section 103(f) of the Digital Millennium Copyright Act (17 U.S.C. § 1201 (f)), a person in legal possession of a program may reverse-engineer and circumvent its protection if that is necessary to achieve "interoperability", a term that broadly covers other devices and programs that can interact with it, make use of it, and use and transfer data to and from it in useful ways. A limited exemption exists that allows the knowledge thus gained to be shared and used for interoperability purposes.
European Union
EU Directive 2009/24 on the legal protection of computer programs, which superseded an earlier (1991) directive, governs reverse engineering in the European Union.
See also
Antikythera mechanism
Backward induction
Benchmarking
Bus analyzer
Chonda
Clone (computing)
Clean room design
CMM
Code morphing
Connectix Virtual Game Station
Counterfeiting
Cryptanalysis
Decompile
Deformulation
Digital Millennium Copyright Act (DMCA)
Disassembler
Dongle
Forensic engineering
Industrial CT scanning
Interactive Disassembler
Knowledge Discovery Metamodel
Laser scanner
List of production topics
Listeroid Engines
Logic analyzer
Paycheck
Repurposing
Reverse architecture
Round-trip engineering
Retrodiction
Sega v. Accolade
Software archaeology
Software cracking
Structured light digitizer
Value engineering
AI-assisted reverse engineering
Notes
References
Computer security
Espionage
Patent law
Industrial engineering
Technical intelligence
Technological races
NP-complete problems | 0.76578 | 0.997633 | 0.763968 |
Time-translation symmetry | Time-translation symmetry or temporal translation symmetry (TTS) is a mathematical transformation in physics that moves the times of events through a common interval. Time-translation symmetry is the law that the laws of physics are unchanged (i.e. invariant) under such a transformation. Time-translation symmetry is a rigorous way to formulate the idea that the laws of physics are the same throughout history. Time-translation symmetry is closely connected, via Noether's theorem, to conservation of energy. In mathematics, the set of all time translations on a given system form a Lie group.
There are many symmetries in nature besides time translation, such as spatial translation or rotational symmetries. These symmetries can be broken and explain diverse phenomena such as crystals, superconductivity, and the Higgs mechanism. However, it was thought until very recently that time-translation symmetry could not be broken. Time crystals, a state of matter first observed in 2017, break time-translation symmetry.
Overview
Symmetries are of prime importance in physics and are closely related to the hypothesis that certain physical quantities are only relative and unobservable. Symmetries apply to the equations that govern the physical laws (e.g. to a Hamiltonian or Lagrangian) rather than the initial conditions, values or magnitudes of the equations themselves and state that the laws remain unchanged under a transformation. If a symmetry is preserved under a transformation it is said to be invariant. Symmetries in nature lead directly to conservation laws, something which is precisely formulated by Noether's theorem.
Newtonian mechanics
To formally describe time-translation symmetry we say the equations, or laws, that describe a system at times t and t + τ are the same for any value of t and τ.
For example, considering Newton's equation:
m d²x/dt² = −dV(x)/dx
One finds for its solutions the combination:
E = ½ m (dx/dt)² + V(x)
does not depend on the variable t. Of course, this quantity describes the total energy whose conservation is due to the time-translation invariance of the equation of motion. By studying the composition of symmetry transformations, e.g. of geometric objects, one reaches the conclusion that they form a group and, more specifically, a Lie transformation group if one considers continuous, finite symmetry transformations. Different symmetries form different groups with different geometries. Time-independent Hamiltonian systems form a group of time translations that is described by the non-compact, abelian Lie group ℝ (the real numbers under addition). TTS is therefore a dynamical or Hamiltonian dependent symmetry rather than a kinematical symmetry which would be the same for the entire set of Hamiltonians at issue. Other examples can be seen in the study of time evolution equations of classical and quantum physics.
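As a minimal numerical illustration of the conserved combination above (the mass, spring constant and step size are arbitrary example values), the sketch below integrates the harmonic-oscillator case of Newton's equation with a velocity-Verlet scheme and checks that the total energy stays essentially constant.

```python
import numpy as np

# Harmonic potential V(x) = 0.5*k*x**2, so m*x'' = -k*x has no explicit time dependence
# and E = 0.5*m*v**2 + V(x) should be (numerically almost) constant along the motion.
m, k, dt, steps = 1.0, 4.0, 1e-3, 20000
x, v = 1.0, 0.0

def accel(x):
    return -k * x / m          # acceleration: (-dV/dx) / m

energies = []
for _ in range(steps):
    a = accel(x)               # velocity-Verlet step (good long-term energy behaviour)
    x += v * dt + 0.5 * a * dt**2
    a_new = accel(x)
    v += 0.5 * (a + a_new) * dt
    energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

print(f"energy drift: {max(energies) - min(energies):.2e}")   # tiny, of order 1e-6
```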
Many differential equations describing time evolution equations are expressions of invariants associated to some Lie group and the theory of these groups provides a unifying viewpoint for the study of all special functions and all their properties. In fact, Sophus Lie invented the theory of Lie groups when studying the symmetries of differential equations. The integration of a (partial) differential equation by the method of separation of variables or by Lie algebraic methods is intimately connected with the existence of symmetries. For example, the exact solubility of the Schrödinger equation in quantum mechanics can be traced back to the underlying invariances. In the latter case, the investigation of symmetries allows for an interpretation of the degeneracies, where different configurations have the same energy, which generally occur in the energy spectrum of quantum systems. Continuous symmetries in physics are often formulated in terms of infinitesimal rather than finite transformations, i.e. one considers the Lie algebra rather than the Lie group of transformations.
Quantum mechanics
The invariance of a Hamiltonian of an isolated system under time translation implies its energy does not change with the passage of time. Conservation of energy implies, according to the Heisenberg equations of motion, that
dH/dt = 0
or:
[H, T(δ)] = 0
where T(δ) = exp(−iHδ/ħ) is the time-translation operator, which implies invariance of the Hamiltonian under the time-translation operation and leads to the conservation of energy.
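The commutation relation above can be checked numerically; in the sketch below a random Hermitian matrix stands in for the Hamiltonian, units with ħ = 1 are assumed, and SciPy's matrix exponential builds the time-translation operator.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                       # random Hermitian "Hamiltonian" (hbar = 1 assumed)

delta = 0.7
T = expm(-1j * H * delta)                # time-translation operator T(delta) = exp(-i H delta)
print(np.linalg.norm(T @ H - H @ T))     # ~1e-14: [H, T(delta)] = 0

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
psi_t = T @ psi0                         # evolved state
E0 = np.real(psi0.conj() @ H @ psi0)
Et = np.real(psi_t.conj() @ H @ psi_t)
print(E0, Et)                            # equal: the energy expectation value is conserved
```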
Nonlinear systems
In many nonlinear field theories like general relativity or Yang–Mills theories, the basic field equations are highly nonlinear and exact solutions are only known for ‘sufficiently symmetric’ distributions of matter (e.g. rotationally or axially symmetric configurations). Time-translation symmetry is guaranteed only in spacetimes where the metric is static: that is, where there is a coordinate system in which the metric coefficients contain no time variable. Many general relativity systems are not static in any frame of reference so no conserved energy can be defined.
Time-translation symmetry breaking (TTSB)
Time crystals, a state of matter first observed in 2017, break discrete time-translation symmetry.
See also
Absolute time and space
Mach's principle
Spacetime
Time reversal symmetry
References
External links
The Feynman Lectures on Physics – Time Translation
Concepts in physics
Conservation laws
Energy (physics)
Laws of thermodynamics
Quantum field theory
Spacetime
Symmetry
Time in physics
Theory of relativity
Thermodynamics | 0.786926 | 0.970794 | 0.763944 |
SAMSON | SAMSON (Software for Adaptive Modeling and Simulation Of Nanosystems) is a computer software platform for molecular design being developed by OneAngstrom and previously by the NANO-D group at the French Institute for Research in Computer Science and Automation (INRIA).
SAMSON has a modular architecture that makes it suitable for different domains of nanoscience, including material science, life science, and drug design.
SAMSON Elements
SAMSON Elements are modules for SAMSON, developed with the SAMSON software development kit (SDK). SAMSON Elements help users perform tasks in SAMSON, including building new models, performing calculations, running interactive or offline simulations, and visualizing and interpreting results.
SAMSON Elements may contain different class types, including for example:
Apps – generic classes with a graphical user interface that extend the functions of SAMSON
Editors – classes that receive user interaction events to provide editing functions (e.g., model generation, structure deformation, etc.)
Models – classes that describe properties of nanosystems (see below)
Parsers – classes that may parse files to add content to SAMSON's data graph (see below)
SAMSON Elements expose their functions to SAMSON and other Elements through an introspection mechanism, and may thus be integrated and pipelined.
Modeling and simulation
SAMSON represents nanosystems using five categories of models:
Structural models – describe geometry and topology
Visual models – provide graphical representations
Dynamical models – describe dynamical degrees of freedom
Interaction models – describe energies and forces
Property models – describe traits that do not enter in the first four model categories
Simulators (potentially interactive ones) are used to build physically-based models, and predict properties.
Data graph
All models and simulators are integrated into a hierarchical, layered structure that forms the SAMSON data graph. SAMSON Elements interact with each other and with the data graph to perform modeling and simulation tasks. A signals and slots mechanism allows data graph nodes to send events when they are updated, which makes it possible to develop, e.g., adaptive simulation algorithms.
Node specification language
SAMSON has a node specification language (NSL) that users may employ to select data graph nodes based on their properties. Example NSL expressions include:
Hydrogen – select all hydrogens (short version: H)
atom.chainID > 2 – select all atoms with a chain ID strictly larger than 2 (short version: a.ci > 2)
Carbon in node.selected – select all carbons in the current selection (short version: C in n.s)
bond.order > 1.5 – select all bonds with order strictly larger than 1.5 (short version: b.o > 1.5)
node.type backbone – select all backbone nodes (short version: n.t bb)
O in node.type sidechain – select all oxygens in sidechain nodes (short version: O in n.t sc)
"CA" within 5A of S – select all nodes named CA that are within 5 angstrom of any sulfur atom (short version: "CA" w 5A of S)
node.type residue beyond 5A of node.selected – select all residue nodes beyond 5 angstrom of the current selection (short version: n.t r b 5A of n.s)
residue.secondaryStructure helix – select residue nodes in alpha helices (short version: r.ss h)
node.type sidechain having S – select sidechain nodes that have at least one sulfur atom (short version: n.t sc h S)
H linking O – select all hydrogens bonded to oxygen atoms (short version: H l O)
C or H – select atoms that are carbons or hydrogens
Features
SAMSON is developed in C++ and implements many features to ease developing SAMSON Elements, including:
Managed memory
Signals and slots
Serialization
Multilevel undo-redo
Introspection
Referencing
Unit system
Functors and predicate logic
SAMSON Element source code generators
SAMSON Connect
SAMSON, SAMSON Elements and the SAMSON Software Development Kit are distributed via the SAMSON Connect website. The site acts as a repository for the SAMSON Elements being uploaded by developers, and users of SAMSON choose and add Elements from SAMSON Connect.
See also
Comparison of software for molecular mechanics modeling
Gabedit
Jmol
Molden
Molecular design software
Molekel
PyMol
RasMol
UCSF Chimera
Visual Molecular Dynamics (VMD)
References
Computational chemistry software
Nanotechnology
Simulation software | 0.764326 | 0.999487 | 0.763934 |
Coefficient of restitution | In physics, the coefficient of restitution (COR, also denoted by e) can be thought of as a measure of the elasticity of a collision between two bodies. It is a dimensionless parameter defined as the ratio of the relative velocity of separation after a two-body collision to the relative velocity of approach before collision. In most real-world collisions, the value of e lies somewhere between 0 and 1, where 1 represents a perfectly elastic collision (in which the objects rebound with no loss of speed but in opposite directions) and 0 a perfectly inelastic collision (in which the objects do not rebound at all, and end up touching). The basic equation, sometimes known as Newton's restitution equation, was developed by Sir Isaac Newton in 1687.
Introduction
As a property of paired objects
The COR is a property of a pair of objects in a collision, not a single object. If a given object collides with two different objects, each collision has its own COR. When a single object is described as having a given coefficient of restitution, as if it were an intrinsic property without reference to a second object, some assumptions have been made – for example that the collision is with another identical object, or with a perfectly rigid wall.
Treated as a constant
In a basic analysis of collisions, e is generally treated as a dimensionless constant, independent of the mass and relative velocities of the two objects, with the collision being treated as effectively instantaneous. An example often used for teaching is the collision of two idealised billiard balls. Real world interactions may be more complicated, for example where the internal structure of the objects needs to be taken into account, or where there are more complex effects happening during the time between initial contact and final separation.
Range of values for e
e is usually a positive, real number between 0 and 1:
e = 0: This is a perfectly inelastic collision in which the objects do not rebound at all and end up touching.
0 < e < 1: This is a real-world inelastic collision, in which some kinetic energy is dissipated. The objects rebound with a lower separation speed than the speed of approach.
e = 1: This is a perfectly elastic collision, in which no kinetic energy is dissipated. The objects rebound with the same relative speed with which they approached.
Values outside that range are in principle possible, though in practice they would not normally be analysed with a basic analysis that takes e to be a constant:
e < 0: A COR less than zero implies a collision in which the objects pass through one another, for example a bullet passing through a target.
e > 1: This implies a superelastic collision in which the objects rebound with a greater relative speed than the speed of approach, due to some additional stored energy being released during the collision.
Equations
In the case of a one-dimensional collision involving two idealised objects, A and B, the coefficient of restitution is given by:
e = (v_B − v_A) / (u_A − u_B)
where:
v_A is the final velocity of object A after impact
v_B is the final velocity of object B after impact
u_A is the initial velocity of object A before impact
u_B is the initial velocity of object B before impact
This is sometimes known as the restitution equation. For a perfectly elastic collision, e = 1 and the objects rebound with the same relative speed with which they approached. For a perfectly inelastic collision e = 0 and the objects do not rebound at all.
For an object bouncing off a stationary target, e is defined as the ratio of the object's rebound speed after the impact to that prior to impact:
e = v / u
where
u is the speed of the object before impact
v is the speed of the rebounding object (in the opposite direction) after impact
In a case where frictional forces can be neglected and the object is dropped from rest onto a horizontal surface, this is equivalent to:
e = √(h / H)
where
H is the drop height
h is the bounce height
The coefficient of restitution can be thought of as a measure of the extent to which energy is conserved when an object bounces off a surface. In the case of an object bouncing off a stationary target, the change in gravitational potential energy, Ep, during the course of the impact is essentially zero; thus, e is a comparison between the kinetic energy, Ek, of the object immediately before impact with that immediately after impact:
e = √(Ek immediately after impact / Ek immediately before impact)
In cases where frictional forces can be neglected (nearly every student laboratory on this subject), and the object is dropped from rest onto a horizontal surface, the above is equivalent to a comparison between the Ep of the object at the drop height with that at the bounce height. In this case, the change in Ek is zero (the object is essentially at rest during the course of the impact and is also at rest at the apex of the bounce); thus:
e = √(Ep at bounce height / Ep at drop height) = √(h / H)
Speeds after impact
Although e does not vary with the masses of the colliding objects, their final velocities are mass-dependent due to conservation of momentum:
v_A = (m_A u_A + m_B u_B + m_B e (u_B − u_A)) / (m_A + m_B)
and
v_B = (m_A u_A + m_B u_B + m_A e (u_A − u_B)) / (m_A + m_B)
where
v_A is the velocity of A after impact
v_B is the velocity of B after impact
u_A is the velocity of A before impact
u_B is the velocity of B before impact
m_A is the mass of A
m_B is the mass of B
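These formulas are straightforward to evaluate; in the sketch below the masses, initial speeds and value of e are made-up example numbers, and the printout confirms that the resulting velocities also conserve momentum.

```python
def post_collision_velocities(mA, mB, uA, uB, e):
    """One-dimensional collision: final velocities from momentum conservation
    and the restitution equation e = (vB - vA) / (uA - uB)."""
    vA = (mA * uA + mB * uB + mB * e * (uB - uA)) / (mA + mB)
    vB = (mA * uA + mB * uB + mA * e * (uA - uB)) / (mA + mB)
    return vA, vB

# Example numbers: a 2 kg body at +3 m/s hits a 1 kg body at rest, with e = 0.5.
vA, vB = post_collision_velocities(2.0, 1.0, 3.0, 0.0, 0.5)
print(vA, vB)                  # 1.5 m/s and 3.0 m/s
print(2.0 * vA + 1.0 * vB)     # 6.0 kg*m/s, the same momentum as before the impact
```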
Practical issues
Measurement
In practical situations, the coefficient of restitution between two bodies may have to be determined experimentally, for example using the Leeb rebound hardness test. This uses a tip of tungsten carbide, one of the hardest substances available, dropped onto test samples from a specific height.
A comprehensive study of coefficients of restitution in dependence on material properties (elastic moduli, rheology), direction of impact, coefficient of friction and adhesive properties of impacting bodies can be found in Willert (2020).
Application in sports
Thin-faced golf club drivers utilize a "trampoline effect" that creates drives of a greater distance as a result of the flexing and subsequent release of stored energy which imparts greater impulse to the ball. The USGA (America's governing golfing body) tests drivers for COR and has placed the upper limit at 0.83. COR is a function of clubhead speed and diminishes as clubhead speed increases. In the report COR ranges from 0.845 for 90 mph to as low as 0.797 at 130 mph. The above-mentioned "trampoline effect" shows this since it reduces the rate of stress of the collision by increasing the time of the collision. According to one article (addressing COR in tennis racquets), "[f]or the Benchmark Conditions, the coefficient of restitution used is 0.85 for all racquets, eliminating the variables of string tension and frame stiffness which could add or subtract from the coefficient of restitution."
The International Table Tennis Federation specifies that the ball shall bounce up 24–26 cm when dropped from a height of 30.5 cm on to a standard steel block, implying a COR of 0.887 to 0.923.
The International Basketball Federation (FIBA) rules require that the ball rebound to a height of between 1035 and 1085 mm when dropped from a height of 1800 mm, implying a COR between 0.758 and 0.776.
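The CORs implied by the bounce-test specifications above follow directly from e being the square root of the ratio of bounce height to drop height; a two-line check using only the heights quoted in this section:

```python
from math import sqrt

# e = sqrt(bounce height / drop height), heights taken from the specifications above.
print(sqrt(24 / 30.5), sqrt(26 / 30.5))        # table tennis: about 0.887 to 0.923
print(sqrt(1035 / 1800), sqrt(1085 / 1800))    # basketball (FIBA): about 0.758 to 0.776
```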
See also
Bouncing ball
Collision
Damping capacity
Resilience
References
Works cited
External links
Wolfram Article on COR
Chris Hecker's physics introduction
"Getting an extra bounce" by Chelsea Wald
FIFA Quality Concepts for Footballs – Uniform Rebound
Mechanics
Classical mechanics
Ratios
de:Stoß (Physik)#Realer Stoß | 0.767391 | 0.995488 | 0.763929 |
Integrated assessment modelling | Integrated assessment modelling (IAM) or integrated modelling (IM) is a term used for a type of scientific modelling that tries to link main features of society and economy with the biosphere and atmosphere into one modelling framework. The goal of integrated assessment modelling is to accommodate informed policy-making, usually in the context of climate change though also in other areas of human and social development. While the detail and extent of integrated disciplines varies strongly per model, all climatic integrated assessment modelling includes economic processes as well as processes producing greenhouse gases. Other integrated assessment models also integrate other aspects of human development such as education, health, infrastructure, and governance.
These models are integrated because they span multiple academic disciplines, including economics and climate science and for more comprehensive models also energy systems, land-use change, agriculture, infrastructure, conflict, governance, technology, education, and health. The word assessment comes from the use of these models to provide information for answering policy questions. To quantify these integrated assessment studies, numerical models are used. Integrated assessment modelling does not provide predictions for the future but rather estimates what possible scenarios look like.
There are different types of integrated assessment models. One classification distinguishes between firstly models that quantify future developmental pathways or scenarios and provide detailed, sectoral information on the complex processes modelled. Here they are called process-based models. Secondly, there are models that aggregate the costs of climate change and climate change mitigation to find estimates of the total costs of climate change. A second classification makes a distinction between models that extrapolate verified patterns (via econometrics equations), or models that determine (globally) optimal economic solutions from the perspective of a social planner, assuming (partial) equilibrium of the economy.
Process-based models
Intergovernmental Panel on Climate Change (IPCC) has relied on process-based integrated assessment models to quantify mitigation scenarios. They have been used to explore different pathways for staying within climate policy targets such as the 1.5 °C target agreed upon in the Paris Agreement. Moreover, these models have underpinned research including energy policy assessment and simulate the Shared socioeconomic pathways. Notable modelling frameworks include IMAGE, MESSAGEix, AIM/GCE, GCAM, REMIND-MAgPIE, and WITCH-GLOBIOM. While these scenarios are highly policy-relevant, interpretation of the scenarios should be done with care.
Non-equilibrium models include those based on econometric equations and evolutionary economics (such as E3ME), and agent-based models (such as the agent-based DSK-model). These models typically do not assume rational and representative agents, nor market equilibrium in the long term.
Aggregate cost-benefit models
Cost-benefit integrated assessment models are the main tools for calculating the social cost of carbon, or the marginal social cost of emitting one more tonne of carbon (as carbon dioxide) into the atmosphere at any point in time. For instance, the DICE, PAGE, and FUND models have been used by the US Interagency Working Group to calculate the social cost of carbon and its results have been used for regulatory impact analysis.
This type of modelling is carried out to find the total cost of climate impacts, which are generally considered a negative externality not captured by conventional markets. In order to correct such a market failure, for instance by using a carbon tax, the cost of emissions is required. However, the estimates of the social cost of carbon are highly uncertain and will remain so for the foreseeable future. It has been argued that "IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory, and can fool policy-makers into thinking that the forecasts the models generate have some kind of scientific legitimacy". Still, it has been argued that attempting to calculate the social cost of carbon is useful to gain insight into the effect of certain processes on climate impacts, as well as to better understand one of the determinants of international cooperation in the governance of climate agreements.
Integrated assessment models have not been used solely to assess environmental or climate change-related fields. They have also been used to analyze patterns of conflict, the Sustainable Development Goals, trends across issue area in Africa, and food security.
Shortcomings
All numerical models have shortcomings. Integrated Assessment Models for climate change, in particular, have been severely criticized for problematic assumptions that led to greatly overestimating the cost/benefit ratio for mitigating climate change while relying on economic models inappropriate to the problem. In 2021, the integrated assessment modeling community examined gaps in what was termed the "possibility space" and how these might best be consolidated and addressed. In an October 2021 working paper, Nicholas Stern argues that existing IAMs are inherently unable to capture the economic realities of the climate crisis under its current state of rapid progress.
Models undertaking optimisation methodologies have received numerous different critiques; a prominent one, however, draws on the ideas of dynamical systems theory, which understands systems as changing with no deterministic pathway or end-state.
This implies a very large, or even infinite, number of possible states of the system in the future with aspects and dynamics that cannot be known to observers of the current state of the system.
This type of uncertainty around future states of an evolutionary system has been referred to as ‘radical’ or ‘fundamental’ uncertainty.
This has led some researchers to call for more work on the broader array of possible futures and calling for modelling research on those alternative scenarios that have yet to receive substantial attention, for example post-growth scenarios.
Notes
References
External links
Integrated Assessment Society
Integrated Assessment Journal
Climate change policy
Environmental science
Environmental social science
Scientific modelling
Management cybernetics | 0.783222 | 0.975352 | 0.763917 |
Centimetre–gram–second system of units | The centimetre–gram–second system of units (CGS or cgs) is a variant of the metric system based on the centimetre as the unit of length, the gram as the unit of mass, and the second as the unit of time. All CGS mechanical units are unambiguously derived from these three base units, but there are several different ways in which the CGS system was extended to cover electromagnetism.
The CGS system has been largely supplanted by the MKS system based on the metre, kilogram, and second, which was in turn extended and replaced by the International System of Units (SI). In many fields of science and engineering, SI is the only system of units in use, but CGS is still prevalent in certain subfields.
In measurements of purely mechanical systems (involving units of length, mass, force, energy, pressure, and so on), the differences between CGS and SI are straightforward: the unit-conversion factors are all powers of 10, as 100 cm = 1 m and 1000 g = 1 kg. For example, the CGS unit of force is the dyne, which is defined as 1 g⋅cm/s², so the SI unit of force, the newton (1 kg⋅m/s²), is equal to 100,000 dynes.
On the other hand, in measurements of electromagnetic phenomena (involving units of charge, electric and magnetic fields, voltage, and so on), converting between CGS and SI is less straightforward. Formulas for physical laws of electromagnetism (such as Maxwell's equations) take a form that depends on which system of units is being used, because the electromagnetic quantities are defined differently in SI and in CGS. Furthermore, within CGS, there are several plausible ways to define electromagnetic quantities, leading to different "sub-systems", including Gaussian units, "ESU", "EMU", and Heaviside–Lorentz units. Among these choices, Gaussian units are the most common today, and "CGS units" is often intended to refer to CGS-Gaussian units.
History
The CGS system goes back to a proposal in 1832 by the German mathematician Carl Friedrich Gauss to base a system of absolute units on the three fundamental units of length, mass and time. Gauss chose the units of millimetre, milligram and second. In 1873, a committee of the British Association for the Advancement of Science, including physicists James Clerk Maxwell and William Thomson recommended the general adoption of centimetre, gram and second as fundamental units, and to express all derived electromagnetic units in these fundamental units, using the prefix "C.G.S. unit of ...".
The sizes of many CGS units turned out to be inconvenient for practical purposes. For example, many everyday objects are hundreds or thousands of centimetres long, such as humans, rooms and buildings. Thus the CGS system never gained wide use outside the field of science. Starting in the 1880s, and more significantly by the mid-20th century, CGS was gradually superseded internationally for scientific purposes by the MKS (metre–kilogram–second) system, which in turn developed into the modern SI standard.
Since the international adoption of the MKS standard in the 1940s and the SI standard in the 1960s, the technical use of CGS units has gradually declined worldwide. CGS units have been deprecated in favor of SI units by NIST, as well as organizations such as the American Physical Society and the International Astronomical Union. SI units are predominantly used in engineering applications and physics education, while Gaussian CGS units are still commonly used in theoretical physics, describing microscopic systems, relativistic electrodynamics, and astrophysics.
The units gram and centimetre remain useful as noncoherent units within the SI system, as with any other prefixed SI units.
Definition of CGS units in mechanics
In mechanics, the quantities in the CGS and SI systems are defined identically. The two systems differ only in the scale of the three base units (centimetre versus metre and gram versus kilogram, respectively), with the third unit (second) being the same in both systems.
There is a direct correspondence between the base units of mechanics in CGS and SI. Since the formulae expressing the laws of mechanics are the same in both systems and since both systems are coherent, the definitions of all coherent derived units in terms of the base units are the same in both systems, and there is an unambiguous relationship between derived units:
v = dx/dt (definition of velocity)
F = m d²x/dt² (Newton's second law of motion)
E = ∫ F ⋅ dx (energy defined in terms of work)
p = F/A (pressure defined as force per unit area)
η = τ/(dv/dy) (dynamic viscosity defined as shear stress per unit velocity gradient).
Thus, for example, the CGS unit of pressure, barye, is related to the CGS base units of length, mass, and time in the same way as the SI unit of pressure, pascal, is related to the SI base units of length, mass, and time:
1 unit of pressure = 1 unit of force / (1 unit of length)2 = 1 unit of mass / (1 unit of length × (1 unit of time)2)
1 Ba = 1 g/(cm⋅s2)
1 Pa = 1 kg/(m⋅s2).
Expressing a CGS derived unit in terms of the SI base units, or vice versa, requires combining the scale factors that relate the two systems:
1 Ba = 1 g/(cm⋅s2) = 10−3 kg / (10−2 m⋅s2) = 10−1 kg/(m⋅s2) = 10−1 Pa.
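The same bookkeeping can be written as a small sketch: the CGS-to-SI factor for any coherent mechanical unit follows from the gram-to-kilogram and centimetre-to-metre factors alone (the dimensional exponents below are the standard ones for force, energy and pressure).

```python
# Conversion factor from a coherent CGS mechanical unit to the corresponding SI unit,
# given the dimensional exponents (mass, length, time). Seconds are identical in both systems.
def cgs_to_si_factor(mass_exp, length_exp, time_exp):
    return (1e-3) ** mass_exp * (1e-2) ** length_exp   # g -> kg and cm -> m

print(cgs_to_si_factor(1, 1, -2))    # dyne  -> newton: 1e-5
print(cgs_to_si_factor(1, 2, -2))    # erg   -> joule:  1e-7
print(cgs_to_si_factor(1, -1, -2))   # barye -> pascal: 0.1
```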
Definitions and conversion factors of CGS units in mechanics
Derivation of CGS units in electromagnetism
CGS approach to electromagnetic units
The conversion factors relating electromagnetic units in the CGS and SI systems are made more complex by the differences in the formulas expressing physical laws of electromagnetism as assumed by each system of units, specifically in the nature of the constants that appear in these formulas. This illustrates the fundamental difference in the ways the two systems are built:
In SI, the unit of electric current, the ampere (A), was historically defined such that the magnetic force exerted by two infinitely long, thin, parallel wires 1 metre apart and carrying a current of 1 ampere is exactly 2 × 10−7 newtons per metre of length. This definition results in all SI electromagnetic units being numerically consistent (subject to factors of some integer powers of 10) with those of the CGS-EMU system described in further sections. The ampere is a base unit of the SI system, with the same status as the metre, kilogram, and second. Thus the relationship in the definition of the ampere with the metre and newton is disregarded, and the ampere is not treated as dimensionally equivalent to any combination of other base units. As a result, electromagnetic laws in SI require an additional constant of proportionality (see Vacuum permeability) to relate electromagnetic units to kinematic units. (This constant of proportionality is derivable directly from the above definition of the ampere.) All other electric and magnetic units are derived from these four base units using the most basic common definitions: for example, electric charge q is defined as current I multiplied by time t, resulting in the unit of electric charge, the coulomb (C), being defined as 1 C = 1 A⋅s.
The CGS system variant avoids introducing new base quantities and units, and instead defines all electromagnetic quantities by expressing the physical laws that relate electromagnetic phenomena to mechanics with only dimensionless constants, and hence all units for these quantities are directly derived from the centimetre, gram, and second.
In each of these systems the quantities called "charge" etc. may be a different quantity; they are distinguished here by a superscript. The corresponding quantities of each system are related through a proportionality constant.
Maxwell's equations can be written in each of these systems as:
Electrostatic units (ESU)
In the electrostatic units variant of the CGS system (CGS-ESU), charge is defined as the quantity that obeys a form of Coulomb's law without a multiplying constant (and current is then defined as charge per unit time):
F = q1q2/r²
The ESU unit of charge, franklin (Fr), also known as statcoulomb or esu charge, is therefore defined as follows: two equal point charges spaced 1 centimetre apart are said to be of 1 franklin each if the electric force between them is 1 dyne. Therefore, in CGS-ESU, a franklin is equal to a centimetre times square root of dyne:
1 Fr = 1 cm⋅dyn1/2 = 1 g1/2⋅cm3/2⋅s−1
The unit of current is defined as:
1 statampere = 1 Fr/s = 1 g1/2⋅cm3/2⋅s−2
In the CGS-ESU system, charge q therefore has the dimension M1/2L3/2T−1.
Other units in the CGS-ESU system include the statampere (1 statC/s) and statvolt (1 erg/statC).
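As a consistency check of the franklin definition above, the sketch below assumes the usual SI value of the Coulomb constant and the approximate SI equivalent of one statcoulomb, and confirms that two 1-franklin charges held 1 cm apart repel with a force of about one dyne.

```python
# Assumed constants: SI Coulomb constant and 1 statC expressed in coulombs.
COULOMB_CONSTANT = 8.9875517923e9    # N*m^2/C^2
q = 3.33564e-10                      # 1 franklin (statC) in coulombs, approximately
r = 0.01                             # 1 cm in metres

F = COULOMB_CONSTANT * q * q / r**2
print(F)                             # about 1.0e-5 N, i.e. roughly 1 dyne
```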
In CGS-ESU, all electric and magnetic quantities are dimensionally expressible in terms of length, mass, and time, and none has an independent dimension. Such a system of units of electromagnetism, in which the dimensions of all electric and magnetic quantities are expressible in terms of the mechanical dimensions of mass, length, and time, is traditionally called an 'absolute system'.
Unit symbols
All electromagnetic units in the CGS-ESU system that have not been given names of their own are named as the corresponding SI name with an attached prefix "stat" or with a separate abbreviation "esu", and similarly with the corresponding symbols.
Electromagnetic units (EMU)
In another variant of the CGS system, electromagnetic units (EMU), current is defined via the force existing between two thin, parallel, infinitely long wires carrying it, and charge is then defined as current multiplied by time. (This approach was eventually used to define the SI unit of ampere as well).
The EMU unit of current, biot (Bi), also known as abampere or emu current, is therefore defined as follows: the biot is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one centimetre apart in vacuum, would produce between these conductors a force equal to two dynes per centimetre of length.
Therefore, in electromagnetic CGS units, a biot is equal to a square root of dyne:
1 Bi = 1 dyn1/2 = 1 g1/2⋅cm1/2⋅s−1
The unit of charge in CGS EMU is:
1 abcoulomb = 1 Bi⋅s = 1 g1/2⋅cm1/2
Dimensionally in the CGS-EMU system, charge q is therefore equivalent to M1/2L1/2. Hence, neither charge nor current is an independent physical quantity in the CGS-EMU system.
EMU notation
All electromagnetic units in the CGS-EMU system that do not have proper names are denoted by a corresponding SI name with an attached prefix "ab" or with a separate abbreviation "emu".
Practical CGS units
The practical CGS system is a hybrid system that uses the volt and the ampere as the units of voltage and current respectively. Doing this avoids the inconveniently large and small electrical units that arise in the esu and emu systems. This system was at one time widely used by electrical engineers because the volt and ampere had been adopted as international standard units by the International Electrical Congress of 1881. As well as the volt and ampere, the farad (capacitance), ohm (resistance), coulomb (electric charge), and henry (inductance) are consequently also used in the practical system and are the same as the SI units. The magnetic units are those of the emu system.
The electrical units, other than the volt and ampere, are determined by the requirement that any equation involving only electrical and kinematical quantities that is valid in SI should also be valid in the system. For example, since electric field strength is voltage per unit length, its unit is the volt per centimetre, which is one hundred times the SI unit.
The system is electrically rationalized and magnetically unrationalized; i.e., and , but the above formula for is invalid. A closely related system is the International System of Electric and Magnetic Units, which has a different unit of mass so that the formula for ′ is invalid. The unit of mass was chosen to remove powers of ten from contexts in which they were considered to be objectionable (e.g., and ). Inevitably, the powers of ten reappeared in other contexts, but the effect was to make the familiar joule and watt the units of work and power respectively.
The ampere-turn system is constructed in a similar way by considering magnetomotive force and magnetic field strength to be electrical quantities and rationalizing the system by dividing the units of magnetic pole strength and magnetization by 4π. The units of the first two quantities are the ampere and the ampere per centimetre respectively. The unit of magnetic permeability is that of the emu system, and the magnetic constitutive equations are and . Magnetic reluctance is given a hybrid unit to ensure the validity of Ohm's law for magnetic circuits.
In all the practical systems ε0 = 8.8542 × 10−14 A⋅s/(V⋅cm), μ0 = 1 V⋅s/(A⋅cm), and c2 = 1/(4π × 10−9 ε0μ0).
Other variants
There were at various points in time about half a dozen systems of electromagnetic units in use, most based on the CGS system. These include the Gaussian units and the Heaviside–Lorentz units.
Electromagnetic units in various CGS systems
In this table, c = 29979245800 is the numeric value of the speed of light in vacuum when expressed in units of centimetres per second. The symbol "≘" is used instead of "=" as a reminder that the units are corresponding but not equal. For example, according to the capacitance row of the table, if a capacitor has a capacitance of 1 F in SI, then it has a capacitance of (10−9 c2) cm in ESU; but it is incorrect to replace "1 F" with "(10−9 c2) cm" within an equation or formula. (This warning is a special aspect of electromagnetism units. By contrast it is always correct to replace, e.g., "1 m" with "100 cm" within an equation or formula.)
Physical constants in CGS units
Advantages and disadvantages
Lack of unique unit names leads to potential confusion: "15 emu" may mean either 15 abvolts, or 15 emu units of electric dipole moment, or 15 emu units of magnetic susceptibility, sometimes (but not always) per gram, or per mole. With its system of uniquely named units, the SI removes any confusion in usage: 1 ampere is a fixed value of a specified quantity, and so are 1 henry, 1 ohm, and 1 volt.
In the CGS-Gaussian system, electric and magnetic fields have the same units, 4πε0 is replaced by 1, and the only dimensional constant appearing in the Maxwell equations is c, the speed of light. The Heaviside–Lorentz system has these properties as well (with ε0 equaling 1).
In SI, and other rationalized systems (for example, Heaviside–Lorentz), the unit of current was chosen such that electromagnetic equations concerning charged spheres contain 4π, those concerning coils of current and straight wires contain 2π, and those dealing with charged surfaces lack π entirely, which was the most convenient choice for applications in electrical engineering and relates directly to the geometric symmetry of the system being described by the equation.
Specialized unit systems are used to simplify formulas further than either SI or CGS do, by eliminating constants through a convention of normalizing quantities with respect to some system of natural units. For example, in particle physics a system is in use where every quantity is expressed by only one unit of energy, the electronvolt, with lengths, times, and so on all converted into units of energy by inserting factors of speed of light c and the reduced Planck constant ħ. This unit system is convenient for calculations in particle physics, but is impractical in other contexts.
See also
Outline of metrology and measurement
International System of Units
International System of Electrical and Magnetic Units
List of metric units
List of scientific units named after people
Metre–tonne–second system of units
United States customary units
Foot–pound–second system of units
References and notes
General literature
Metrology
Systems of units
Metric system
British Science Association | 0.766741 | 0.996287 | 0.763894 |
Counter-electromotive force | Counter-electromotive force (counter EMF, CEMF, back EMF) is the electromotive force (EMF) manifesting as a voltage that opposes the change in current which induced it. CEMF is the EMF caused by electromagnetic induction.
Details
For example, the voltage appearing across an inductor or coil is due to a change in current which causes a change in the magnetic field within the coil, and therefore the self-induced voltage. The polarity of the voltage at every moment opposes that of the change in applied voltage, to keep the current constant.
The term back electromotive force is also commonly used to refer to the voltage that occurs in electric motors where there is relative motion between the armature and the magnetic field produced by the motor's field coils or permanent magnet field, thus also acting as a generator while running as a motor. This effect is not due to the motor's inductance, which generates a voltage in opposition to a changing current via Faraday's law, but a separate phenomenon. That is, the back-EMF is also due to inductance and Faraday's law, but occurs even when the motor current is not changing, and arises from the geometric considerations of an armature spinning in a magnetic field.
This voltage is in series with and opposes the original applied voltage and is called "back-electromotive force" (by Lenz's law). With a lower overall voltage across the motor's internal resistance as the motor turns faster, the current flowing into the motor decreases. One practical application of this phenomenon is to indirectly measure motor speed and position, as the back-EMF is proportional to the rotational speed of the armature.
In motor control and robotics, back-EMF often refers most specifically to actually using the voltage generated by a spinning motor to infer the speed of the motor's rotation, for use in better controlling the motor in specific ways.
To observe the effect of back-EMF of a motor, one can perform this simple exercise: with an incandescent light on, cause a large motor such as a drill press, saw, air conditioner compressor, or vacuum cleaner to start. The light may dim briefly as the motor starts. When the armature is not turning (called locked rotor) there is no back-EMF and the motor's current draw is quite high. If the motor's starting current is high enough, it will pull the line voltage down enough to cause noticeable dimming of the light.
References
External links
Counter-electromotive-force in access control applications
Electromagnetism | 0.771242 | 0.990465 | 0.763888 |
Stellar engine | Stellar engines are a class of hypothetical megastructures which use the resources of a star to generate available work (also called exergy). For instance, they can use the energy of the star to produce mechanical, electrical or chemical work or they can use the impulse of the light emitted by the star to produce thrust, able to control the motion of a star system. The concept has been introduced by Bădescu and Cathcart. The variants which produce thrust may accelerate a star and anything orbiting it in a given direction. The creation of such a system would make its builders a type-II civilization on the Kardashev scale.
Classes
Three classes of stellar engines have been defined.
Class A (Shkadov thruster)
One of the simplest examples of a stellar engine is the Shkadov thruster (named after Dr. Leonid Shkadov, who first proposed it), or a class-A stellar engine. Such an engine is a stellar propulsion system, consisting of an enormous mirror/light sail—actually a massive type of solar statite large enough to classify as a megastructure—which would balance gravitational attraction towards and radiation pressure away from the star. Since the radiation pressure of the star would now be asymmetrical, i.e. more radiation being emitted in one direction as compared to another, the "excess" radiation pressure acts as net thrust, accelerating the star in the direction of the hovering statite. Such thrust and acceleration would be very slight, but such a system could be stable for millennia. Any planetary system attached to the star would be "dragged" along by its parent star. For a star such as the Sun, with luminosity 3.85 × 10²⁶ W and mass 1.99 × 10³⁰ kg, the total thrust produced by reflecting half of the solar output would be 1.28 × 10¹⁸ N. After a period of one million years this would yield an imparted speed of 20 m/s, with a displacement from the original position of 0.03 light-years. After one billion years, the speed would be 20 km/s and the displacement 34,000 light-years, a little over a third of the estimated width of the Milky Way galaxy.
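The figures quoted above can be reproduced with a few lines of arithmetic; the sketch below uses rounded values for the solar luminosity, solar mass, speed of light, year and light-year (assumptions of this example rather than values given by Shkadov), and interprets the reflection of half the output as doubling its momentum flux, so that the thrust is approximately L/c.

```python
# Rounded constants assumed for this back-of-the-envelope check.
L_sun = 3.85e26      # W
M_sun = 1.99e30      # kg
c = 2.998e8          # m/s
year = 3.156e7       # s
ly = 9.461e15        # m

thrust = L_sun / c   # half the output reflected -> momentum flux doubled -> about L/c
a = thrust / M_sun   # acceleration of the star (and its statite)

for t in (1e6 * year, 1e9 * year):
    v = a * t
    d = 0.5 * a * t**2
    print(f"{v:.3g} m/s, {d / ly:.3g} ly")
# roughly 20 m/s and 0.03 ly after one million years;
# roughly 2e4 m/s (20 km/s) and 3.4e4 ly after one billion years
```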
Class B
A class-B stellar engine consists of two concentric spheres around a star. The inner sphere (which may be assimilated with a Dyson shell) receives energy from the star and becomes hotter than the outer sphere. The difference of temperature between the two spheres drives thermal engines able to provide mechanical work.
Unlike the Shkadov thruster, a class-B stellar engine is not propulsive.
Class C
A class-C stellar engine, such as the Badescu–Cathcart engine, combines the two other classes, employing both the propulsive aspects of the Shkadov thruster and the energy generating aspects of a class-B engine. A higher temperature Dyson shell partially covered by a mirror combined with an outer sphere at a lower temperature would be one incarnation of such a system. The non-spherical mirror ensures conversion of light impulse into effective thrust (like a class-A stellar engine) while the difference of temperature may be used to convert star energy into mechanical work (like a class-B stellar engine). Notice that such a system suffers from the same stabilization problems as a non-propulsive shell, as would a Dyson swarm with a large statite mirror. A Dyson bubble variant is already a Shkadov thruster (provided that the arrangement of statite components is asymmetrical); adding energy extraction capability to the components seems an almost trivial extension.
Caplan thruster
Astronomer Matthew E. Caplan of Illinois State University has proposed a type of stellar engine that uses concentrated stellar energy (repurposing the mirror statites from class A) to excite certain regions of the outer surface of the star and create beams of solar wind for collection by a multi-Bussard ramjet assembly. The ramjets would produce directed plasma to stabilize its orbit and jets of oxygen-14 to push the star. Using rudimentary calculations that assume maximum efficiency, Caplan estimates that the Bussard engine would use 10^12 kg of solar material per second to produce a maximum acceleration of 10^−9 m/s^2, yielding a velocity of 200 km/s after 5 million years and a distance of 10 parsecs over 1 million years. While theoretically the Bussard engine would work for 100 million years, given the mass loss rate of the Sun, Caplan deems 10 million years to be sufficient for stellar collision avoidance. His proposal was commissioned by the German educational YouTube channel Kurzgesagt.
Svoronos Star Tug
Alexander A. Svoronos of Yale University proposed the 'Star Tug', a concept that combines aspects of the Shkadov thruster and Caplan engine to produce an even more powerful and efficient mechanism for controlling a star's movement. Essentially, it replaces the giant parabolic mirror of the Shkadov thruster with an engine powered by mass lifted from the star, similar to the Caplan engine. However, instead of pushing a star from behind with a beam of thrust, as the Caplan engine does, it pulls the star from the front via its gravitational link to it, same as the Shkadov thruster. As a result, it only needs to produce a single beam of thrust (toward but narrowly missing the star), whereas the Caplan engine must produce two beams of thrust (one to push the star from behind and negate the force of gravity between the engine and the star, and one to propel the system as a whole forward). The result is that the Svoronos Star Tug is a much more efficient engine capable of significantly higher accelerations and max velocities. The Svoronos Star Tug can, in principle (assuming perfect efficiency), accelerate the Sun to ~27% the speed of light (after burning enough of the Sun's mass to transition it to a brown dwarf).
See also
Dyson spheres
References
Stellar engine (article at the website of the Encyclopedia of Astrobiology, Astronomy and Spaceflight)
Solar Travel (Astronomy Today, Exploration Section)
Megastructures
Hypothetical technology
Interstellar travel
Hypothetical spacecraft
Engine | 0.770758 | 0.991082 | 0.763884 |
Thermogravimetric analysis | Thermogravimetric analysis or thermal gravimetric analysis (TGA) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena, such as phase transitions, absorption, adsorption and desorption; as well as chemical phenomena including chemisorption, thermal decomposition, and solid–gas reactions (e.g., oxidation or reduction).
Thermogravimetric analyzer
Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis while many additional measures may be derived from these three base measurements.
A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with a programmable control temperature. The temperature is generally increased at constant rate (or for some applications the temperature is controlled for a constant mass loss) to incur a thermal reaction. The thermal reaction may occur under a variety of atmospheres including: ambient air, vacuum, inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids or "self-generated atmosphere"; as well as a variety of pressures including: a high vacuum, high pressure, constant pressure, or a controlled pressure.
The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass on the y axis versus either temperature or time on the x-axis. This plot, which is often smoothed, is referred to as a TGA curve. The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis.
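Since the DTG curve is simply the first derivative of the mass signal with respect to temperature, it can be computed numerically from recorded mass-versus-temperature data. The sketch below uses a synthetic, illustrative mass-loss step rather than real instrument output.

```python
# Computing a DTG curve (dm/dT) from TGA data with a numerical derivative.
# The mass-loss step below is synthetic, purely for illustration.
import numpy as np

temperature = np.linspace(25, 800, 500)  # deg C
mass_percent = 100 - 40 / (1 + np.exp(-(temperature - 350) / 20))  # one smooth mass-loss step

dtg = np.gradient(mass_percent, temperature)  # d(mass %)/dT, in % per deg C

peak_index = np.argmin(dtg)  # most negative slope = fastest mass loss
print(f"maximum mass-loss rate at ~{temperature[peak_index]:.0f} deg C, "
      f"{dtg[peak_index]:.3f} % per deg C")
```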
A TGA can be used for materials characterization through analysis of characteristic decomposition patterns. It is an especially useful technique for the study of polymeric materials, including thermoplastics, thermosets, elastomers, composites, plastic films, fibers, coatings, paints, and fuels.
Types of TGA
There are three types of thermogravimetry:
Isothermal or static thermogravimetry: In this technique, the sample weight is recorded as a function of time at a constant temperature.
Quasistatic thermogravimetry: In this technique, the sample temperature is raised in sequential steps separated by isothermal intervals, during which the sample mass reaches stability before the start of the next temperature ramp.
Dynamic thermogravimetry: In this technique, the sample is heated in an environment whose temperature is changed in a linear manner.
Applications
Thermal stability
TGA can be used to evaluate the thermal stability of a material. In a desired temperature range, if a species is thermally stable, there will be no observed mass change. Negligible mass loss corresponds to little or no slope in the TGA trace. TGA also gives the upper use temperature of a material. Beyond this temperature the material will begin to degrade.
TGA is used in the analysis of polymers. Polymers usually melt before they decompose, thus TGA is mainly used to investigate the thermal stability of polymers. Most polymers melt or degrade before 200 °C. However, there is a class of thermally stable polymers that are able to withstand temperatures of at least 300 °C in air and 500 °C in inert gases without structural changes or strength loss, which can be analyzed by TGA.
Oxidation and combustion
The simplest materials characterization is the residue remaining after a reaction. For example, a combustion reaction could be tested by loading a sample into a thermogravimetric analyzer at normal conditions. The thermogravimetric analyzer would ignite combustion in the sample by heating it beyond its ignition temperature. The resultant TGA curve plotted with the y-axis as a percentage of initial mass would show the residue at the final point of the curve.
Oxidative mass losses are the most common observable losses in TGA.
Studying the resistance to oxidation in copper alloys is very important. For example, NASA (National Aeronautics and Space Administration) is conducting research on advanced copper alloys for their possible use in combustion engines. However, oxidative degradation can occur in these alloys as copper oxides form in atmospheres that are rich in oxygen. Resistance to oxidation is significant because NASA wants to be able to reuse shuttle materials. TGA can be used to study the static oxidation of materials such as these for practical use.
Combustion during TG analysis is identifiable by distinct traces made in the TGA thermograms produced. One interesting example occurs with samples of as-produced unpurified carbon nanotubes that have a large amount of metal catalyst present. Due to combustion, a TGA trace can deviate from the normal form of a well-behaved function. This phenomenon arises from a rapid temperature change. When the weight and temperature are plotted versus time, a dramatic slope change in the first derivative plot is concurrent with the mass loss of the sample and the sudden increase in temperature seen by the thermocouple. The mass loss could result from particles of smoke released by burning caused by inconsistencies in the material itself, in addition to the oxidation of carbon, which is why the weight loss appears poorly controlled.
Different weight losses on the same sample at different points can also be used as a diagnosis of the sample's anisotropy. For instance, sampling the top side and the bottom side of a sample with dispersed particles inside can be useful to detect sedimentation, as thermograms will not overlap but will show a gap between them if the particle distribution is different from side to side.
Thermogravimetric kinetics
Thermogravimetric kinetics may be explored for insight into the reaction mechanisms of thermal (catalytic or non-catalytic) decomposition involved in the pyrolysis and combustion processes of different materials.
Activation energies of the decomposition process can be calculated using the Kissinger method.
Though a constant heating rate is more common, a constant mass loss rate can illuminate specific reaction kinetics. For example, the kinetic parameters of the carbonization of polyvinyl butyral were found using a constant mass loss rate of 0.2 wt %/min.
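For the Kissinger method mentioned above, the activation energy follows from a linear fit of ln(β/Tp^2) against 1/Tp over runs at several heating rates β, the slope being −Ea/R. A minimal sketch, using invented peak temperatures purely for illustration:

```python
# Kissinger-method sketch: activation energy from DTG peak temperatures
# measured at several heating rates. The data points are illustrative only.
import numpy as np

R = 8.314                                            # gas constant, J/(mol K)
heating_rates = np.array([5.0, 10.0, 20.0, 40.0])    # K/min (assumed)
peak_temps = np.array([610.0, 623.0, 637.0, 652.0])  # K, DTG peak temperatures (assumed)

y = np.log(heating_rates / peak_temps**2)  # ln(beta / Tp^2)
x = 1.0 / peak_temps                       # 1 / Tp

slope, intercept = np.polyfit(x, y, 1)     # linear regression
activation_energy = -slope * R             # Ea = -slope * R, in J/mol

print(f"apparent activation energy ~ {activation_energy / 1000:.0f} kJ/mol")
```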
Operation in combination with other instruments
Thermogravimetric analysis is often combined with other processes or used in conjunction with other analytical methods.
For example, the TGA instrument continuously weighs a sample as it is heated to temperatures of up to 2000 °C for coupling with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry gas analysis. As the temperature increases, various components of the sample are decomposed and the weight percentage of each resulting mass change can be measured.
References
Thermodynamics
Materials science
Analytical chemistry | 0.768626 | 0.99382 | 0.763876 |
Augmented Dickey–Fuller test | In statistics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis depends on which version of the test is used, but is usually stationarity or trend-stationarity. It is an augmented version of the Dickey–Fuller test for a larger and more complicated set of time series models.
The augmented Dickey–Fuller (ADF) statistic, used in the test, is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence.
Testing procedure
The procedure for the ADF test is the same as for the Dickey–Fuller test but it is applied to the model

Δy_t = α + βt + γy_{t−1} + δ_1 Δy_{t−1} + ⋯ + δ_{p−1} Δy_{t−p+1} + ε_t

where α is a constant, β the coefficient on a time trend and p the lag order of the autoregressive process. Imposing the constraints α = 0 and β = 0 corresponds to modelling a random walk and using the constraint β = 0 corresponds to modelling a random walk with a drift. Consequently, there are three main versions of the test, analogous to those of the Dickey–Fuller test. (See that article for a discussion on dealing with uncertainty about including the intercept and deterministic time trend terms in the test equation.)
By including lags of the order p, the ADF formulation allows for higher-order autoregressive processes. This means that the lag length p must be determined in order to use the test. One approach to doing this is to test down from high orders and examine the t-values on coefficients. An alternative approach is to examine information criteria such as the Akaike information criterion, Bayesian information criterion or the Hannan–Quinn information criterion.
The unit root test is then carried out under the null hypothesis γ = 0 against the alternative hypothesis of γ < 0. Once a value for the test statistic

DF_τ = γ̂ / SE(γ̂)

is computed, it can be compared to the relevant critical value for the Dickey–Fuller test. As this test is asymmetric, we are only concerned with negative values of our test statistic DF_τ. If the calculated test statistic is less (more negative) than the critical value, then the null hypothesis of γ = 0 is rejected and no unit root is present.
Intuition
The intuition behind the test is that if the series is characterised by a unit root process, then the lagged level of the series will provide no relevant information in predicting the change in y_t besides the one obtained in the lagged changes. In this case γ = 0 and the null hypothesis is not rejected. In contrast, when the process has no unit root, it is stationary and hence exhibits reversion to the mean - so the lagged level will provide relevant information in predicting the change of the series and the null hypothesis of a unit root will be rejected.
Examples
A model that includes a constant and a time trend is estimated using a sample of 50 observations and yields the test statistic of −4.57. This is more negative than the tabulated critical value of −3.50, so at the 95% level the null hypothesis of a unit root will be rejected.
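In practice the test is usually run with a library rather than against tabulated critical values by hand. A minimal sketch using the statsmodels adfuller function listed in the software section below, applied to a synthetic random walk, with regression="ct" to include both a constant and a time trend as in the example above:

```python
# Augmented Dickey-Fuller test on a synthetic random walk using statsmodels.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=200))  # has a unit root by construction

# regression='ct' includes a constant and a linear time trend;
# autolag='AIC' picks the lag order p by the Akaike information criterion.
stat, pvalue, usedlag, nobs, critical_values, icbest = adfuller(
    random_walk, regression="ct", autolag="AIC")

print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}")
print("5% critical value:", critical_values["5%"])
# For a random walk the statistic is usually above (less negative than) the
# critical value, so the unit-root null hypothesis is not rejected.
```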
Alternatives
There are alternative unit root tests such as the Phillips–Perron test (PP) or the ADF-GLS test procedure (ERS) developed by Elliott, Rothenberg and Stock (1996).
Software implementations
R:
package forecast function ndiffs handles multiple popular unit root tests
package tseries function adf.test
package fUnitRoots function adfTest
package urca
Gretl
Matlab
the Econometrics Toolbox function adfTest
the Spatial Econometrics toolbox (free)
SAS PROC ARIMA
Stata command dfuller
EViews the Unit Root Test
Python
package statsmodels function adfuller
package ARCH
Java project SuanShu package com.numericalmethod.suanshu.stats.test.timeseries.adf class AugmentedDickeyFuller
Julia package HypothesisTests function ADFTest
See also
Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test
References
Further reading
Time series statistical tests | 0.768062 | 0.994535 | 0.763865 |
Selected area diffraction | Selected area (electron) diffraction (abbreviated as SAD or SAED) is a crystallographic experimental technique typically performed using a transmission electron microscope (TEM). It is a specific case of electron diffraction used primarily in material science and solid state physics as one of the most common experimental techniques. Especially with appropriate analytical software, SAD patterns (SADP) can be used to determine crystal orientation, measure lattice constants or examine its defects.
Principle
In a transmission electron microscope, a thin crystalline sample is illuminated by a parallel beam of electrons accelerated to energies of hundreds of kiloelectronvolts. At these energies the sample is transparent to the electrons if it is thinned enough (typically to less than 100 nm). Due to wave–particle duality, the high-energy electrons behave as matter waves with a wavelength of a few thousandths of a nanometer. The relativistic wavelength is given by

λ = h / √(2 m_0 e U (1 + e U / (2 m_0 c^2)))

where h is the Planck constant, m_0 is the electron rest mass, e is the elementary charge, c is the speed of light and U is the electric potential accelerating the electrons (also called the acceleration voltage). For instance, an acceleration voltage of 200 kV results in a wavelength of 2.508 pm.
Since the spacing between atoms in crystals is about a hundred times larger, the electrons are diffracted by the crystal lattice, which acts as a diffraction grating. Due to the diffraction, some of the electrons are scattered at particular angles (diffracted beams), while others pass through the sample without changing their direction (transmitted beams). In order to determine the diffraction angles, the electron beam normally incident on the atomic lattice can be seen as a planar wave, which is re-transmitted by each atom as a spherical wave. Due to constructive interference, the spherical waves form a number of diffracted beams at angles θ_n given, approximately, by the Bragg condition

d sin θ_n = nλ

where the integer n is the order of diffraction and d is the distance between atoms (if only one row of atoms is assumed) or the distance between atomic planes parallel to the beam (in a real 3D atomic structure). For finite samples this equation is only approximately correct.
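A minimal numerical sketch of the two relations above: the relativistic electron wavelength at a given acceleration voltage and the corresponding first-order diffraction angle, taking an interplanar spacing of 0.2 nm as an assumed illustrative value.

```python
# Relativistic electron wavelength and first-order Bragg angle (illustrative).
import math

H = 6.62607e-34   # Planck constant, J s
M0 = 9.10938e-31  # electron rest mass, kg
E = 1.60218e-19   # elementary charge, C
C = 2.99792e8     # speed of light, m/s

def electron_wavelength(voltage):
    """Relativistic de Broglie wavelength for an acceleration voltage in volts."""
    return H / math.sqrt(2 * M0 * E * voltage * (1 + E * voltage / (2 * M0 * C**2)))

wavelength = electron_wavelength(200e3)        # 200 kV -> ~2.508 pm
d_spacing = 0.2e-9                             # assumed interplanar distance, 0.2 nm
theta = math.asin(1 * wavelength / d_spacing)  # first-order Bragg condition, d sin(theta) = n*lambda

print(f"wavelength: {wavelength * 1e12:.3f} pm")
print(f"first-order diffraction angle: {math.degrees(theta):.2f} degrees")
```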
After being deflected by the microscope's magnetic lens, each set of initially parallel beams intersects in the back focal plane, forming the diffraction pattern. The transmitted beams intersect on the optical axis. The diffracted beams intersect at a certain distance from the optical axis (corresponding to the interplanar distance of the planes diffracting the beams) and at a certain azimuth (corresponding to the orientation of the planes diffracting the beams). Together they form the pattern of bright spots typical of SAD.
SAD is called "selected" because it allows the user to select the sample area from which the diffraction pattern will be acquired. For this purpose, there is a selected area aperture located below the sample holder. It is a metallic sheet with several differently sized holes which can be inserted into the beam. The user can select the aperture of appropriate size and position it so that it only allows to pass the portion of beam corresponding to the selected area. Therefore, the resulting diffraction pattern will only reflect the area selected by the aperture. This allows to study small objects such as crystallites in polycrystalline material with a broad parallel beam.
The character of the resulting diffraction image depends on whether the beam is diffracted by one single crystal or by a number of differently oriented crystallites, for instance in a polycrystalline material. The single-crystalline diffractogram depicts a regular pattern of bright spots. This pattern can be seen as a two-dimensional projection of the reciprocal crystal lattice. If there are more contributing crystallites, the diffraction image becomes a superposition of the individual crystals' diffraction patterns. Ultimately, this superposition contains diffraction spots of all possible crystallographic plane systems in all possible orientations. For two reasons, these conditions result in a diffractogram of concentric rings:
There are discrete spacings between various parallel crystallographic planes and therefore the beams satisfying the diffraction condition can only form diffraction spots in discrete distances from the transmitted beam.
There are all possible orientations of crystallographic planes and therefore the diffraction spots are formed around the transmitted beam in the whole 360-degree azimuthal range.
Interpretation and analysis
SAD analysis is widely used in materials research for its relative simplicity and high information value. Once the sample is prepared and examined in a modern transmission electron microscope, the device allows routine diffraction acquisition in a matter of seconds. If the images are interpreted correctly, they can be used to identify crystal structures, determine their orientations, measure crystal characteristics, and examine crystal defects or material textures. The course of the analysis depends on whether the diffractogram depicts a ring or a spot diffraction pattern and on the quantity to be determined.
Software tools based on computer vision algorithms simplify quantitative analysis.
Spot diffraction pattern
If the SAD pattern is taken from one or a few single crystals, the diffractogram depicts a regular pattern of bright spots. Since the diffraction pattern can be seen as a two-dimensional projection of the reciprocal crystal lattice, the pattern can be used to measure lattice constants, specifically the distances and angles between crystallographic planes. The lattice parameters are typically distinctive for various materials and their phases, which makes it possible to identify the examined material or at least to differentiate between possible candidates.
Even though SAD-based analyses were long not considered quantitative, computer tools have brought accuracy and repeatability, allowing accurate measurements of interplanar distances or angles to be performed routinely on appropriately calibrated microscopes. Tools such as CrysTBox are capable of automated analysis achieving sub-pixel precision.
If the sample is tilted relative to the electron beam, the diffraction conditions are satisfied for a different set of crystallographic planes, yielding a different constellation of diffraction spots. This makes it possible to determine the crystal orientation, which can be used, for instance, to set up the orientation needed for a particular experiment, or to determine the misorientation between adjacent grains or crystal twins. Since different sample orientations provide different projections of the reciprocal lattice, they provide an opportunity to reconstruct the three-dimensional information lost in individual projections. A series of diffractograms varying in tilt can be acquired and processed with diffraction tomography analysis in order to reconstruct an unknown crystal structure.
SAD can also be used to analyze crystal defects such as stacking faults.
Ring diffraction pattern
If the illuminated area selected by the aperture covers many differently oriented crystallites, their diffraction patterns superimpose, forming an image of concentric rings. The ring diffractogram is typical for polycrystalline samples, powders or nanoparticles. The diameter of each ring corresponds to an interplanar distance of a plane system present in the sample. Instead of information about individual grains or the sample orientation, this diffractogram provides more statistical information, for instance about overall crystallinity or texture. Textured materials are characterized by a non-uniform intensity distribution along the ring circumference despite crystallinity sufficient for generating smooth rings. Ring diffractograms can also be used to discriminate between nanocrystalline and amorphous phases.
Not all the features depicted in the diffraction image are necessarily wanted. The transmitted beam is often too strong and needs to be shadowed with a beam-stopper in order to protect the camera. The beam-stopper typically shadows part of the useful information as well. Towards the ring centre, the background intensity also gradually increases, lowering the contrast of the diffraction rings. Modern analytical software can minimize such unwanted image features and, together with other functionality, improve the readability of the image and help with its interpretation.
Relation to other techniques
An SADP is acquired under parallel electron illumination. In the case of a convergent beam, convergent beam electron diffraction (CBED) is obtained. The beam used in SAD is broad, illuminating a wide sample area. In order to analyze only a specific sample area, the selected area aperture in the image plane is used. This is in contrast with nanodiffraction, where the site-selectivity is achieved using a beam condensed to a narrow probe. SAD is important in direct imaging, for instance when orienting the sample for high resolution microscopy or setting up dark-field imaging conditions.
High-resolution electron microscope images can be transformed into an artificial diffraction pattern using the Fourier transform. They can then be processed in the same way as real diffractograms, making it possible to determine crystal orientation and to measure interplanar angles and distances, even with picometric precision.
SAD is similar to X-ray diffraction, but unique in that areas as small as several hundred nanometres in size can be examined, whereas X-ray diffraction typically samples areas much larger.
See also
Diffraction
Electron diffraction
Transmission electron microscope
Electron crystallography
CrysTBox
X-ray (Powder) diffraction
Convergent beam electron diffraction
References
Materials science
Laboratory techniques in condensed matter physics
Diffraction
Crystallography
Electron microscopy | 0.778692 | 0.980957 | 0.763863 |
Yarkovsky effect | The Yarkovsky effect is a force acting on a rotating body in space caused by the anisotropic emission of thermal photons, which carry momentum. It is usually considered in relation to meteoroids or small asteroids (about 10 cm to 10 km in diameter), as its influence is most significant for these bodies.
History of discovery
The effect was discovered by the Polish-Russian civil engineer Ivan Osipovich Yarkovsky (1844–1902), who worked in Russia on scientific problems in his spare time. Writing in a pamphlet around the year 1900, Yarkovsky noted that the daily heating of a rotating object in space would cause it to experience a force that, while tiny, could lead to large long-term effects in the orbits of small bodies, especially meteoroids and small asteroids. Yarkovsky's insight would have been forgotten had it not been for the Estonian astronomer Ernst J. Öpik (1893–1985), who read Yarkovsky's pamphlet sometime around 1909. Decades later, Öpik, recalling the pamphlet from memory, discussed the possible importance of the Yarkovsky effect on movement of meteoroids about the Solar System.
Mechanism
The Yarkovsky effect is a consequence of the fact that change in the temperature of an object warmed by radiation (and therefore the intensity of thermal radiation from the object) lags behind changes in the incoming radiation. That is, the surface of the object takes time to become warm when first illuminated, and takes time to cool down when illumination stops. In general there are two components to the effect:
Diurnal effect: On a rotating body illuminated by the Sun (e.g. an asteroid or the Earth), the surface is warmed by solar radiation during the day, and cools at night. The thermal properties of the surface cause a lag between the absorption of radiation from the Sun and the emission of radiation as heat, so the warmest point on a rotating body occurs around the "2 PM" site on the surface, or slightly after noon. This results in a difference between the directions of absorption and re-emission of radiation, which yields a net force along the direction of motion of the orbit. If the object is a prograde rotator, the force is in the direction of motion of the orbit, and causes the semi-major axis of the orbit to increase steadily; the object spirals away from the Sun. A retrograde rotator spirals inward. The diurnal effect is the dominant component for bodies with diameter greater than about 100 m.
Seasonal effect: This is easiest to understand for the idealised case of a non-rotating body orbiting the Sun, for which each "year" consists of exactly one "day". As it travels around its orbit, the "dusk" hemisphere which has been heated over a long preceding time period is invariably in the direction of orbital motion. The excess of thermal radiation in this direction causes a braking force that always causes spiraling inward toward the Sun. In practice, for rotating bodies, this seasonal effect increases along with the axial tilt. It dominates only if the diurnal effect is small enough. This may occur because of very rapid rotation (no time to cool off on the night side, hence an almost uniform longitudinal temperature distribution), small size (the whole body is heated throughout) or an axial tilt close to 90°. The seasonal effect is more important for smaller asteroid fragments (from a few metres up to about 100 m), provided their surfaces are not covered by an insulating regolith layer and they do not have exceedingly slow rotations. Additionally, on very long timescales over which the spin axis of the body may be repeatedly changed by collisions (and hence also the direction of the diurnal effect changes), the seasonal effect will also tend to dominate.
In general, the effect is size-dependent, and will affect the semi-major axis of smaller asteroids, while leaving large asteroids practically unaffected. For kilometre-sized asteroids, the Yarkovsky effect is minuscule over short periods: the force on asteroid 6489 Golevka has been estimated at 0.25 newtons, for a net acceleration of 10^−12 m/s^2. But it is steady; over millions of years an asteroid's orbit can be perturbed enough to transport it from the asteroid belt to the inner Solar System.
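To put the quoted force in perspective, the sketch below converts it into an acceleration and an accumulated velocity change. The asteroid mass used is an assumed estimate of roughly 2 × 10^11 kg for 6489 Golevka, not a figure stated in the text.

```python
# Rough scale of the Yarkovsky effect on 6489 Golevka (mass is an assumed estimate).
FORCE = 0.25   # estimated Yarkovsky force, N (from the text)
MASS = 2.1e11  # assumed asteroid mass, kg (not from the text)
YEAR = 3.156e7 # seconds per year

acceleration = FORCE / MASS          # ~1e-12 m/s^2, as quoted above
delta_v = acceleration * 1e6 * YEAR  # velocity change over 1 million years,
                                     # treating the along-track acceleration as constant

print(f"acceleration: {acceleration:.1e} m/s^2")
print(f"velocity change over 1 Myr: {delta_v:.1f} m/s")
```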
The mechanism is more complicated for bodies in strongly eccentric orbits.
Measurement
The effect was first measured in 1991–2003 on the asteroid 6489 Golevka. The asteroid drifted 15 km from its predicted position over twelve years (the orbit was established with great precision by a series of radar observations in 1991, 1995 and 1999 from the Arecibo radio telescope).
Without direct measurement, it is very hard to predict the exact result of the Yarkovsky effect on a given asteroid's orbit. This is because the magnitude of the effect depends on many variables that are hard to determine from the limited observational information that is available. These include the exact shape of the asteroid, its orientation, and its albedo. Calculations are further complicated by the effects of shadowing and thermal "reillumination", whether caused by local craters or a possible overall concave shape. The Yarkovsky effect also competes with radiation pressure, whose net effect may cause similar small long-term forces for bodies with albedo variations or non-spherical shapes.
As an example, even for the simple case of the pure seasonal Yarkovsky effect on a spherical body in a circular orbit with 90° obliquity, semi-major axis changes could differ by as much as a factor of two between the case of a uniform albedo and the case of a strong north–south albedo asymmetry. Depending on the object's orbit and spin axis, the Yarkovsky change of the semi-major axis may be reversed simply by changing from a spherical to a non-spherical shape.
Despite these difficulties, utilizing the Yarkovsky effect is one scenario under investigation to alter the course of potentially Earth-impacting near-Earth asteroids. Possible asteroid deflection strategies include "painting" the surface of the asteroid or focusing solar radiation onto the asteroid to alter the intensity of the Yarkovsky effect and so alter the orbit of the asteroid away from a collision with Earth. The OSIRIS-REx mission, launched in September 2016, studied the Yarkovsky effect on asteroid Bennu.
In 2020, astronomers confirmed Yarkovsky acceleration of the asteroid 99942 Apophis. The findings are relevant to asteroid impact avoidance as 99942 Apophis was thought to have a very small chance of Earth impact in 2068, and the Yarkovsky effect was a significant source of prediction uncertainty.
In 2021, a multidisciplinary professional-amateur collaboration combined Gaia satellite and ground-based radar measurements with amateur stellar occultation observations to further refine 99942 Apophis's orbit and measure the Yarkovsky acceleration with high precision, to within 0.5%. With these, astronomers were able to eliminate the possibility of a collision with the Earth for at least the next 100 years.
See also
Asteroid
Poynting–Robertson effect
Radiation pressure
YORP effect
References
External links
Asteroid Nudged by Sunlight: Most Precise Measurement of Yarkovsky Effect – (ScienceDaily 2012-05-24)
Asteroids
Concepts in astrophysics
Orbital perturbations
Radiation effects
Rotation | 0.775522 | 0.984941 | 0.763843 |
Archimedes' principle | Archimedes' principle (also spelled Archimedes's principle) states that the upward buoyant force that is exerted on a body immersed in a fluid, whether fully or partially, is equal to the weight of the fluid that the body displaces. Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse.
Explanation
In On Floating Bodies, Archimedes suggested that (c. 246 BC):
Archimedes' principle allows the buoyancy of any floating object partially or fully immersed in a fluid to be calculated. The downward force on the object is simply its weight. The upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the magnitudes of the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant—that is, it remains in place without either rising or sinking. In simple words, Archimedes' principle states that, when a body is partially or completely immersed in a fluid, it experiences an apparent loss in weight that is equal to the weight of the fluid displaced by the immersed part of the body(s).
Formula
Consider a cuboid immersed in a fluid, its top and bottom faces orthogonal to the direction of gravity (assumed constant across the cube's stretch). The fluid will exert a normal force on each face, but only the normal forces on top and bottom will contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height (difference in depth of submersion). Multiplying the pressure difference by the area of a face gives a net force on the cuboid — the buoyancy — equaling in size the weight of the fluid displaced by the cuboid. By summing up sufficiently many arbitrarily small cuboids this reasoning may be extended to irregular shapes, and so, whatever the shape of the submerged body, the buoyant force is equal to the weight of the displaced fluid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). The weight of the object in the fluid is reduced, because of the force acting on it, which is called upthrust. In simple terms, the principle states that the buoyant force (F_b) on an object is equal to the weight of the fluid displaced by the object, or the density (ρ) of the fluid multiplied by the submerged volume (V) times the gravity (g).
We can express this relation in the equation:

F_b = ρ V g

where F_b denotes the buoyant force applied onto the submerged object, ρ denotes the density of the fluid, V represents the volume of the displaced fluid and g is the acceleration due to gravity.
Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into the water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea-floor. It is generally easier to lift an object through the water than it is to pull it out of the water.
For a fully submerged object, Archimedes' principle can be reformulated as follows:

apparent immersed weight = weight of object − weight of displaced fluid

then inserted into the quotient of weights, which has been expanded by the mutual volume

ρ_object / ρ_fluid = weight / weight of displaced fluid

yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volume:

ρ_object / ρ_fluid = weight / (weight − apparent immersed weight)

(This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
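A small numerical sketch of this hydrostatic-weighing relation, reusing the rock figures from the example above (weight 10 newtons, apparent immersed weight 7 newtons):

```python
# Relative density from a hydrostatic weighing, using the rock figures above.
def relative_density(weight, apparent_immersed_weight):
    """Density of the object relative to the fluid, from two weighings."""
    return weight / (weight - apparent_immersed_weight)

weight_in_vacuum = 10.0  # N, from the rock example above
apparent_in_water = 7.0  # N, weight reading with the rock fully submerged

print(f"density relative to water: {relative_density(weight_in_vacuum, apparent_in_water):.2f}")
# ~3.33: the rock is about 3.3 times denser than water.
```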
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the opposite direction to the car's acceleration. However, due to buoyancy, the balloon is pushed "out of the way" by the air and will drift in the same direction as the car's acceleration.
When an object is immersed in a liquid, the liquid exerts an upward force, which is known as the buoyant force, that is proportional to the weight of the displaced liquid. The sum force acting on the object, then, is equal to the difference between the weight of the object ('down' force) and the weight of displaced liquid ('up' force). Equilibrium, or neutral buoyancy, is achieved when these two weights (and thus forces) are equal.
Forces and equilibrium
The equation to calculate the pressure inside a fluid in equilibrium is:

f + div σ = 0

where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:

σ_ij = −p δ_ij

Here δ_ij is the Kronecker delta. Using this the above equation becomes:

f = ∇p

Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function:

f = −∇Φ

Then:

∇(p + Φ) = 0, so p + Φ = constant

Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρ_f g z where g is the gravitational acceleration, ρ_f is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is

p = ρ_f g z
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:

B = ∮ σ · dA

The surface integral can be transformed into a volume integral with the help of the Gauss theorem:

B = ∫ div σ dV = −∫ f dV = −ρ_f g V

where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid doesn't exert force on the part of the body which is outside of it.
The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is of magnitude:

B = ρ_f V_disp g

where ρ_f is the density of the fluid, V_disp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to

B = ρ_f V g

The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes principle is applicable, and is thus the sum of the buoyancy force and the object's weight:

F_net = 0 = m g − ρ_f V_disp g
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore

m g = ρ_f V_disp g

and therefore

m = ρ_f V_disp
showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location.
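One consequence of m = ρ_f V_disp is that the submerged fraction of a floating body equals the ratio of its density to the fluid density, whatever the local value of g. The sketch below applies this to ice floating in seawater; the densities are typical textbook values, used here only as an illustration.

```python
# Fraction of a floating object that sits below the waterline: rho_object / rho_fluid.
RHO_ICE = 917.0        # kg/m^3, typical density of ice
RHO_SEAWATER = 1025.0  # kg/m^3, typical density of seawater

submerged_fraction = RHO_ICE / RHO_SEAWATER
print(f"about {submerged_fraction:.0%} of the ice is below the waterline")
# ~89%, regardless of the local value of g.
```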
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location. For this reason, a ship may display a Plimsoll line.)
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:

T = ρ_f V g − m g
When a sinking object settles on the solid floor, it experiences a normal force of:

N = m g − ρ_f V g
Another possible formula for calculating the buoyancy of an object is to find the apparent weight of that particular object in the air (calculated in newtons) and the apparent weight of that object in the water (in newtons). The force of buoyancy acting on the object is then found from this information using the formula:
Buoyancy force = weight of object in empty space − weight of object immersed in fluid
The final result is measured in newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
Simplified model
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
Refinements
Archimedes' principle does not consider the surface tension (capillarity) acting on the body. Moreover, Archimedes' principle has been found to break down in complex fluids.
There is an exception to Archimedes' principle known as the bottom (or side) case. This occurs when a side of the object is touching the bottom (or side) of the vessel it is submerged in, and no liquid seeps in along that side. In this case, the net force has been found to be different from Archimedes' principle, as, since no fluid seeps in on that side, the symmetry of pressure is broken.
Principle of flotation
Archimedes' principle shows the buoyant force and displacement of fluid. However, the concept of Archimedes' principle can be applied when considering why objects float. Proposition 5 of Archimedes' treatise On Floating Bodies states that
In other words, for an object floating on a liquid surface (like a boat) or floating submerged in a fluid (like a submarine in water or dirigible in air) the weight of the displaced liquid equals the weight of the object. Thus, only in the special case of floating does the buoyant force acting on an object equal the object's weight. Consider a 1-ton block of solid iron. As iron is nearly eight times as dense as water, it displaces only 1/8 ton of water when submerged, which is not enough to keep it afloat. Suppose the same iron block is reshaped into a bowl. It still weighs 1 ton, but when it is put in water, it displaces a greater volume of water than when it was a block. The deeper the iron bowl is immersed, the more water it displaces, and the greater the buoyant force acting on it. When the buoyant force equals 1 ton, it will sink no farther.
When any boat displaces a weight of water equal to its own weight, it floats. This is often called the "principle of flotation": A floating object displaces a weight of fluid equal to its own weight. Every ship, submarine, and dirigible must be designed to displace a weight of fluid at least equal to its own weight. A 10,000-ton ship's hull must be built wide enough, long enough and deep enough to displace 10,000 tons of water and still have some hull above the water to prevent it from sinking. It needs extra hull to fight waves that would otherwise fill it and, by increasing its mass, cause it to submerge. The same is true for vessels in air: a dirigible that weighs 100 tons needs to displace 100 tons of air. If it displaces more, it rises; if it displaces less, it falls. If the dirigible displaces exactly its weight, it hovers at a constant altitude.
While they are related to it, the principle of flotation and the concept that a submerged object displaces a volume of fluid equal to its own volume are not Archimedes' principle. Archimedes' principle, as stated above, equates the buoyant force to the weight of the fluid displaced.
One common point of confusion regarding Archimedes' principle is the meaning of displaced volume. Common demonstrations involve measuring the rise in water level when an object floats on the surface in order to calculate the displaced water. This measurement approach fails with a buoyant submerged object because the rise in the water level is directly related to the volume of the object and not the mass (except if the effective density of the object equals exactly the fluid density).
Eureka
Archimedes reportedly exclaimed "Eureka" after he realized how to detect whether a crown is made of impure gold. While he did not use Archimedes' principle in the widespread tale and used displaced water only for measuring the volume of the crown, there is an alternative approach using the principle: Balance the crown and pure gold on a scale in the air and then put the scale into water. According to Archimedes' principle, if the density of the crown differs from the density of pure gold, the scale will get out of balance under water.
References
External links
Fluid dynamics
Principle
Force
Buoyancy
Scientific laws | 0.764607 | 0.998987 | 0.763832 |
Tautomer | Tautomers are structural isomers (constitutional isomers) of chemical compounds that readily interconvert. The chemical reaction interconverting the two is called tautomerization. This conversion commonly results from the relocation of a hydrogen atom within the compound. The phenomenon of tautomerization is called tautomerism, also called desmotropism. Tautomerism is for example relevant to the behavior of amino acids and nucleic acids, two of the fundamental building blocks of life.
Care should be taken not to confuse tautomers with depictions of "contributing structures" in chemical resonance. Tautomers are distinct chemical species that can be distinguished by their differing atomic connectivities, molecular geometries, and physicochemical and spectroscopic properties, whereas resonance forms are merely alternative Lewis structure (valence bond theory) depictions of a single chemical species, whose true structure is a quantum superposition, essentially the "average" of the idealized, hypothetical geometries implied by these resonance forms.
Examples
Tautomerization is pervasive in organic chemistry. It is typically associated with polar molecules and ions containing functional groups that are at least weakly acidic. Most common tautomers exist in pairs, which means that the hydrogen is located at one of two positions, and even more specifically the most common form involves a hydrogen changing places with a double bond: H−X−Y=Z ⇌ X=Y−Z−H. Common tautomeric pairs include:
ketone – enol: H−O−C=C ⇌ O=C−C−H, see keto–enol tautomerism
enamine – imine: H−N−C=C ⇌ N=C−C−H
cyanamide – carbodiimide
guanidine – guanidine – guanidine: With a central carbon surrounded by three nitrogens, a guanidine group allows this transform in three possible orientations
amide – imidic acid: H−N−C=O ⇌ N=C−O−H (e.g., the latter is encountered during nitrile hydrolysis reactions)
lactam – lactim, a cyclic form of amide-imidic acid tautomerism in 2-pyridone and derived structures such as the nucleobases guanine, thymine, and cytosine
imine – imine, e.g., during pyridoxal phosphate catalyzed enzymatic reactions
nitro – aci-nitro (nitronic acid): H−C−N(=O)=O ⇌ C=N(=O)−O−H
nitroso – oxime: H−C−N=O ⇌ C=N−O−H
ketene – ynol, which involves a triple bond: H−C=C=O ⇌ C≡C−O−H
amino acid – ammonium carboxylate, which applies to the building blocks of the proteins. This shifts the proton more than two atoms away, producing a zwitterion rather than shifting a double bond: H2N−CHR−CO−O−H ⇌ H3N+−CHR−CO−O−
phosphite – phosphonate: between trivalent and pentavalent phosphorus.
Prototropy
Prototropy is the most common form of tautomerism and refers to the relocation of a hydrogen atom. Prototropic tautomerism may be considered a subset of acid-base behavior. Prototropic tautomers are sets of isomeric protonation states with the same empirical formula and total charge. Tautomerizations are catalyzed by:
bases, involving a series of steps: deprotonation, formation of a delocalized anion (e.g., an enolate), and protonation at a different position of the anion; and
acids, involving a series of steps: protonation, formation of a delocalized cation, and deprotonation at a different position adjacent to the cation).
Two specific further subcategories of tautomerizations:
Annular tautomerism is a type of prototropic tautomerism wherein a proton can occupy two or more positions of the heterocyclic systems found in many drugs, for example, 1H- and 3H-imidazole; 1H-, 2H- and 4H- 1,2,4-triazole; 1H- and 2H- isoindole.
Ring–chain tautomers occur when the movement of the proton is accompanied by a change from an open structure to a ring, such as the open chain and cyclic hemiacetal (typically pyranose or furanose forms) of many sugars. (See .) The tautomeric shift can be described as H−O ⋅ C=O ⇌ O−C−O−H, where the "⋅" indicates the initial absence of a bond.
Valence tautomerism
Valence tautomerism is a type of tautomerism in which single and/or double bonds are rapidly formed and ruptured, without migration of atoms or groups. It is distinct from prototropic tautomerism, and involves processes with rapid reorganisation of bonding electrons.
A pair of valence tautomers with formula C6H6O are benzene oxide and oxepin.
Other examples of this type of tautomerism can be found in bullvalene, and in open and closed forms of certain heterocycles, such as organic azides and tetrazoles, or mesoionic münchnone and acylamino ketene.
Valence tautomerism requires a change in molecular geometry and should not be confused with canonical resonance structures or mesomers.
Inorganic materials
In inorganic extended solids, valence tautomerism can manifest itself as a change of oxidation states and of their spatial distribution upon a change of macroscopic thermodynamic conditions. Such effects have been called charge ordering or valence mixing to describe the behavior in inorganic oxides.
Consequences for chemical databases
The existence of multiple possible tautomers for individual chemical substances can lead to confusion. For example, samples of 2-pyridone and 2-hydroxypyridine do not exist as separate isolatable materials: the two tautomeric forms are interconvertible and the proportion of each depends on factors such as temperature, solvent, and additional substituents attached to the main ring.
Historically, each form of the substance was entered into databases such as those maintained by the Chemical Abstracts Service and given separate CAS Registry Numbers. 2-Pyridone was assigned [142-08-5] and 2-hydroxypyridine [109-10-4]. The latter is now a "replaced" registry number so that look-up by either identifier reaches the same entry. The facility to automatically recognise such potential tautomerism and ensure that all tautomers are indexed together has been greatly facilitated by the creation of the International Chemical Identifier (InChI) and associated software. Thus the standard InChI for either tautomer is InChI=1S/C5H5NO/c7-5-3-1-2-4-6-5/h1-4H,(H,6,7).
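The identical standard InChI for the two tautomers can be checked with a cheminformatics toolkit. The sketch below uses RDKit, assuming it is installed with InChI support; the SMILES strings are standard representations of the two tautomers.

```python
# Checking that both tautomers of 2-pyridone map to the same standard InChI.
# Requires RDKit built with InChI support.
from rdkit import Chem

smiles = {
    "2-hydroxypyridine": "Oc1ccccn1",
    "2-pyridone": "O=C1C=CC=CN1",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{name}: {Chem.MolToInchi(mol)}")
# Both lines should print InChI=1S/C5H5NO/c7-5-3-1-2-4-6-5/h1-4H,(H,6,7)
```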
See also
Fluxional molecule
References
External links
Isomerism | 0.768818 | 0.993474 | 0.763801 |
Alternatives to general relativity | Alternatives to general relativity are physical theories that attempt to describe the phenomenon of gravitation in competition with Einstein's theory of general relativity. There have been many different attempts at constructing an ideal theory of gravity.
These attempts can be split into four broad categories based on their scope. In this article, straightforward alternatives to general relativity are discussed, which do not involve quantum mechanics or force unification. Other theories which do attempt to construct a theory using the principles of quantum mechanics are known as theories of quantized gravity. Thirdly, there are theories which attempt to explain gravity and other forces at the same time; these are known as classical unified field theories. Finally, the most ambitious theories attempt to both put gravity in quantum mechanical terms and unify forces; these are called theories of everything.
None of these alternatives to general relativity have gained wide acceptance. General relativity has withstood many tests, remaining consistent with all observations so far. In contrast, many of the early alternatives have been definitively disproven. However, some of the alternative theories of gravity are supported by a minority of physicists, and the topic remains the subject of intense study in theoretical physics.
Notation in this article
c is the speed of light, G is the gravitational constant. "Geometric variables" are not used.
Latin indices go from 1 to 3, Greek indices go from 0 to 3. The Einstein summation convention is used.
$\eta_{\mu\nu}$ is the Minkowski metric. $g_{\mu\nu}$ is a tensor, usually the metric tensor. These have signature (−,+,+,+).
Partial differentiation is written $\partial_\mu \phi$ or $\phi_{,\mu}$. Covariant differentiation is written $\nabla_\mu \phi$ or $\phi_{;\mu}$.
General relativity
For comparison with alternatives, the field equations of general relativity are

$$G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \,,$$

which can also be written as

$$R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}\, T_{\mu\nu} \,.$$
The Einstein–Hilbert action for general relativity is:

$$S = \frac{c^4}{16 \pi G} \int R \sqrt{-g} \, \mathrm{d}^4 x + S_m \,,$$

where $G$ is Newton's gravitational constant, $R$ is the Ricci curvature of space, and $S_m$ is the action due to mass.
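As a concrete, if minimal, check of these equations, the sketch below verifies symbolically that the Schwarzschild metric is Ricci-flat and therefore solves the vacuum field equations $R_{\mu\nu} = 0$. Geometric units $G = c = 1$ and the article's (−,+,+,+) signature are assumed; sympy is used only as an illustrative tool.

```python
import sympy as sp

# Schwarzschild metric in geometric units (G = c = 1), signature (-,+,+,+).
t, r, th, ph, M = sp.symbols("t r theta phi M", positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th) ** 2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                         - sp.diff(g[b, c], x[d])) for d in range(4)) / 2)
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c]) for a in range(4))
    expr += sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
                for a in range(4) for d in range(4))
    return sp.simplify(expr)

# Every component vanishes, so the Schwarzschild geometry is a vacuum solution.
print(all(ricci(b, c) == 0 for b in range(4) for c in range(4)))  # True
```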
General relativity is a tensor theory, the equations all contain tensors. Nordström's theories, on the other hand, are scalar theories because the gravitational field is a scalar. Other proposed alternatives include scalar–tensor theories that contain a scalar field in addition to the tensors of general relativity, and other variants containing vector fields as well have been developed recently.
Classification of theories
Theories of gravity can be classified, loosely, into several categories. Most of the theories described here have:
an 'action' (see the principle of least action, a variational principle based on the concept of action)
a Lagrangian density
a metric
If a theory has a Lagrangian density for gravity, say $L$, then the gravitational part of the action is the integral of that:

$$S_{\mathrm{grav}} = \int L \sqrt{-g} \, \mathrm{d}^4 x \,.$$
In this equation it is usual, though not essential, to have $g = -1$ at spatial infinity when using Cartesian coordinates. For example, the Einstein–Hilbert action uses

$$L = \frac{c^4}{16 \pi G}\, R \,,$$

where R is the scalar curvature, a measure of the curvature of space.
Almost every theory described in this article has an action. It is the most efficient known way to guarantee that the necessary conservation laws of energy, momentum and angular momentum are incorporated automatically; although it is easy to construct an action where those conservation laws are violated. Canonical methods provide another way to construct systems that have the required conservation laws, but this approach is more cumbersome to implement. The original 1983 version of MOND did not have an action.
A few theories have an action but not a Lagrangian density. A good example is Whitehead, the action there is termed non-local.
A theory of gravity is a "metric theory" if and only if it can be given a mathematical representation in which two conditions hold:
Condition 1: There exists a symmetric metric tensor $g_{\mu\nu}$ of signature (−, +, +, +), which governs proper-length and proper-time measurements in the usual manner of special and general relativity:

$$\mathrm{d}s^2 = g_{\mu\nu} \, \mathrm{d}x^\mu \, \mathrm{d}x^\nu \,,$$

where there is a summation over indices $\mu$ and $\nu$.
Condition 2: Stressed matter and fields being acted upon by gravity respond in accordance with the equation:

$$0 = \nabla_\nu T^{\mu\nu} = T^{\mu\nu}{}_{,\nu} + \Gamma^{\mu}_{\sigma\nu} T^{\sigma\nu} + \Gamma^{\nu}_{\sigma\nu} T^{\mu\sigma} \,,$$

where $T^{\mu\nu}$ is the stress–energy tensor for all matter and non-gravitational fields, $\nabla_\nu$ is the covariant derivative with respect to the metric, and $\Gamma^{\mu}_{\sigma\nu}$ is the Christoffel symbol. The stress–energy tensor should also satisfy an energy condition.
Metric theories include (from simplest to most complex):
Scalar field theories (includes conformally flat theories & Stratified theories with conformally flat space slices)
Bergman
Coleman
Einstein (1912)
Einstein–Fokker theory
Lee–Lightman–Ni
Littlewood
Ni
Nordström's theory of gravitation (first metric theory of gravity to be developed)
Page–Tupper
Papapetrou
Rosen (1971)
Whitrow–Morduch
Yilmaz theory of gravitation (attempted to eliminate event horizons from the theory.)
Quasilinear theories (includes Linear fixed gauge)
Bollini–Giambiagi–Tiomno
Deser–Laurent
Whitehead's theory of gravity (intended to use only retarded potentials)
Tensor theories
Einstein's general relativity
Fourth-order gravity (allows the Lagrangian to depend on second-order contractions of the Riemann curvature tensor)
f(R) gravity (allows the Lagrangian to depend on higher powers of the Ricci scalar)
Gauss–Bonnet gravity
Lovelock theory of gravity (allows the Lagrangian to depend on higher-order contractions of the Riemann curvature tensor)
Infinite derivative gravity
Scalar–tensor theories
Bekenstein
Bergmann–Wagoner
Brans–Dicke theory (the most well-known alternative to general relativity, intended to be better at applying Mach's principle)
Jordan
Nordtvedt
Thiry
Chameleon
Pressuron
Vector–tensor theories
Hellings–Nordtvedt
Will–Nordtvedt
Bimetric theories
Lightman–Lee
Rastall
Rosen (1975)
Other metric theories
(see section Modern theories below)
Non-metric theories include
Belinfante–Swihart
Einstein–Cartan theory (intended to handle spin-orbital angular momentum interchange)
Kustaanheimo (1967)
Teleparallelism
Gauge theory gravity
A word here about Mach's principle is appropriate because a few of these theories rely on Mach's principle (e.g. Whitehead), and many mention it in passing (e.g. Einstein–Grossmann, Brans–Dicke). Mach's principle can be thought of as a half-way house between Newton and Einstein. It goes this way:
Newton: Absolute space and time.
Mach: The reference frame comes from the distribution of matter in the universe.
Einstein: There is no reference frame.
Theories from 1917 to the 1980s
At the time it was published in the 17th century, Isaac Newton's theory of gravity was the most accurate theory of gravity. Since then, a number of alternatives were proposed. The theories which predate the formulation of general relativity in 1915 are discussed in history of gravitational theory.
This section includes alternatives to general relativity published after general relativity but before the observations of galaxy rotation that led to the hypothesis of "dark matter". Those considered here include (see Will Lang):
These theories are presented here without a cosmological constant or added scalar or vector potential unless specifically noted, for the simple reason that the need for one or both of these was not recognized before the supernova observations by the Supernova Cosmology Project and High-Z Supernova Search Team. How to add a cosmological constant or quintessence to a theory is discussed under Modern Theories (see also Einstein–Hilbert action).
Scalar field theories
The scalar field theories of Nordström have already been discussed. Those of Littlewood, Bergman, Yilmaz, Whitrow and Morduch and Page and Tupper follow the general formula given by Page and Tupper.
According to Page and Tupper, who discuss all these except Nordström, the general scalar field theory comes from the principle of least action:
where the scalar field is,
and may or may not depend on .
In Nordström,
In Littlewood and Bergmann,
In Whitrow and Morduch,
In Whitrow and Morduch,
In Page and Tupper,
Page and Tupper matches Yilmaz's theory to second order when .
The gravitational deflection of light has to be zero when c is constant. Given that variable c and zero deflection of light are both in conflict with experiment, the prospect for a successful scalar theory of gravity looks very unlikely. Further, if the parameters of a scalar theory are adjusted so that the deflection of light is correct then the gravitational redshift is likely to be wrong.
Ni summarized some theories and also created two more. In the first, a pre-existing special relativity space-time and universal time coordinate acts with matter and non-gravitational fields to generate a scalar field. This scalar field acts together with all the rest to generate the metric.
The action is:
Misner et al. gives this without the term. is the matter action.
is the universal time coordinate. This theory is self-consistent and complete. But the motion of the solar system through the universe leads to serious disagreement with experiment.
In the second theory of Ni there are two arbitrary functions and that are related to the metric by:
Ni quotes Rosen as having two scalar fields and that are related to the metric by:
In Papapetrou the gravitational part of the Lagrangian is:
In Papapetrou there is a second scalar field . The gravitational part of the Lagrangian is now:
Bimetric theories
Bimetric theories contain both the normal tensor metric and the Minkowski metric (or a metric of constant curvature), and may contain other scalar or vector fields.
Rosen (1975) bimetric theory
The action is:
Lightman–Lee developed a metric theory based on the non-metric theory of Belinfante and Swihart. The result is known as BSLL theory. Given a tensor field , , and two constants and the action is:
and the stress–energy tensor comes from:
In Rastall, the metric is an algebraic function of the Minkowski metric and a Vector field. The Action is:
where
and
(see Will for the field equation for and ).
Quasilinear theories
In Whitehead, the physical metric is constructed (by Synge) algebraically from the Minkowski metric and matter variables, so it doesn't even have a scalar field. The construction is:
where the superscript (−) indicates quantities evaluated along the past light cone of the field point and
Nevertheless, the metric construction (from a non-metric theory) using the "length contraction" ansatz is criticised.
Deser and Laurent and Bollini–Giambiagi–Tiomno are Linear Fixed Gauge theories. Taking an approach from quantum field theory, they combine a Minkowski spacetime with the gauge-invariant action of a spin-two tensor field (i.e. the graviton) to define
The action is:
The Bianchi identity associated with this partial gauge invariance is wrong. Linear Fixed Gauge theories seek to remedy this by breaking the gauge invariance of the gravitational action through the introduction of auxiliary gravitational fields that couple to .
A cosmological constant can be introduced into a quasilinear theory by the simple expedient of changing the Minkowski background to a de Sitter or anti-de Sitter spacetime, as suggested by G. Temple in 1923. Temple's suggestions on how to do this were criticized by C. B. Rayner in 1955.
Tensor theories
Einstein's general relativity is the simplest plausible theory of gravity that can be based on just one symmetric tensor field (the metric tensor). Others include: Starobinsky (R+R^2) gravity, Gauss–Bonnet gravity, f(R) gravity, and Lovelock theory of gravity.
Starobinsky
Starobinsky gravity, proposed by Alexei Starobinsky has the Lagrangian
and has been used to explain inflation, in the form of Starobinsky inflation. Here is a constant.
Gauss–Bonnet
Gauss–Bonnet gravity has the action
where the coefficients of the extra terms are chosen so that the action reduces to general relativity in 4 spacetime dimensions and the extra terms are only non-trivial when more dimensions are introduced.
Stelle's 4th derivative gravity
Stelle's 4th derivative gravity, which is a generalization of Gauss–Bonnet gravity, has the action
f(R)
f(R) gravity has the action
and is a family of theories, each defined by a different function of the Ricci scalar. Starobinsky gravity is actually an $f(R)$ theory.
Infinite derivative gravity
Infinite derivative gravity is a covariant theory of gravity, quadratic in curvature, torsion free and parity invariant,
and
in order to make sure that only massless spin-2 and spin-0 components propagate in the graviton propagator around the Minkowski background. The action becomes non-local beyond a characteristic scale and reduces to general relativity in the infrared, at energies below that non-local scale. In the ultraviolet regime, at distances and time scales below the non-local scale, the gravitational interaction weakens enough to resolve point-like singularities, which means that Schwarzschild's singularity can potentially be resolved in infinite derivative theories of gravity.
Lovelock
Lovelock gravity has the action
and can be thought of as a generalization of general relativity.
Scalar–tensor theories
These all contain at least one free parameter, as opposed to general relativity which has no free parameters.
Although not normally considered a Scalar–Tensor theory of gravity, the 5 by 5 metric of Kaluza–Klein reduces to a 4 by 4 metric and a single scalar. So if the 5th element is treated as a scalar gravitational field instead of an electromagnetic field then Kaluza–Klein can be considered the progenitor of Scalar–Tensor theories of gravity. This was recognized by Thiry.
Scalar–Tensor theories include Thiry, Jordan, Brans and Dicke, Bergman, Nordtveldt (1970), Wagoner, Bekenstein and Barker.
The action is based on the integral of the Lagrangian .
where is a different dimensionless function for each different scalar–tensor theory. The function plays the same role as the cosmological constant in general relativity. is a dimensionless normalization constant that fixes the present-day value of . An arbitrary potential can be added for the scalar.
The full version is retained in Bergman and Wagoner. Special cases are:
Nordtvedt,
Since was thought to be zero at the time anyway, this would not have been considered a significant difference. The role of the cosmological constant in more modern work is discussed under Cosmological constant.
Brans–Dicke, $\omega$ is constant
Bekenstein variable mass theory
Starting with parameters and , found from a cosmological solution,
determines function then
Barker constant G theory
Adjustment of $\omega(\phi)$ allows scalar–tensor theories to tend to general relativity in the limit of $\omega \to \infty$ in the current epoch. However, there could be significant differences from general relativity in the early universe.
So long as general relativity is confirmed by experiment, general scalar–tensor theories (including Brans–Dicke) can never be ruled out entirely, but as experiments continue to confirm general relativity more precisely, the parameters have to be fine-tuned ever more tightly so that the predictions match those of general relativity.
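A sketch of what this fine-tuning looks like in practice, using the standard Brans–Dicke result $\gamma = (1+\omega)/(2+\omega)$ and taking the often-quoted Cassini tracking bound $|\gamma - 1| \lesssim 2.3\times10^{-5}$ as an illustrative input (the bound is quoted from the literature, not from this article):

```python
# Brans-Dicke PPN parameter gamma = (1 + w)/(2 + w) tends to the
# general-relativity value 1 only as the coupling w -> infinity, so a tight
# experimental bound on |gamma - 1| forces w to be very large.
def gamma_brans_dicke(w):
    return (1.0 + w) / (2.0 + w)

for w in (1, 10, 100, 1000, 40000):
    print(f"w = {w:>6}: gamma = {gamma_brans_dicke(w):.6f}")

# |gamma - 1| = 1/(2 + w) <= bound  =>  w >= 1/bound - 2
bound = 2.3e-5           # illustrative solar-system bound on |gamma - 1|
print(f"w must exceed roughly {1.0 / bound - 2.0:.0f}")
```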
The above examples are particular cases of Horndeski's theory, the most general Lagrangian constructed out of the metric tensor and a scalar field leading to second order equations of motion in 4-dimensional space. Viable theories beyond Horndeski (with higher order equations of motion) have been shown to exist.
Vector–tensor theories
Before we start, Will (2001) has said: "Many alternative metric theories developed during the 1970s and 1980s could be viewed as "straw-man" theories, invented to prove that such theories exist or to illustrate particular properties. Few of these could be regarded as well-motivated theories from the point of view, say, of field theory or particle physics. Examples are the vector–tensor theories studied by Will, Nordtvedt and Hellings."
Hellings and Nordtvedt and Will and Nordtvedt are both vector–tensor theories. In addition to the metric tensor there is a timelike vector field The gravitational action is:
where are constants and
(See Will for the field equations for and )
Will and Nordtvedt is a special case where
Hellings and Nordtvedt is a special case where
These vector–tensor theories are semi-conservative, which means that they satisfy the laws of conservation of momentum and angular momentum but can have preferred frame effects. When their extra constants vanish they reduce to general relativity, so, as long as general relativity is confirmed by experiment, general vector–tensor theories can never be ruled out.
Other metric theories
Others metric theories have been proposed; that of Bekenstein is discussed under Modern Theories.
Non-metric theories
Cartan's theory is particularly interesting both because it is a non-metric theory and because it is so old. The status of Cartan's theory is uncertain. Will claims that all non-metric theories are eliminated by Einstein's Equivalence Principle. Will (2001) tempers that by explaining experimental criteria for testing non-metric theories against Einstein's Equivalence Principle. Misner et al. claims that Cartan's theory is the only non-metric theory to survive all experimental tests up to that date and Turyshev lists Cartan's theory among the few that have survived all experimental tests up to that date. The following is a quick sketch of Cartan's theory as restated by Trautman.
Cartan suggested a simple generalization of Einstein's theory of gravitation. He proposed a model of space time with a metric tensor and a linear "connection" compatible with the metric but not necessarily symmetric. The torsion tensor of the connection is related to the density of intrinsic angular momentum. Independently of Cartan, similar ideas were put forward by Sciama, by Kibble in the years 1958 to 1966, culminating in a 1976 review by Hehl et al.
The original description is in terms of differential forms, but for the present article that is replaced by the more familiar language of tensors (risking loss of accuracy). As in general relativity, the Lagrangian is made up of a massless and a mass part. The Lagrangian for the massless part is:
Here $\Gamma$ is the linear connection, $\epsilon$ is the completely antisymmetric pseudo-tensor (Levi-Civita symbol) with the conventional normalization, and $g_{\mu\nu}$ is the metric tensor as usual. By assuming that the linear connection is metric, it is possible to remove the unwanted freedom inherent in the non-metric theory. The stress–energy tensor is calculated from:
The space curvature is not Riemannian, but on a Riemannian space-time the Lagrangian would reduce to the Lagrangian of general relativity.
Some equations of the non-metric theory of Belinfante and Swihart have already been discussed in the section on bimetric theories.
A distinctively non-metric theory is given by gauge theory gravity, which replaces the metric in its field equations with a pair of gauge fields in flat spacetime. On the one hand, the theory is quite conservative because it is substantially equivalent to Einstein–Cartan theory (or general relativity in the limit of vanishing spin), differing mostly in the nature of its global solutions. On the other hand, it is radical because it replaces differential geometry with geometric algebra.
Modern theories 1980s to present
This section includes alternatives to general relativity published after the observations of galaxy rotation that led to the hypothesis of "dark matter". There is no known reliable list of comparison of these theories. Those considered here include: Bekenstein, Moffat, Moffat, Moffat. These theories are presented with a cosmological constant or added scalar or vector potential.
Motivations
Motivations for the more recent alternatives to general relativity are almost all cosmological, associated with or replacing such constructs as "inflation", "dark matter" and "dark energy". The basic idea is that gravity agrees with general relativity at the present epoch but may have been quite different in the early universe.
In the 1980s, there was a slowly dawning realisation in the physics world that there were several problems inherent in the then-current big-bang scenario, including the horizon problem and the observation that at early times when quarks were first forming there was not enough space in the universe to contain even one quark. Inflation theory was developed to overcome these difficulties. Another alternative was constructing an alternative to general relativity in which the speed of light was higher in the early universe. The discovery of unexpected rotation curves for galaxies took everyone by surprise. Could there be more mass in the universe than we are aware of, or is the theory of gravity itself wrong? The consensus now is that the missing mass is "cold dark matter", but that consensus was only reached after trying alternatives to general relativity, and some physicists still believe that alternative models of gravity may hold the answer.
In the 1990s, supernova surveys discovered the accelerated expansion of the universe, now usually attributed to dark energy. This led to the rapid reinstatement of Einstein's cosmological constant, and quintessence arrived as an alternative to the cosmological constant. At least one new alternative to general relativity attempted to explain the supernova surveys' results in a completely different way. The measurement of the speed of gravity with the gravitational wave event GW170817 ruled out many alternative theories of gravity as explanations for the accelerated expansion. Another observation that sparked recent interest in alternatives to General Relativity is the Pioneer anomaly. It was quickly discovered that alternatives to general relativity could explain this anomaly. This is now believed to be accounted for by non-uniform thermal radiation.
Cosmological constant and quintessence
The cosmological constant $\Lambda$ is a very old idea, going back to Einstein in 1917. The success of the Friedmann model of the universe, in which $\Lambda = 0$, led to the general acceptance that it is zero, but the use of a non-zero value came back when data from supernovae indicated that the expansion of the universe is accelerating.
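The effect of a non-zero $\Lambda$ on the expansion history can be illustrated numerically. The sketch below assumes a flat Friedmann model with round, illustrative density parameters $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$ (not values taken from this article) and locates the scale factor at which deceleration turns into acceleration.

```python
import numpy as np

# Flat Friedmann model with matter plus a cosmological constant.
# Deceleration parameter q(a) = (0.5*Om*a**-3 - OL) / (Om*a**-3 + OL);
# the expansion accelerates (q < 0) once Om*a**-3 drops below 2*OL.
Om, OL = 0.3, 0.7                      # illustrative round values

def q(a):
    m = Om * a**-3
    return (0.5 * m - OL) / (m + OL)

a = np.linspace(0.1, 1.0, 901)
a_transition = a[np.argmax(q(a) < 0)]  # first grid point with q < 0
print(f"numerical transition near a = {a_transition:.2f}")
print(f"analytic value (Om/(2*OL))**(1/3) = {(Om / (2 * OL)) ** (1 / 3):.2f}")
# With OL = 0 the deceleration parameter is 0.5 for all a: no acceleration.
```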
In Newtonian gravity, the addition of the cosmological constant changes the Newton–Poisson equation from

$$\nabla^2 \phi = 4 \pi G \rho$$

to

$$\nabla^2 \phi + \Lambda c^2 = 4 \pi G \rho \,.$$
In general relativity, it changes the Einstein–Hilbert action from

$$S = \frac{c^4}{16 \pi G} \int R \sqrt{-g} \, \mathrm{d}^4 x + S_m$$

to

$$S = \frac{c^4}{16 \pi G} \int (R - 2\Lambda) \sqrt{-g} \, \mathrm{d}^4 x + S_m \,,$$

which changes the field equation from

$$R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}\, T_{\mu\nu}$$

to

$$R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \,.$$
In alternative theories of gravity, a cosmological constant can be added to the action in the same way.
More generally, a scalar potential can be added to scalar–tensor theories. This can be done in every alternative to general relativity that contains a scalar field $\phi$ by adding a potential term $V(\phi)$ inside the Lagrangian for the gravitational part of the action.
Because $V(\phi)$ is an arbitrary function of the scalar field rather than a constant, it can be set to give an acceleration that is large in the early universe and small at the present epoch. This is known as quintessence.
A similar method can be used in alternatives to general relativity that use vector fields, including Rastall and vector–tensor theories. A term proportional to
is added to the Lagrangian for the gravitational part of the action.
Farnes' theories
In December 2018, the astrophysicist Jamie Farnes from the University of Oxford proposed a dark fluid theory, related to notions of gravitationally repulsive negative masses that were presented earlier by Albert Einstein. The theory may help to better understand the considerable amounts of unknown dark matter and dark energy in the universe.
The theory relies on the concept of negative mass and reintroduces Fred Hoyle's creation tensor in order to allow matter creation for only negative mass particles. In this way, the negative mass particles surround galaxies and apply a pressure onto them, thereby resembling dark matter. As these hypothesised particles mutually repel one another, they push apart the Universe, thereby resembling dark energy. The creation of matter allows the density of the exotic negative mass particles to remain constant as a function of time, and so appears like a cosmological constant. Einstein's field equations are modified to:
According to Occam's razor, Farnes' theory is a simpler alternative to the conventional LambdaCDM model, as both dark energy and dark matter (two hypotheses) are solved using a single negative mass fluid (one hypothesis). The theory will be directly testable using the world's largest radio telescope, the Square Kilometre Array which should come online in 2022.
Relativistic MOND
The original theory of MOND by Milgrom was developed in 1983 as an alternative to "dark matter". Departures from Newton's law of gravitation are governed by an acceleration scale, not a distance scale. MOND successfully explains the Tully–Fisher observation that the luminosity of a galaxy should scale as the fourth power of the rotation speed. It also explains why the rotation discrepancy in dwarf galaxies is particularly large.
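A rough numerical sketch of this behaviour for a point mass is given below. It uses the "simple" interpolating function $\mu(x) = x/(1+x)$, which is one common choice rather than a unique MOND prescription, together with Milgrom's acceleration scale $a_0 \approx 1.2\times10^{-10}\,\mathrm{m/s^2}$ and a round illustrative baryonic mass.

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
a0 = 1.2e-10       # Milgrom's acceleration scale, m/s^2
M = 1.0e41         # illustrative baryonic mass (~5e10 solar masses), kg
kpc = 3.086e19     # m

r = np.array([1, 3, 10, 30, 100]) * kpc
gN = G * M / r**2                          # Newtonian acceleration

# "Simple" interpolating function mu(x) = x/(1+x):
#   mu(a/a0) * a = gN   =>   a**2 - gN*a - gN*a0 = 0
a = 0.5 * (gN + np.sqrt(gN**2 + 4 * gN * a0))

v_newton = np.sqrt(gN * r) / 1e3           # km/s
v_mond = np.sqrt(a * r) / 1e3              # km/s
v_flat = (G * M * a0) ** 0.25 / 1e3        # deep-MOND asymptote: v**4 = G*M*a0

for ri, vn, vm in zip(r / kpc, v_newton, v_mond):
    print(f"r = {ri:5.0f} kpc   Newton {vn:6.1f} km/s   MOND {vm:6.1f} km/s")
print(f"asymptotic flat velocity (G*M*a0)**0.25 = {v_flat:.0f} km/s")
```

The flat asymptote $v^4 = G M a_0$ is the origin of the Tully–Fisher scaling mentioned above.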
There were several problems with MOND in the beginning.
It did not include relativistic effects
It violated the conservation of energy, momentum and angular momentum
It was inconsistent in that it gives different galactic orbits for gas and for stars
It did not state how to calculate gravitational lensing from galaxy clusters.
By 1984, problems 2 and 3 had been solved by introducing a Lagrangian (AQUAL). A relativistic version of this based on scalar–tensor theory was rejected because it allowed waves in the scalar field to propagate faster than light. The Lagrangian of the non-relativistic form is:
The relativistic version of this has:
with a nonstandard mass action. Here and are arbitrary functions selected to give Newtonian and MOND behaviour in the correct limits, and is the MOND length scale. By 1988, a second scalar field (PCC) fixed problems with the earlier scalar–tensor version but is in conflict with the perihelion precession of Mercury and gravitational lensing by galaxies and clusters. By 1997, MOND had been successfully incorporated in a stratified relativistic theory [Sanders], but as this is a preferred frame theory it has problems of its own. Bekenstein introduced a tensor–vector–scalar model (TeVeS). This has two scalar fields and and vector field . The action is split into parts for gravity, scalars, vector and mass.
The gravity part is the same as in general relativity.
where
are constants, square brackets in indices represent anti-symmetrization, is a Lagrange multiplier (calculated elsewhere), and is a Lagrangian translated from flat spacetime onto the metric . Note that need not equal the observed gravitational constant . is an arbitrary function, and
is given as an example with the right asymptotic behaviour; note how it becomes undefined when
The Parametric post-Newtonian parameters of this theory are calculated in, which shows that all its parameters are equal to general relativity's, except for
both of which are expressed in geometric units.
Moffat's theories
J. W. Moffat developed a non-symmetric gravitation theory. This is not a metric theory. It was first claimed that it does not contain a black hole horizon, but Burko and Ori have found that nonsymmetric gravitational theory can contain black holes. Later, Moffat claimed that it has also been applied to explain rotation curves of galaxies without invoking "dark matter". Damour, Deser & McCarthy have criticised nonsymmetric gravitational theory, saying that it has unacceptable asymptotic behaviour.
The mathematics is not difficult but is intertwined so the following is only a brief sketch. Starting with a non-symmetric tensor , the Lagrangian density is split into
where is the same as for matter in general relativity.
where is a curvature term analogous to but not equal to the Ricci curvature in general relativity, and are cosmological constants, is the antisymmetric part of .
is a connection, and is a bit difficult to explain because it's defined recursively. However,
Haugan and Kauffmann used polarization measurements of the light emitted by galaxies to impose sharp constraints on the magnitude of some of nonsymmetric gravitational theory's parameters. They also used Hughes-Drever experiments to constrain the remaining degrees of freedom. Their constraint is eight orders of magnitude sharper than previous estimates.
Moffat's metric-skew-tensor-gravity (MSTG) theory is able to predict rotation curves for galaxies without either dark matter or MOND, and claims that it can also explain gravitational lensing of galaxy clusters without dark matter. It has a variable gravitational constant $G$, increasing to a final constant value about a million years after the big bang.
The theory seems to contain an asymmetric tensor field and a source current vector. The action is split into:
Both the gravity and mass terms match those of general relativity with cosmological constant. The skew field action and the skew field matter coupling are:
where
and is the Levi-Civita symbol. The skew field coupling is a Pauli coupling and is gauge invariant for any source current. The source current looks like a matter fermion field associated with baryon and lepton number.
Scalar–tensor–vector gravity
Moffat's scalar–tensor–vector gravity contains a tensor, a vector and three scalar fields, but the equations are quite straightforward. The action is split into terms for gravity, the vector field, the scalar fields and mass. The gravity term is the standard one, with the exception that the gravitational constant $G$ is moved inside the integral.
The potential function for the vector field is chosen to be:
where is a coupling constant. The functions assumed for the scalar potentials are not stated.
Infinite derivative gravity
In order to remove ghosts in the modified propagator, as well as to obtain asymptotic freedom, Biswas, Mazumdar and Siegel (2005) considered a string-inspired infinite set of higher derivative terms
where the form factor is the exponential of an entire function of the D'Alembertian operator. This avoids a black-hole singularity near the origin, while recovering the 1/r fall-off of the general relativity potential at large distances. Lousto and Mazzitelli (1997) found an exact solution to these theories representing a gravitational shock wave.
General relativity self-interaction (GRSI)
The General Relativity Self-interaction (GRSI) model is an attempt to explain astrophysical and cosmological observations without dark matter or dark energy by adding self-interaction terms when calculating the gravitational effects in general relativity, analogous to the self-interaction terms in quantum chromodynamics.
Additionally, the model explains the Tully–Fisher relation and the radial acceleration relation, observations that are currently challenging to understand within Lambda-CDM.
The model was proposed in a series of articles, the first dating from 2003. The basic point is that since, within general relativity, gravitational fields couple to each other, this can effectively increase the gravitational interaction between massive objects. The additional gravitational strength then avoids the need for dark matter. This field coupling is the origin of general relativity's non-linear behavior. It can be understood, in particle language, as gravitons interacting with each other (despite being massless) because they carry energy-momentum.
A natural implication of this model is its explanation of the accelerating expansion of the universe without resorting to dark energy. The increased binding energy within a galaxy requires, by energy conservation, a weakening of gravitational attraction outside said galaxy. This mimics the repulsion of dark energy.
The GRSI model is inspired from the Strong Nuclear Force, where a comparable phenomenon occurs. The interaction between gluons emitted by static or nearly static quarks dramatically strengthens quark-quark interaction, ultimately leading to quark confinement on the one hand (analogous to the need of stronger gravity to explain away dark matter) and the suppression of the Strong Nuclear Force outside hadrons (analogous to the repulsion of dark energy that balances gravitational attraction at large scales.) Two other parallel phenomena are the Tully-Fisher relation in galaxy dynamics that is analogous to the Regge trajectories emerging from the strong force. In both cases, the phenomenological formulas describing these observations are similar, albeit with different numerical factors.
These parallels are expected from a theoretical point of view: General Relativity and the Strong Interaction Lagrangians have the same form. The validity of the GRSI model then simply hinges on whether the coupling of the gravitational fields is large enough so that the same effects that occur in hadrons also occur in very massive systems. This coupling is effectively given by $\sqrt{GM/L}$, where $G$ is the gravitational constant, $M$ is the mass of the system, and $L$ is a characteristic length of the system. The claim of the GRSI proponents, based either on lattice calculations, a background-field model, or the coincidental phenomenologies in galactic or hadronic dynamics mentioned in the previous paragraph, is that this coupling is indeed sufficiently large for large systems such as galaxies.
List of topics studied in the Model
The main observations that appear to require dark matter and/or dark energy can be explained within this model. Namely,
The flat rotation curves of galaxies. These results, however, have been challenged.
The Cosmic Microwave Background anisotropies.
The fainter luminosities of distant supernovae and their consequence on the accelerating expansion of the universe.
The formation of the Universe's large structures.
The matter power spectrum.
The internal dynamics of galaxy clusters, including that of the Bullet Cluster.
Additionally, the model explains observations that are currently challenging to understand within Lambda-CDM:
The Tully-Fisher relation.
The radial acceleration relation.
The Hubble tension.
The cosmic coincidence, that is, the fact that at the present time the purported repulsion of dark energy nearly exactly cancels the action of gravity in the overall dynamics of the universe.
Finally, the model made a prediction that the amount of missing mass (i.e., the dark mass in dark matter approaches) in elliptical galaxies correlates with the ellipticity of the galaxies. This was tested and verified.
Testing of alternatives to general relativity
Any putative alternative to general relativity would need to meet a variety of tests for it to become accepted. For in-depth coverage of these tests, see Misner et al. Ch.39, Will Table 2.1, and Ni. Most such tests can be categorized as in the following subsections.
Self-consistency
Self-consistency among non-metric theories includes eliminating theories allowing tachyons, ghost poles and higher order poles, and those that have problems with behaviour at infinity. Among metric theories, self-consistency is best illustrated by describing several theories that fail this test. The classic example is the spin-two field theory of Fierz and Pauli; the field equations imply that gravitating bodies move in straight lines, whereas the equations of motion insist that gravity deflects bodies away from straight line motion. Yilmaz (1971) contains a tensor gravitational field used to construct a metric; it is mathematically inconsistent because the functional dependence of the metric on the tensor field is not well defined.
Completeness
To be complete, a theory of gravity must be capable of analysing the outcome of every experiment of interest. It must therefore mesh with electromagnetism and all other physics. For instance, any theory that cannot predict from first principles the movement of planets or the behaviour of atomic clocks is incomplete.
Many early theories are incomplete in that it is unclear whether the density $\rho$ used by the theory should be calculated from the stress–energy tensor $T$ as $\rho = T_{\mu\nu} u^\mu u^\nu$ or as $\rho = T_{\mu\nu} \delta^{\mu\nu}$, where $u^\mu$ is the four-velocity and $\delta^{\mu\nu}$ is the Kronecker delta. The theories of Thiry (1948) and Jordan are incomplete unless Jordan's parameter is set to −1, in which case they match the theory of Brans–Dicke and so are worthy of further consideration. Milne is incomplete because it makes no gravitational red-shift prediction. The theories of Whitrow and Morduch, Kustaanheimo, and Kustaanheimo and Nuotio are either incomplete or inconsistent. The incorporation of Maxwell's equations is incomplete unless it is assumed that they are imposed on the flat background space-time, and when that is done they are inconsistent, because they predict zero gravitational redshift when the wave version of light (Maxwell theory) is used, and nonzero redshift when the particle version (photon) is used. Another more obvious example is Newtonian gravity with Maxwell's equations; light as photons is deflected by gravitational fields (by half that of general relativity) but light as waves is not.
Classical tests
There are three "classical" tests (dating back to the 1910s or earlier) of the ability of gravity theories to handle relativistic effects; they are gravitational redshift, gravitational lensing (generally tested around the Sun), and anomalous perihelion advance of the planets. Each theory should reproduce the observed results in these areas, which have to date always aligned with the predictions of general relativity. In 1964, Irwin I. Shapiro found a fourth test, called the Shapiro delay. It is usually regarded as a "classical" test as well.
Agreement with Newtonian mechanics and special relativity
As an example of disagreement with Newtonian experiments, Birkhoff theory predicts relativistic effects fairly reliably but demands that sound waves travel at the speed of light. This was the consequence of an assumption made to simplify handling the collision of masses.
The Einstein equivalence principle
Einstein's Equivalence Principle has three components. The first is the uniqueness of free fall, also known as the Weak Equivalence Principle. This is satisfied if inertial mass is equal to gravitational mass. η is a parameter used to test the maximum allowable violation of the Weak Equivalence Principle. The first tests of the Weak Equivalence Principle were done by Eötvös before 1900 and limited η to less than 5 × 10⁻⁹. Modern tests have reduced that to less than 5 × 10⁻¹³. The second is Lorentz invariance. In the absence of gravitational effects the speed of light is constant. The test parameter for this is δ. The first tests of Lorentz invariance were done by Michelson and Morley before 1890 and limited δ to less than 5 × 10⁻³. Modern tests have reduced this to less than 1 × 10⁻²¹. The third is local position invariance, which includes spatial and temporal invariance. The outcome of any local non-gravitational experiment is independent of where and when it is performed. Spatial local position invariance is tested using gravitational redshift measurements. The test parameter for this is α. Upper limits on this found by Pound and Rebka in 1960 limited α to less than 0.1. Modern tests have reduced this to less than 1 × 10⁻⁴.
Schiff's conjecture states that any complete, self-consistent theory of gravity that embodies the Weak Equivalence Principle necessarily embodies Einstein's Equivalence Principle. This is likely to be true if the theory has full energy conservation. Metric theories satisfy the Einstein Equivalence Principle. Extremely few non-metric theories satisfy this. For example, the non-metric theory of Belinfante & Swihart is eliminated by the THεμ formalism for testing Einstein's Equivalence Principle. Gauge theory gravity is a notable exception, where the strong equivalence principle is essentially the minimal coupling of the gauge covariant derivative.
Parametric post-Newtonian formalism
See also Tests of general relativity, Misner et al. and Will for more information.
Work on developing a standardized rather than ad hoc set of tests for evaluating alternative gravitation models began with Eddington in 1922 and resulted in a standard set of Parametric post-Newtonian numbers in Nordtvedt and Will and Will and Nordtvedt. Each parameter measures a different aspect of how much a theory departs from Newtonian gravity. Because we are talking about deviation from Newtonian theory here, these only measure weak-field effects. The effects of strong gravitational fields are examined later.
These ten are:
is a measure of space curvature, being zero for Newtonian gravity and one for general relativity.
is a measure of nonlinearity in the addition of gravitational fields, one for general relativity.
is a check for preferred location effects.
measure the extent and nature of "preferred-frame effects". Any theory of gravity in which at least one of the three is nonzero is called a preferred-frame theory.
measure the extent and nature of breakdowns in global conservation laws. A theory of gravity possesses 4 conservation laws for energy-momentum and 6 for angular momentum only if all five are zero.
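A short sketch of how the first two parameters feed into observables: in the PPN framework the solar light deflection scales as $(1+\gamma)/2$ and the Mercury perihelion advance as $(2+2\gamma-\beta)/3$ relative to their general-relativity values, so a theory's $(\gamma, \beta)$ pair translates directly into testable numbers. The Brans–Dicke row uses the standard $\gamma = (1+\omega)/(2+\omega)$, $\beta = 1$ with an illustrative $\omega = 5$.

```python
# Relative to general relativity (gamma = beta = 1):
#   light deflection  scales as (1 + gamma) / 2
#   perihelion shift  scales as (2 + 2*gamma - beta) / 3
theories = {
    "general relativity": (1.0, 1.0),
    "Brans-Dicke, w = 5": ((1 + 5) / (2 + 5), 1.0),   # gamma = 6/7
}

for name, (gamma, beta) in theories.items():
    defl = (1 + gamma) / 2 * 1.75             # arcsec at the solar limb
    peri = (2 + 2 * gamma - beta) / 3 * 43.0  # arcsec per century for Mercury
    print(f"{name}: deflection {defl:.2f} arcsec, perihelion {peri:.1f} arcsec/century")
```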
Strong gravity and gravitational waves
Parametric post-Newtonian is only a measure of weak field effects. Strong gravity effects can be seen in compact objects such as white dwarfs, neutron stars, and black holes. Experimental tests such as the stability of white dwarfs, the spin-down rate of pulsars, the orbits of binary pulsars and the existence of a black hole horizon can be used as tests of alternatives to general relativity. General relativity predicts that gravitational waves travel at the speed of light. Many alternatives to general relativity say that gravitational waves travel faster than light, possibly breaking causality. After the multi-messenger detection of the GW170817 coalescence of neutron stars, where light and gravitational waves were measured to travel at the same speed to within about one part in 10¹⁵, many of those modified theories of gravity were excluded.
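The order of magnitude of that bound follows from a one-line estimate; the 1.7 s gamma-ray delay and the roughly 40 Mpc distance used below are round figures quoted from the published observations, not from this article.

```python
# Rough bound on the gravity/light speed difference from GW170817 / GRB 170817A.
Mpc = 3.086e22           # m
c = 2.998e8              # m/s

distance = 40 * Mpc      # ~40 Mpc to the host galaxy NGC 4993 (round figure)
delay = 1.7              # s between gravitational-wave and gamma-ray arrival

travel_time = distance / c
print(f"|v_gw - c| / c  <~  {delay / travel_time:.1e}")   # ~4e-16, of order one part in 10^15
```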
Cosmological tests
Useful cosmological scale tests are just beginning to become available. Given the limited astronomical data and the complexity of the theories, comparisons involve complex parameters. For example, Reyes et al., analyzed 70,205 luminous red galaxies with a cross-correlation involving galaxy velocity estimates and gravitational potentials estimated from lensing and yet results are still tentative.
For those theories that aim to replace dark matter, observations like the galaxy rotation curve, the Tully–Fisher relation, the faster rotation rate of dwarf galaxies, and the gravitational lensing due to galactic clusters act as constraints. For those theories that aim to replace inflation, the size of ripples in the spectrum of the cosmic microwave background radiation is the strictest test. For those theories that incorporate or aim to replace dark energy, the supernova brightness results and the age of the universe can be used as tests. Another test is the flatness of the universe. With general relativity, the combination of baryonic matter, dark matter and dark energy add up to make the universe exactly flat.
Results of testing theories
Parametric post-Newtonian parameters for a range of theories
(See Will and Ni for more details. Misner et al. gives a table for translating parameters from the notation of Ni to that of Will)
General Relativity is now more than 100 years old, during which one alternative theory of gravity after another has failed to agree with ever more accurate observations. One illustrative example is Parameterized post-Newtonian formalism. The following table lists Parametric post-Newtonian values for a large number of theories. If the value in a cell matches that in the column heading then the full formula is too complicated to include here.
† The theory is incomplete, and can take one of two values. The value closest to zero is listed.
All experimental tests agree with general relativity so far, and so Parametric post-Newtonian analysis immediately eliminates all the scalar field theories in the table. A full list of Parametric post-Newtonian parameters is not available for Whitehead, Deser-Laurent, Bollini–Giambiagi–Tiomino, but in these three cases , which is in strong conflict with general relativity and experimental results. In particular, these theories predict incorrect amplitudes for the Earth's tides. (A minor modification of Whitehead's theory avoids this problem. However, the modification predicts the Nordtvedt effect, which has been experimentally constrained.)
Theories that fail other tests
The stratified theories of Ni and of Lee, Lightman and Ni are non-starters because they all fail to explain the perihelion advance of Mercury. The bimetric theories of Lightman and Lee, Rosen, and Rastall all fail some of the tests associated with strong gravitational fields. The scalar–tensor theories include general relativity as a special case, but only agree with the Parametric post-Newtonian values of general relativity when they are equal to general relativity to within experimental error. As experimental tests get more accurate, the deviation of the scalar–tensor theories from general relativity is being squashed to zero. The same is true of vector–tensor theories: their deviation from general relativity is likewise being squashed to zero. Further, vector–tensor theories are semi-conservative; they have a nonzero value for one of the preferred-frame parameters, which can have a measurable effect on the Earth's tides. Non-metric theories, such as Belinfante and Swihart, usually fail to agree with experimental tests of Einstein's equivalence principle. And that leaves, as a likely valid alternative to general relativity, nothing except possibly Cartan. That was the situation until cosmological discoveries pushed the development of modern alternatives.
References
External links
Carroll, Sean. Video lecture discussion on the possibilities and constraints to revision of the General Theory of Relativity.
Theories of gravity
General relativity
Dielectric loss
In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle δ or the corresponding loss tangent tan δ. Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Electromagnetic field perspective
For time-varying electromagnetic fields, the electromagnetic energy is typically viewed as waves propagating either through free space, in a transmission line, in a microstrip line, or through a waveguide. Dielectrics are often used in all of these environments to mechanically support electrical conductors and keep them at a fixed separation, or to provide a barrier between different gas pressures yet still transmit electromagnetic power. Maxwell’s equations are solved for the electric and magnetic field components of the propagating waves that satisfy the boundary conditions of the specific environment's geometry. In such electromagnetic analyses, the parameters permittivity $\varepsilon$, permeability $\mu$, and conductivity $\sigma$ represent the properties of the media through which the waves propagate. The permittivity can have real and imaginary components (the latter excluding conductivity effects, see below) such that

$$\varepsilon = \varepsilon' - j \varepsilon'' \,.$$
If we assume that we have a wave function such that $\mathbf{E} = \mathbf{E}_0 e^{j \omega t}$,
then Maxwell's curl equation for the magnetic field can be written as:

$$\nabla \times \mathbf{H} = j \omega \varepsilon' \mathbf{E} + (\omega \varepsilon'' + \sigma) \mathbf{E} \,,$$
where $\varepsilon''$ is the imaginary component of permittivity attributed to bound charge and dipole relaxation phenomena, which gives rise to energy loss that is indistinguishable from the loss due to the free charge conduction that is quantified by $\sigma$. The component $\varepsilon'$ represents the familiar lossless permittivity, given by the product of the free space permittivity and the relative real/absolute permittivity, or $\varepsilon' = \varepsilon_0 \varepsilon'_r$.
Loss tangent
The loss tangent is then defined as the ratio (or angle in a complex plane) of the lossy reaction to the electric field in the curl equation to the lossless reaction:

$$\tan \delta = \frac{\omega \varepsilon'' + \sigma}{\omega \varepsilon'} \,.$$
Solution for the electric field of the electromagnetic wave is
where:
is the angular frequency of the wave, and
is the wavelength in the dielectric material.
For dielectrics with small loss, this square root can be approximated using only the zeroth and first order terms of a binomial expansion. Also, for small loss angles, $\tan \delta \approx \delta$.
Since power is electric field intensity squared, it turns out that the power decays with propagation distance as
where $P_0$ is the initial power.
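For a low-loss dielectric completely filling a TEM line, a commonly used approximation for the dielectric attenuation constant is $\alpha_d \approx \pi \sqrt{\varepsilon_r'} \tan\delta / \lambda_0$ in nepers per metre. The sketch below applies it with FR-4-like values, which are illustrative assumptions rather than data from this article.

```python
import math

c0 = 2.998e8             # m/s, speed of light in vacuum

def dielectric_attenuation(freq_hz, eps_r, tan_delta):
    """Approximate attenuation constant (Np/m) of a low-loss dielectric-filled TEM line."""
    lam0 = c0 / freq_hz                          # free-space wavelength
    return math.pi * math.sqrt(eps_r) * tan_delta / lam0

# Illustrative FR-4-like values at 1 GHz.
alpha = dielectric_attenuation(1e9, eps_r=4.4, tan_delta=0.02)
length = 0.10                                    # 10 cm of line
power_ratio = math.exp(-2 * alpha * length)      # field ~ e^-alpha*z, so power ~ e^-2*alpha*z
print(f"alpha = {alpha:.3f} Np/m")
print(f"power remaining after 10 cm: {power_ratio:.3f} "
      f"({-10 * math.log10(power_ratio):.2f} dB of dielectric loss)")
```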
There are often other contributions to power loss for electromagnetic waves that are not included in this expression, such as due to the wall currents of the conductors of a transmission line or waveguide. Also, a similar analysis could be applied to the magnetic permeability where

$$\mu = \mu' - j \mu'' \,,$$

with the subsequent definition of a magnetic loss tangent

$$\tan \delta_m = \frac{\mu''}{\mu'} \,.$$

The electric loss tangent can be similarly defined:

$$\tan \delta_e = \frac{\varepsilon''}{\varepsilon'} \,,$$

upon introduction of an effective dielectric conductivity (see relative permittivity#Lossy medium).
Discrete circuit perspective
A capacitor is a discrete electrical circuit component typically made of a dielectric placed between conductors. One lumped element model of a capacitor includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR), as shown in the figure below. The ESR represents losses in the capacitor. In a low-loss capacitor the ESR is very small (the conduction is high leading to a low resistivity), and in a lossy capacitor the ESR can be large. Note that the ESR is not simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity representing the loss due to both the dielectric's conduction electrons and the bound dipole relaxation phenomena mentioned above. In a given dielectric and manufacturing method, either the conduction electrons or the dipole relaxation typically dominates the loss. For the case of the conduction electrons being the dominant loss,

$$\mathrm{ESR} = \frac{\sigma}{\varepsilon' \omega^2 C} \,,$$

where C is the lossless capacitance.
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's loss tangent is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as shown in the adjacent diagram. The loss tangent is then
$$\tan \delta = \frac{\mathrm{ESR}}{|X_c|} = \omega C \cdot \mathrm{ESR} \,.$$
Since the same AC current flows through both the ESR and $X_c$, the loss tangent is also the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor. For this reason, a capacitor's loss tangent is sometimes stated as its dissipation factor, or the reciprocal of its quality factor Q, as follows:

$$\tan \delta = \mathrm{DF} = \frac{1}{Q} \,.$$
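A small numeric sketch of these relationships; the component values are illustrative, not taken from any datasheet.

```python
import math

# Capacitor modelled as an ideal C in series with its ESR:
#   tan(delta) = ESR / |Xc| = omega * C * ESR,  DF = tan(delta),  Q = 1 / DF
C = 100e-9               # 100 nF (illustrative)
ESR = 0.10               # ohm (illustrative)
f = 100e3                # Hz

omega = 2 * math.pi * f
Xc = 1.0 / (omega * C)
tan_delta = ESR / Xc     # equivalently omega * C * ESR

print(f"|Xc| = {Xc:.2f} ohm")
print(f"tan(delta) = DF = {tan_delta:.4f}")
print(f"Q = {1 / tan_delta:.0f}")
print(f"loss angle = {math.degrees(math.atan(tan_delta)):.3f} degrees")
```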
References
Electromagnetism
Electrical engineering
External links
Loss in dielectrics, frequency dependence
On Growth and Form
On Growth and Form is a book by the Scottish mathematical biologist D'Arcy Wentworth Thompson (1860–1948). The book is long – 793 pages in the first edition of 1917, 1116 pages in the second edition of 1942.
The book covers many topics including the effects of scale on the shape of animals and plants, large ones necessarily being relatively thick in shape; the effects of surface tension in shaping soap films and similar structures such as cells; the logarithmic spiral as seen in mollusc shells and ruminant horns; the arrangement of leaves and other plant parts (phyllotaxis); and Thompson's own method of transformations, showing the changes in shape of animal skulls and other structures on a Cartesian grid.
The work is widely admired by biologists, anthropologists and architects among others, but is often not read by people who cite it. Peter Medawar explains this as being because it clearly pioneered the use of mathematics in biology, and helped to defeat mystical ideas of vitalism; but that the book is weakened by Thompson's failure to understand the role of evolution and evolutionary history in shaping living structures. Philip Ball and Michael Ruse, on the other hand, suspect that while Thompson argued for physical mechanisms, his rejection of natural selection bordered on vitalism.
Overview
D'Arcy Wentworth Thompson was a Scottish biologist and pioneer of mathematical biology. His most famous work, On Growth and Form was written in Dundee, mostly in 1915, but publication was put off until 1917 because of the delays of wartime and Thompson's many late alterations to the text. The central theme of the book is that biologists of its author's day overemphasized evolution as the fundamental determinant of the form and structure of living organisms, and underemphasized the roles of physical laws and mechanics. At a time when vitalism was still being considered as a biological theory, he advocated structuralism as an alternative to natural selection in governing the form of species, with the smallest hint of vitalism as the unseen driving force.
Thompson had previously criticized Darwinism in his paper Some Difficulties of Darwinism. On Growth and Form explained in detail why he believed Darwinism to be an inadequate explanation for the origin of new species. He did not reject natural selection, but regarded it as secondary to physical influences on biological form.
Using a mass of examples, Thompson pointed out correlations between biological forms and mechanical phenomena. He showed the similarity in the forms of jellyfish and the forms of drops of liquid falling into viscous fluid, and between the internal supporting structures in the hollow bones of birds and well-known engineering truss designs. He described phyllotaxis (numerical relationships between spiral structures in plants) and its relationship to the Fibonacci sequence.
Perhaps the most famous part of the book is Chapter 17, "The Comparison of Related Forms," where Thompson explored the degree to which differences in the forms of related animals could be described, in work inspired by the German engraver Albrecht Dürer (1471–1528), by mathematical transformations.
The book is descriptive rather than experimental science: Thompson did not articulate his insights in the form of hypotheses that can be tested. He was aware of this, saying that "This book of mine has little need of preface, for indeed it is 'all preface' from beginning to end."
Editions
The first edition appeared in 1917 in a single volume of 793 pages published by Cambridge University Press. A second edition, enlarged to 1116 pages, was published in two volumes in 1942. Thompson wrote in the preface to the 1942 edition that he had written "this book in wartime, and its revision has employed me during another war. It gave me solace and occupation, when service was debarred me by my years. Few are left of the friends who helped me write it." An edition of 346 pages was abridged by John Tyler Bonner, and is widely published under the same title. The book, often in the abridged edition, has been reprinted more than 40 times, and has been translated into Chinese, French, German, Greek, Italian, and Spanish.
Contents
The contents of the chapters in the first edition are summarized below. All but Chapter 11 have the same titles in the second edition, but many are longer, as indicated by the page numbering of the start of each chapter. Bonner's abridgment shortened all the chapters, and removed some completely, again as indicated at the start of each chapter's entry below.
1. Introductory
(1st edition p. 1 – 2nd edition p. 1 – Bonner p. 1)
Thompson names the progress of chemistry towards Kant's goal of a mathematical science able to explain reactions by molecular mechanics, and points out that zoology has been slow to look to mathematics. He agrees that zoologists rightly seek for reasons in animals' adaptations, and reminds readers of the related but far older philosophical search for teleology, explanation by some Aristotelian final cause. His analysis of "growth and form" will try to show how these can be explained with ordinary physical laws.
2. On Magnitude
(1st p. 16 – 2nd p. 22 – Bonner p. 15)
Thompson begins by showing that an animal's surface and volume (or weight) increase with the square and cube of its length, respectively, and deducing simple rules for how bodies will change with size. He shows in a few short equations that the speed of a fish or ship rises with the square root of its length. He then derives the slightly more complex scaling laws for birds or aircraft in flight. He shows that an organism thousands of times smaller than a bacterium is essentially impossible.
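The proportionalities behind these arguments can be made concrete with a toy calculation; the sketch below illustrates only the scaling laws themselves (surface $\propto L^2$, weight $\propto L^3$, Froude-type speed $\propto \sqrt{L}$), not Thompson's own numerical examples.

```python
# Toy illustration of the scaling arguments in "On Magnitude".
# For geometrically similar bodies of length L:
#   surface ~ L**2,  volume and weight ~ L**3,  swimming/hull speed ~ L**0.5
for scale in (1, 2, 4, 10):
    print(f"length x{scale:>2}: surface x{scale**2:>3}, "
          f"weight x{scale**3:>4}, speed x{scale**0.5:.2f}")
```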
3. The Rate of Growth
(1st p. 50 – 2nd p. 78 – Bonner removed)
Thompson points out that all changes of form are phenomena of growth. He analyses growth curves for man, noting rapid growth before birth and again in the teens; and then curves for other animals. In plants, growth is often in pulses, as in Spirogyra; it peaks at a specific temperature, and below that temperature it roughly doubles for every 10 degrees Celsius rise. Tree growth varies cyclically with season (less strongly in evergreens), preserving a record of historic climates. Tadpole tails regenerate rapidly at first, slowing exponentially.
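The "roughly doubles every 10 degrees" observation corresponds to the familiar Q10 temperature coefficient with Q10 ≈ 2; the sketch below is a toy illustration of that rule of thumb (valid only below the optimum temperature Thompson mentions), not a fit to his data.

```python
# Q10 rule of thumb: rate(T) = rate(T_ref) * Q10 ** ((T - T_ref) / 10)
def growth_rate(rate_ref, T, T_ref=10.0, Q10=2.0):
    return rate_ref * Q10 ** ((T - T_ref) / 10.0)

for T in (10, 15, 20, 25, 30):
    print(f"{T:2d} C: relative rate {growth_rate(1.0, T):.2f}")
```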
4. On the Internal Form and Structure of the Cell
(1st p. 156 – 2nd p. 286 – Bonner removed)
Thompson argues for the need to study cells with physical methods, as morphology alone had little explanatory value. He notes that in mitosis the dividing cells look like iron filings between the poles of a magnet, in other words like a force field.
5. The Forms of Cells
(1st p. 201 – 2nd p. 346 – Bonner p. 49)
He considers the forces such as surface tension acting on cells, and Plateau's experiments on soap films. He illustrates the way a splash breaks into droplets and compares this to the shapes of Campanularian zoophytes (Hydrozoa). He looks at the flask-like shapes of single-celled organisms such as species of Vorticella, considering teleological and physical explanations of their having minimal areas; and at the hanging drop shapes of some Foraminifera such as Lagena. He argues that the cells of trypanosomes are similarly shaped by surface tension.
6. A Note on Adsorption
(1st p. 277 – 2nd p. 444 – Bonner removed)
Thompson notes that surface tension in living cells is reduced by substances resembling oils and soaps; where the concentrations of these vary locally, the shapes of cells are affected. In the green alga Pleurocarpus (Zygnematales), potassium is concentrated near growing points in the cell.
7. The Forms of Tissues, or Cell-aggregates
(1st p. 293 – 2nd p. 465 – Bonner p. 88)
Thompson observes that in multicellular organisms, cells influence each other's shapes with triangles of forces. He analyses parenchyma and the cells in a frog's egg as soap films, and considers the symmetries of bubbles meeting at points and edges. He compares the shapes of living and fossil corals such as Cyathophyllum and Comoseris, and the hexagonal structure of honeycomb, to such soap bubble structures.
8. The same (continued)
(1st p. 346 – 2nd p. 566 – Bonner merged with previous chapter)
Thompson considers the laws governing the shapes of cells, at least in simple cases such as the fine hairs (a cell thick) in the rhizoids of mosses. He analyses the geometry of cells in a frog's egg when it has divided into 4, 8 and even 64 cells. He shows that uniform growth can lead to unequal cell sizes, and argues that the way cells divide is driven by the shape of the dividing structure (and not vice versa).
9. On Concretions, Spicules, and Spicular Skeletons
(1st p. 411 – 2nd p. 645 – Bonner p. 132)
Thompson considers the skeletal structures of diatoms, radiolarians, foraminifera and sponges, many of which contain hard spicules with geometric shapes. He notes that these structures form outside living cells, so that physical forces must be involved.
10. A Parenthetic Note on Geodetics
(1st p. 488 – 2nd p. 741 – Bonner removed)
Thompson applies the use of the geodetic line, "the shortest distance between two points on the surface of a solid of revolution", to the spiral thickening of plant cell walls and other cases.
11. The Logarithmic Spiral ['The Equiangular Spiral' in 2nd Ed.]
(1st p. 493 – 2nd p. 748 – Bonner p. 172)
Thompson observes that there are many spirals in nature, from the horns of ruminants to the shells of molluscs; other spirals are found among the florets of the sunflower. He notes that the mathematics of these are similar but the biology differs. He describes the spiral of Archimedes before moving on to the logarithmic spiral, which has the property of never changing its shape: it is equiangular and is continually self-similar. Shells as diverse as Haliotis, Triton, Terebra and Nautilus (illustrated with a halved shell and a radiograph) have this property; different shapes are generated by sweeping out curves (or arbitrary shapes) by rotation, and if desired also by moving downwards. Thompson analyses both living molluscs and fossils such as ammonites.
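The self-similarity of the equiangular spiral can be illustrated with a short sketch: for the curve r = a·e^(bθ), moving one full turn outward always multiplies the radius by the same constant factor, so scaling the whole curve is equivalent to rotating it and the shape never changes (a and b below are arbitrary illustrative parameters):

```python
import math

def log_spiral_point(theta, a=1.0, b=0.17):
    """Point on the equiangular spiral r = a * exp(b * theta).

    a and b are arbitrary illustrative parameters; b controls how tightly
    the spiral winds (b -> 0 approaches a circle).
    """
    r = a * math.exp(b * theta)
    return r * math.cos(theta), r * math.sin(theta)

# Self-similarity check: the radius one full turn later is always exp(2*pi*b)
# times the current radius, independent of where we start on the curve.
a, b = 1.0, 0.17
for theta in (0.0, 1.0, 2.0):
    r_now = a * math.exp(b * theta)
    r_one_turn_later = a * math.exp(b * (theta + 2 * math.pi))
    print(round(r_one_turn_later / r_now, 6))      # constant, about 2.91

print(log_spiral_point(0.0), log_spiral_point(2 * math.pi))  # same direction, scaled
```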
12. The Spiral Shells of the Foraminifera
(1st p. 587 – 2nd p. 850 – Bonner merged with previous chapter)
Thompson analyses diverse forms of minute spiral shells of the foraminifera, many of which are logarithmic, others irregular, in a manner similar to the previous chapter.
13. The Shapes of Horns, and of Teeth or Tusks: with A Note on Torsion
(1st p. 612 – 2nd p. 874 – Bonner p. 202)
Thompson considers the three types of horn that occur in quadrupeds: the keratin horn of the rhinoceros; the paired horns of sheep or goats; and the bony antlers of deer.
In a note on torsion, Thompson mentions Charles Darwin's treatment of climbing plants which often spiral around a support, noting that Darwin also observed that the spiralling stems were themselves twisted. Thompson disagrees with Darwin's teleological explanation, that the twisting makes the stems stiffer in the same way as the twisting of a rope; Thompson's view is that the mechanical adhesion of the climbing stem to the support sets up a system of forces which act as a 'couple' offset from the centre of the stem, making it twist.
14. On Leaf-arrangement, or Phyllotaxis
(1st p. 635 – 2nd p. 912 – Bonner removed)
Thompson analyses phyllotaxis, the arrangement of plant parts around an axis. He notes that such parts include leaves around a stem; fir cones made of scales; sunflower florets forming an elaborate crisscrossing pattern of different spirals (parastichies). He recognises their beauty but dismisses any mystical notions; instead he remarks that
The numbers that result from such spiral arrangements are the Fibonacci sequence of ratios 1/2, 2/3, 3/5 ... converging on 0.61803..., the golden ratio, which is (√5 − 1)/2.
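A quick check of this convergence, using nothing beyond the standard Fibonacci recurrence:

```python
import math

a, b = 1, 1
for _ in range(20):
    a, b = b, a + b            # Fibonacci recurrence
print(a / b)                   # ratio of consecutive terms: 0.6180...
print((math.sqrt(5) - 1) / 2)  # the golden ratio it converges to
```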
15. On the Shapes of Eggs, and of certain other Hollow Structures
(1st p. 652 – 2nd p. 934 – Bonner removed)
Eggs are what Thompson calls simple solids of revolution, varying from the nearly spherical eggs of owls, through more typical ovoid eggs like those of chickens, to the markedly pointed eggs of cliff-nesting birds like the guillemot. He shows that the shape of the egg favours its movement along the oviduct, a gentle pressure on the trailing end sufficing to push it forwards. Similarly, sea urchin shells have teardrop shapes, such as would be taken up by a flexible bag of liquid.
16. On Form and Mechanical Efficiency
(1st p. 670 – 2nd p. 958 – Bonner p. 221)
Thompson criticizes talk of adaptation by coloration in animals for presumed purposes of crypsis, warning and mimicry (referring readers to E. B. Poulton's The Colours of Animals, and more sceptically to Abbott Thayer's Concealing-coloration in the Animal Kingdom). He considers the mechanical engineering of bone to be a far more definite case. He compares the strength of bone and wood to materials such as steel and cast iron; illustrates the "cancellous" structure of the bone of the human femur with thin trabeculae which formed "nothing more nor less than a diagram of the lines of stress ... in the loaded structure", and compares the femur to the head of a building crane. He similarly compares the cantilevered backbone of a quadruped or dinosaur to the girder structure of the Forth Railway Bridge.
17. On the Theory of Transformations, or the Comparison of Related Forms
(1st p. 719 – 2nd p. 1026 – Bonner p. 268)
Inspired by the work of Albrecht Dürer, Thompson explores how the forms of organisms and their parts, whether leaves, the bones of the foot, human faces or the body shapes of copepods, crabs or fish, can be explained by geometrical transformations. For example:
In similar style he transforms the shape of the carapace of the crab Geryon variously to that of Corystes by a simple shear mapping, and to Scyramathia, Paralomis, Lupa, and Chorinus (Pisinae) by stretching the top or bottom of the grid sideways. The same process changes Crocodilus porosus to Crocodilus americanus and Notosuchus terrestris; relates the hip-bones of fossil reptiles and birds such as Archaeopteryx and Apatornis; the skulls of various fossil horses, and even the skulls of a horse and a rabbit. A human skull is stretched into those of the chimpanzee and baboon, and with "the mode of deformation .. on different lines" (page 773), of a dog.
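A minimal sketch of the kind of coordinate transformation Thompson uses, assuming a simple shear applied to an invented outline rather than a digitized carapace:

```python
def shear(points, k):
    """Apply the simple shear (x, y) -> (x + k * y, y) to a list of 2D points.

    k is an arbitrary shear coefficient; the input outline is invented for
    illustration, not taken from Thompson's figures.
    """
    return [(x + k * y, y) for (x, y) in points]

outline = [(0, 0), (1, 0), (1, 1), (0, 1)]   # a unit square standing in for one grid cell
print(shear(outline, 0.5))
# [(0.0, 0), (1.0, 0), (1.5, 1), (0.5, 1)]: the grid tilts, carrying any outline
# drawn on it into the "related form".
```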
Epilogue
(1st p. 778 – 2nd p. 1093 – Bonner p. 326)
In the brief epilogue, Thompson writes that he will have succeeded "if I have been able to shew [the morphologist] that a certain mathematical aspect of morphology ... is ... complementary to his descriptive task, and helpful, nay essential, to his proper study and comprehension of Form." More lyrically, he writes that "For the harmony of the world is made manifest in Form and Number, and the heart and soul and all the poetry of Natural Philosophy are embodied in the concept of mathematical beauty" and quotes Isaiah 40:12 on measuring out the waters and heavens and the dust of the earth. He ends with a paragraph praising the French entomologist Jean-Henri Fabre who "being of the same blood and marrow with Plato and Pythagoras, saw in Number 'la clef de voute' [the key to the vault (of the universe)] and found in it 'le comment et le pourquoi des choses' [the how and the why of things]".
Reception
Contemporary
"J. P. McM[urrich]", reviewing the book in Science in 1917, wrote that "the book is one of the strongest documents in support of the mechanistic view of life that has yet been put forth", contrasting this with "vitalism". The reviewer was interested in the "discussion of the physical factors determining the size of organisms, especially interesting being the consideration of the conditions which may determine the minimum size".
J. W. Buchanan, reviewing the second edition in Physiological Zoology in 1943, described it as "an imposing extension of his earlier attempt to formulate a geometry of Growth and Form" and "beautifully written", but warned that "the reading will not be easy" and that "A vast store of literature has here been assembled and assimilated". Buchanan summarizes the book, and notes that Chapter 17 "seems to the reviewer to contain the essence of the long and more or less leisurely thesis... The chapter is devoted to comparison of related forms, largely by the method of co-ordinates. Fundamental differences in these forms are thus revealed", and Buchanan concludes that the large "gaps" indicate that Darwin's endless series of continuous variations is not substantiated. But he does have some criticisms: Thompson should have referenced the effects of hormones on growth; and the relation of molecular configuration and form; genetics is barely mentioned, and experimental embryology and regeneration [despite Thompson's analysis of the latter] are overlooked. The mathematics used consists of statistics and geometry, while thermodynamics is "largely absent".
Edmund Mayer, reviewing the second edition in The Anatomical Record in 1943, noted that the "scope of the book and the general approach to the problems dealt with have remained unchanged, but considerable additions have been made and large parts have been recast". He was impressed at the extent to which Thompson had kept up with developments in many sciences, though he thought the mentions of quantum theory and Heisenberg uncertainty unwise.
George C. Williams, reviewing the 1942 edition and Bonner's abridged edition for the Quarterly Review of Biology (of which he was the editor), writes that the book is "a work widely praised, but seldom used. It contains neither original insights that have formed a basis for later advances nor instructive fallacies that have stimulated fruitful attack. This seeming paradox is brilliantly discussed by P. B. Medawar [in] Pluto's Republic." Williams then attempts a "gross simplification" of Medawar's evaluation:
Modern
The architects Philip Beesley and Sarah Bonnemaison write that Thompson's book at once became a classic "for its exploration of natural geometries in the dynamics of growth and physical processes." They note the "extraordinary optimism" in the book, its vision of the world as "a symphony of harmonious forces", and its huge range, including:
Beesley and Bonnemaison observe that Thompson saw form "as a product of dynamic forces .. shaped by flows of energy and stages of growth." They praise his "eloquent writing and exquisite illustrations" which have provided inspiration for artists and architects as well as scientists.
The statistician Cosma Shalizi writes that the book "has haunted all discussion of these matters ever since."
Shalizi states that Thompson's goal is to show that biology follows inevitably from physics, and to a degree also from chemistry. He argues that when Thompson says "the form of an object is a 'diagram of forces'", Thompson means that we can infer from an object the physical forces that act (or once acted) upon it. Shalizi calls Thompson's account of the physics of morphogenesis
Shalizi notes Thompson's simplicity, explaining the processes of life "using little that a second-year physics undergrad wouldn't know. (Thompson's anti-reductionist admirers seldom put it this way.)". He notes that Thompson deliberately avoided invoking natural selection as an explanation, and left history, whether of species or of an individual's life, out of his account. He quotes Thompson's "A snow-crystal is the same today as when the first snows fell": adding "so, too, the basic forces acting upon organisms", and comments that we have forgotten other early twentieth century scientists who scorned evolution. In contrast, he argues,
The anthropologist Barry Bogin writes that Thompson's book
Bogin observes that Thompson originated the use of transformational grids to measure growth in two dimensions, but that without modern computers the method was tedious to apply and was not often used. Even so, the book stimulated and lent intellectual validity to the new field of growth and development research.
Peter Coates recalls that
Coates argues however that the book goes far beyond expressing knowledge elegantly and influentially, in a form "that can be read for pleasure by scientists and nonscientists"; it is in his view
The science writer Philip Ball observes that
Ball quotes the 2nd Edition's epigraph by the statistician Karl Pearson: "I believe the day must come when the biologist will—without being a mathematician—not hesitate to use mathematical analysis when he requires it." Ball argues that Thompson "presents mathematical principles as a shaping agency that may supersede natural selection, showing how the structures of the living world often echo those in inorganic nature", and notes his "frustration at the 'Just So' explanations of morphology offered by Darwinians." Instead, Ball argues, Thompson elaborates on how not heredity but physical forces govern biological form. Ball suggests that "The book's central motif is the logarithmic spiral", evidence in Thompson's eyes of the universality of form and the reduction of many phenomena to a few principles of mathematics.
The philosopher of biology Michael Ruse wrote that Thompson "had little time for natural selection." Instead, Thompson emphasised "the formal aspects of organisms", trying to make a case for self-organization through normal physical and chemical processes. Ruse notes that, following Aristotle, Thompson used as an example the morphology of jellyfish, which he explained entirely mechanically with the physics of a heavy liquid falling through a lighter liquid, avoiding natural selection as an explanation. Ruse is not sure whether Thompson believed he was actually breaking with "mechanism", in other words adopting a vitalist (ghost in the machine) view of the world. In Ruse's opinion, Thompson can be interpreted as arguing that "we can have completely mechanical explanations of the living world" – with the important proviso that Thompson apparently felt there was no need for natural selection. Ruse at once adds that "people like Darwin and Dawkins undoubtedly would disagree"; they would insist that
Influence
For his revised On Growth and Form, Thompson was awarded the Daniel Giraud Elliot Medal from the United States National Academy of Sciences in 1942.
On Growth and Form has inspired thinkers including the biologists Julian Huxley and Conrad Hal Waddington, the mathematician Alan Turing and the anthropologist Claude Lévi-Strauss. The book has powerfully influenced architecture and has long been a set text on architecture courses.
On Growth and Form has inspired artists including Richard Hamilton, Eduardo Paolozzi, and Ben Nicholson. In 2011 the University of Dundee was awarded a £100,000 grant by The Art Fund to build a collection of art inspired by his ideas and collections, much of which is displayed in the D'Arcy Thompson Zoology Museum in Dundee.
To celebrate the centenary of On Growth and Form, numerous events were staged around the world, including in New York, Amsterdam, Singapore, London, Edinburgh, St Andrews and Dundee, where the book was written. The On Growth and Form 100 website was set up in late 2016 to map all of this activity.
See also
Evolutionary developmental biology
Kunstformen der Natur
References
Bibliography
Thompson, D. W., 1917. On Growth and Form. Cambridge University Press.
1945 reprint at Internet Archive
1961 abridged edition at Google Books
External links
D'Arcy Wentworth Thompson
D'Arcy Thompson Zoology Museum
Using a computer to visualise change in organisms
D'Arcy Thompson 150th anniversary homepage
1917 non-fiction books
Mathematical and theoretical biology
Entity component system

Entity–component–system (ECS) is a software architectural pattern mostly used in video game development for the representation of game world objects. An ECS comprises entities composed from components of data, with systems which operate on the components.
ECS follows the principle of composition over inheritance, meaning that every entity is defined not by a type hierarchy, but by the components that are associated with it. Systems act globally over all entities which have the required components.
Especially when written "Entity Component System", due to an ambiguity in the English language, a common interpretation of the name is that an ECS is a system comprising entities and components. For example, in his 2002 talk at GDC, Scott Bilas compares a C++ object system and his new custom component system. This is consistent with a traditional use of the term system in general systems engineering, with the Common Lisp Object System and type system as examples.
Characteristics
ECS combines orthogonal, well-established ideas in general computer science and programming language theory. For example, components can be seen as a mixin idiom in various programming languages. Components are a specialized case under the general delegation approach and meta-object protocol. That is, any complete component object system can be expressed with the templates and empathy model within The Orlando Treaty vision of object-oriented programming.
Entity: An entity represents a general-purpose object. In a game engine context, for example, every coarse game object is represented as an entity. Usually, it only consists of a unique id. Implementations typically use a plain integer for this.
Component: A component characterizes an entity as possessing a particular aspect, and holds the data needed to model that aspect. For example, every game object that can take damage might have a Health component associated with its entity. Implementations typically use structs, classes, or associative arrays.
System: A system is a process which acts on all entities with the desired components. For example, a physics system may query for entities having mass, velocity and position components, and iterate over the results doing physics calculations on the set of components for each entity.
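As a concrete illustration of these three terms, the following is a minimal sketch in Python rather than any particular engine's API; the component and system names (Position, Velocity, physics_system) are invented for the example. Entities are plain integer ids, components are per-type dictionaries keyed by entity id, and a system iterates over every entity that has the components it requires:

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

class World:
    """Toy component store: one dict per component type, keyed by entity id."""
    def __init__(self):
        self._ids = count()
        self.positions = {}    # entity id -> Position
        self.velocities = {}   # entity id -> Velocity

    def create_entity(self):
        return next(self._ids)             # an entity is just a unique integer id

def physics_system(world, dt):
    """Acts on every entity that has both a Position and a Velocity component."""
    for entity, vel in world.velocities.items():
        pos = world.positions.get(entity)
        if pos is None:
            continue                       # entity lacks a required component; skip it
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt

world = World()
player = world.create_entity()
world.positions[player] = Position(0.0, 0.0)
world.velocities[player] = Velocity(1.0, 2.0)
physics_system(world, dt=0.5)
print(world.positions[player])             # Position(x=0.5, y=1.0)
```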
The behavior of an entity can be changed at runtime by systems that add, remove or modify components. This eliminates the ambiguity problems of deep and wide inheritance hierarchies often found in Object Oriented Programming techniques that are difficult to understand, maintain, and extend. Common ECS approaches are highly compatible with, and are often combined with, data-oriented design techniques. Data for all instances of a component are contiguously stored together in physical memory, enabling efficient memory access for systems which operate over many entities.
History
In 1998, Thief: The Dark Project pioneered an ECS. The engine was later used for its sequel, as well as System Shock 2.
In 2002, Scott Bilas of Gas Powered Games (Dungeon Siege) gave a seminal talk on ECS. This inspired numerous later well-known implementations.
In early January 2007, Mick West, who worked on the Tony Hawk series, shared his experiences of the process of ECS adoption at Neversoft.
Also in 2007, the team working on Operation Flashpoint: Dragon Rising experimented with ECS designs, including those inspired by Bilas/Dungeon Siege, and Adam Martin later wrote a detailed account of ECS design, including definitions of core terminology and concepts. In particular, Martin's work popularized the ideas of systems as a first-class element, entities as identifiers, components as raw data, and code stored in systems, not in components or entities.
In 2015, Apple Inc. introduced GameplayKit, an API framework for iOS, macOS and tvOS game development that includes an implementation of ECS.
In August 2018 Sander Mertens created the popular flecs ECS framework.
In October 2018 the company Unity released its megacity demo that utilized a tech stack built on an ECS. Unity's ECS runs on a powerful optimised architecture known as DOTS which "empowers creators to scale processing in a highly performant manner".
Variations
The data layout of different ECS implementations can differ, as can the definition of components, how they relate to entities, and how systems access entities' components.
Martin's ECS
Adam Martin defines in his blog series what he considers an Entity–Component–System.
An entity only consists of an ID for accessing components. It is a common practice to use a unique ID for each entity. This is not a requirement, but it has several advantages:
The entity can be referred to using the ID instead of a pointer. This is more robust, as it allows the entity to be destroyed without leaving dangling pointers.
It helps for saving state externally. When the state is loaded again, there is no need for pointers to be reconstructed.
Data can be shuffled around in memory as needed.
Entity ids can be used when communicating over a network to uniquely identify the entity.
Some of these advantages can also be achieved using smart pointers.
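One common way to make such ids robust in practice, which is an implementation convention rather than part of Martin's definition, is to pair each slot index with a generation counter, so that a stale id referring to a destroyed entity can be detected instead of dangling. A minimal sketch:

```python
class EntityAllocator:
    """Illustrative generational ids: (index, generation) tuples.

    This is a common implementation trick, not something prescribed by Martin's
    articles; it shows how id-based references stay safe after an entity is destroyed.
    """
    def __init__(self):
        self.generations = []      # generation counter per slot
        self.free = []             # recycled slot indices

    def create(self):
        if self.free:
            index = self.free.pop()
        else:
            index = len(self.generations)
            self.generations.append(0)
        return (index, self.generations[index])

    def destroy(self, entity):
        index, gen = entity
        if self.generations[index] == gen:
            self.generations[index] += 1   # invalidate all old ids for this slot
            self.free.append(index)

    def is_alive(self, entity):
        index, gen = entity
        return self.generations[index] == gen

alloc = EntityAllocator()
e = alloc.create()
alloc.destroy(e)
print(alloc.is_alive(e))       # False: the stale id is detected, no dangling pointer
print(alloc.create())          # (0, 1): the slot is reused under a new generation
```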
Components have no game code (behavior) inside them. The components don't have to be located physically together with the entity, but should be easy to find and access using the entity.
"Each System runs continuously (as though each System had its own private thread) and performs global actions on every Entity that possesses a Component or Components that match that System's query."
The Unity game engine
Unity's layout has tables each with columns of components. In this system an entity type is based on the components it holds. For every entity type there is a table (called an archetype) holding columns of components that match the components used in the entity. To access a particular entity one must find the correct archetype (table) and index into each column to get each corresponding component for that entity.
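A minimal sketch of the archetype idea (illustrative only, not Unity's actual API): entities sharing the same component set are stored as rows of one table whose columns hold each component contiguously, and looking up an entity means finding its archetype and indexing its row:

```python
class Archetype:
    """One table per distinct component set; each component type is a column (list)."""
    def __init__(self, component_types):
        self.component_types = frozenset(component_types)
        self.entities = []                                # row -> entity id
        self.columns = {t: [] for t in component_types}   # component type -> column

    def insert(self, entity, components):
        row = len(self.entities)
        self.entities.append(entity)
        for t, value in components.items():
            self.columns[t].append(value)
        return row

# Entities holding {"Position", "Velocity"} share one archetype, so a system
# can walk the two columns together, in order.
moving = Archetype({"Position", "Velocity"})
row_a = moving.insert(entity=1, components={"Position": (0, 0), "Velocity": (1, 0)})
row_b = moving.insert(entity=2, components={"Position": (5, 5), "Velocity": (0, 1)})

# Finding an entity's component = locate its archetype, then index its row.
print(moving.columns["Position"][row_b])   # (5, 5)
```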
Apparatus ECS
Apparatus is a third-party ECS implementation for Unreal Engine that has introduced some additional features to the common ECS paradigm. One of those features is support for a type hierarchy among components. Each component can have a base component type (or base class), much as in OOP. A system can then query by the base class and get all of its descendants matched in the resulting entity selection. This can be very useful for implementing common logic across a set of different components, and it adds an additional dimension to the paradigm.
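The general idea can be illustrated independently of Unreal or of Apparatus's actual API: if component types form a class hierarchy, a query against a base type also matches entities holding any derived component. A hypothetical sketch, with invented component names:

```python
class Damage:                 # base component type
    def __init__(self, amount):
        self.amount = amount

class FireDamage(Damage):     # descendant component types
    pass

class PoisonDamage(Damage):
    pass

# entity id -> list of attached component instances (illustrative layout only)
components = {
    1: [FireDamage(5)],
    2: [PoisonDamage(2)],
    3: [],                    # no damage component at all
}

def query(base_type):
    """Yield (entity, component) for every component matching base_type or a subclass."""
    for entity, comps in components.items():
        for comp in comps:
            if isinstance(comp, base_type):
                yield entity, comp

for entity, dmg in query(Damage):
    print(entity, type(dmg).__name__, dmg.amount)   # matches entities 1 and 2
```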
FLECS
Flecs is a fast and lightweight ECS implementation for C & C++ that lets you build games and simulations with millions of entities.
Common patterns in ECS use
The normal way to transmit data between systems is to store the data in components, and then have each system access the component sequentially. For example, the position of an object can be updated regularly. This position is then used by other systems. If there are a lot of different infrequent events, a lot of flags will be needed in one or more components. Systems will then have to monitor these flags every iteration, which can become inefficient. A solution could be to use the observer pattern. All systems that depend on an event subscribe to it. The action from the event will thus only be executed once, when it happens, and no polling is needed.
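A minimal sketch of this event-driven alternative (the event name and handlers are invented for illustration): systems subscribe once, and the handler runs only when the event is actually published, with no per-frame flag polling:

```python
from collections import defaultdict

class EventBus:
    """Minimal observer pattern: subscribers are called only when an event is published."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, **payload):
        for handler in self._subscribers[event_type]:
            handler(**payload)

bus = EventBus()

# A hypothetical audio system reacts to an infrequent "entity_died" event.
bus.subscribe("entity_died", lambda entity: print(f"audio: play death sound for {entity}"))
# A hypothetical score system reacts to the same event.
bus.subscribe("entity_died", lambda entity: print(f"score: +100 for killing {entity}"))

bus.publish("entity_died", entity=42)   # both handlers run exactly once, on demand
```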
ECS avoids the dependency problems commonly found in object-oriented programming: since components are simple data buckets, they have no dependencies. Each system will typically query the set of components an entity must have for the system to operate on it. For example, a render system might register the model, transform, and drawable components. When it runs, the system will perform its logic on any entity that has all of those components. Other entities are simply skipped, with no need for complex dependency trees. However, this can be a place for bugs to hide, since propagating values from one system to another through components may be hard to debug. ECS may be used where uncoupled data needs to be bound to a given lifetime.
The ECS uses composition, rather than inheritance trees. An entity will be typically made up of an ID and a list of components that are attached to it. Any game object can be created by adding the correct components to an entity. This allows the developer to easily add features to an entity, without any dependency issues. For example, a player entity could have a bullet component added to it, and then it would meet the requirements to be manipulated by some bulletHandler system, which could result in that player doing damage to things by running into them.
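Continuing in the same illustrative dictionary-based style as the earlier sketch (the bullet component and handler names are invented), attaching a component at runtime is all it takes for an entity to start matching another system's query:

```python
# Component stores keyed by entity id; systems query only the stores they need.
positions = {1: (0.0, 0.0)}     # entity 1 is the "player"
bullets = {}                    # entity id -> contact damage

def bullet_handler_system():
    """Acts only on entities that currently have a bullet component."""
    for entity, damage in bullets.items():
        if entity in positions:
            print(f"entity {entity} deals {damage} contact damage")

bullet_handler_system()         # prints nothing: the player is not a bullet yet

bullets[1] = 10                 # add a bullet component to the player at runtime
bullet_handler_system()         # now the player matches the query and deals damage
```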
The merits of using an ECS for storing game state have been proclaimed by many game developers, such as Adam Martin. One good example is the series of blog posts by Richard Lord, where he discusses those merits and why ECS-designed game data storage systems are so useful.
Usage outside of games
Although mostly found in video game development, the ECS can be useful in other domains.
See also
Model–view–controller
Observer pattern
Strategy pattern
Relational model
Notes
References
External links
Anatomy of a knockout
Evolve Your Hierarchy
Entity Systems Wiki
Component - Game Programming Patterns
ECS design to achieve true Inversion of Flow Control
Architectural pattern (computer science)
Software design patterns
Coulomb barrier

The Coulomb barrier, named after Coulomb's law, which is in turn named after physicist Charles-Augustin de Coulomb, is the energy barrier due to electrostatic interaction that two nuclei need to overcome so they can get close enough to undergo a nuclear reaction.
Potential energy barrier
This energy barrier is given by the electric potential energy:

U(r) = q1 q2 / (4π ε0 r)

where
ε0 is the permittivity of free space;
q1, q2 are the charges of the interacting particles;
r is the interaction radius.
A positive value of U is due to a repulsive force, so interacting particles are at higher energy levels as they get closer. A negative potential energy indicates a bound state (due to an attractive force).
The Coulomb barrier increases with the atomic numbers (i.e. the number of protons) of the colliding nuclei:

Ucoul = Z1 Z2 e² / (4π ε0 r)

where e is the elementary charge, and Zi the corresponding atomic numbers.
To overcome this barrier, nuclei have to collide at high velocities, so their kinetic energies drive them close enough for the strong interaction to take place and bind them together.
According to the kinetic theory of gases, the temperature of a gas is just a measure of the average kinetic energy of the particles in that gas. For classical ideal gases the velocity distribution of the gas particles is given by the Maxwell–Boltzmann distribution. From this distribution, the fraction of particles with a velocity high enough to overcome the Coulomb barrier can be determined.
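As a rough illustration of why classically estimated fusion temperatures are so high, the following sketch computes the proton–proton barrier at an assumed contact radius of 1 fm and the temperature at which the average thermal energy would match it (tunnelling, discussed next, lowers the real requirement considerably):

```python
import math

# Classical order-of-magnitude estimate of the Coulomb barrier for two protons.
# The 1 fm interaction radius is an assumed round number for illustration.
E_CHARGE = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
K_B = 1.380649e-23             # Boltzmann constant, J/K
R = 1e-15                      # assumed interaction radius, 1 fm

U = E_CHARGE**2 / (4 * math.pi * EPS0 * R)   # barrier height in joules
U_MeV = U / (1e6 * E_CHARGE)

# Temperature at which the average thermal kinetic energy (3/2) k T matches the barrier.
T = U / (1.5 * K_B)

print(f"barrier ~ {U_MeV:.2f} MeV")           # about 1.4 MeV
print(f"classical temperature ~ {T:.1e} K")   # about 1e10 K; tunnelling lowers this greatly
```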
In practice, temperatures needed to overcome the Coulomb barrier turned out to be smaller than expected due to quantum mechanical tunnelling, as established by Gamow. The consideration of barrier-penetration through tunnelling and the speed distribution gives rise to a limited range of conditions where fusion can take place, known as the Gamow window.
The absence of the Coulomb barrier enabled the discovery of the neutron by James Chadwick in 1932.
Modeling a potential energy barrier
There is keen interest in the mechanics and parameters of nuclear fusion, including methods of modeling the Coulomb barrier for scientific and educational purposes. The Coulomb barrier is a type of potential energy barrier, and is central to nuclear fusion. It results from the interplay of two fundamental interactions: the strong interaction at close range, within ≈ 1 fm, and the electromagnetic interaction at long range, beyond the Coulomb barrier. The microscopic range of the strong interaction, on the order of one femtometre, makes it challenging to model, and no classical examples exist on the human scale. The strong close-range attraction and far-range repulsion characteristic of the fusion potential curve are given a visual and tactile classroom model in the magnetic "Coulomb" barrier apparatus, which won first place in the 2023 national apparatus competition of the American Association of Physics Teachers in Sacramento, California. Essentially, a pair of opposing permanent-magnet arrays generate asymmetric alternating N/S magnetic fields that produce repulsion at a distance and attraction within ≈ 1 cm. A related patent (US 11,087,910 B2) further describes the apparatus and outlines criteria for modeling an electromagnetic potential energy barrier more generally. Magnetic and electric forces were unified within the electromagnetic fundamental force by James Clerk Maxwell in 1873 in A Treatise on Electricity and Magnetism. In the case of the magnetic "Coulomb" barrier, the patent describes alternating, unequal (asymmetric) north and south magnetic poles, but the patent's method language is broad enough to include positive and negative electrostatic poles as well. The implication is that regularly spaced opposite and unequal electrostatic point charges could likewise model an electrostatic potential energy barrier.
References
Nuclear physics
Nuclear fusion
Nuclear chemistry
Lorentz ether theory

What is now often called Lorentz ether theory (LET) has its roots in Hendrik Lorentz's "theory of electrons", which marked the end of the development of the classical aether theories at the end of the 19th and at the beginning of the 20th century.
Lorentz's initial theory was created between 1892 and 1895 and was based on removing assumptions about aether motion. It explained the failure of the negative aether drift experiments to first order in v/c by introducing an auxiliary variable called "local time" for connecting systems at rest and in motion in the aether. In addition, the negative result of the Michelson–Morley experiment led to the introduction of the hypothesis of length contraction in 1892. However, other experiments also produced negative results and (guided by Henri Poincaré's principle of relativity) Lorentz tried in 1899 and 1904 to expand his theory to all orders in v/c by introducing the Lorentz transformation. In addition, he assumed that non-electromagnetic forces (if they exist) transform like electric forces. However, Lorentz's expression for charge density and current were incorrect, so his theory did not fully exclude the possibility of detecting the aether. Eventually, it was Henri Poincaré who in 1905 corrected the errors in Lorentz's paper and actually incorporated non-electromagnetic forces (including gravitation) within the theory, which he called "The New Mechanics". Many aspects of Lorentz's theory were incorporated into special relativity (SR) with the works of Albert Einstein and Hermann Minkowski.
Today LET is often treated as some sort of "Lorentzian" or "neo-Lorentzian" interpretation of special relativity. The introduction of length contraction and time dilation for all phenomena in a "preferred" frame of reference, which plays the role of Lorentz's immobile aether, leads to the complete Lorentz transformation (see the Robertson–Mansouri–Sexl test theory as an example), so Lorentz covariance doesn't provide any experimentally verifiable distinctions between LET and SR. The absolute simultaneity in the Mansouri–Sexl test theory formulation of LET implies that a one-way speed of light experiment could in principle distinguish between LET and SR, but it is now widely held that it is impossible to perform such a test. In the absence of any way to experimentally distinguish between LET and SR, SR is widely preferred over LET, due to the superfluous assumption of an undetectable aether in LET, and the validity of the relativity principle in LET seeming ad hoc or coincidental.
Historical development
Basic concept
The Lorentz ether theory, which was developed mainly between 1892 and 1906 by Lorentz and Poincaré, was based on the aether theory of Augustin-Jean Fresnel, Maxwell's equations and the electron theory of Rudolf Clausius. Lorentz's 1895 paper rejected the aether drift theories, and refused to express assumptions about the nature of the aether. It said:
As Max Born later said, it was natural (though not logically necessary) for scientists of that time to identify the rest frame of the Lorentz aether with the absolute space of Isaac Newton. The condition of this aether can be described by the electric field E and the magnetic field H, where these fields represent the "states" of the aether (with no further specification), related to the charges of the electrons. Thus an abstract electromagnetic aether replaces the older mechanistic aether models. Contrary to Clausius, who accepted that the electrons operate by actions at a distance, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field can propagate not faster than the speed of light. Lorentz theoretically explained the Zeeman effect on the basis of his theory, for which he received the Nobel Prize in Physics in 1902. Joseph Larmor found a similar theory simultaneously, but his concept was based on a mechanical aether. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that a moving observer with respect to the aether can use the same electrodynamic equations as an observer in the stationary aether system, thus they are making the same observations.
Length contraction
A big challenge for the Lorentz ether theory was the Michelson–Morley experiment in 1887. According to the theories of Fresnel and Lorentz, a relative motion to an immobile aether had to be determined by this experiment; however, the result was negative. Michelson himself thought that the result confirmed the aether drag hypothesis, in which the aether is fully dragged by matter. However, other experiments like the Fizeau experiment and the effect of aberration disproved that model.
A possible solution came in sight, when in 1889 Oliver Heaviside derived from Maxwell's equations that the magnetic vector potential field around a moving body is altered by a factor of . Based on that result, and to bring the hypothesis of an immobile aether into accordance with the Michelson–Morley experiment, George FitzGerald in 1889 (qualitatively) and, independently of him, Lorentz in 1892 (already quantitatively), suggested that not only the electrostatic fields, but also the molecular forces, are affected in such a way that the dimension of a body in the line of motion is less by the value than the dimension perpendicularly to the line of motion. However, an observer co-moving with the earth would not notice this contraction because all other instruments contract at the same ratio. In 1895 Lorentz proposed three possible explanations for this relative contraction:
The body contracts in the line of motion and preserves its dimension perpendicularly to it.
The dimension of the body remains the same in the line of motion, but it expands perpendicularly to it.
The body contracts in the line of motion and expands at the same time perpendicularly to it.
Although the possible connection between electrostatic and intermolecular forces was used by Lorentz as a plausibility argument, the contraction hypothesis was soon considered as purely ad hoc. It is also important that this contraction would only affect the space between the electrons but not the electrons themselves; therefore the name "intermolecular hypothesis" was sometimes used for this effect. The so-called length contraction without expansion perpendicularly to the line of motion, and by the precise value l = l0·√(1 − v²/c²) (where l0 is the length at rest in the aether), was given by Larmor in 1897 and by Lorentz in 1904. In the same year, Lorentz also argued that electrons themselves are affected by this contraction. For further development of this concept, see the section on the Lorentz transformation below.
Local time
An important part of the theorem of corresponding states in 1892 and 1895 was the local time t′ = t − vx/c², where t is the time coordinate for an observer resting in the aether, and t′ is the time coordinate for an observer moving in the aether. (Woldemar Voigt had previously used the same expression for local time in 1887 in connection with the Doppler effect and an incompressible medium.) With the help of this concept Lorentz could explain the aberration of light, the Doppler effect and the Fizeau experiment (i.e. measurements of the Fresnel drag coefficient) by Hippolyte Fizeau in moving and also resting liquids. While for Lorentz length contraction was a real physical effect, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation to simplify the calculation from the resting to a "fictitious" moving system. Contrary to Lorentz, Poincaré saw more than a mathematical trick in the definition of local time, which he called Lorentz's "most ingenious idea". In The Measure of Time he wrote in 1898:
In 1900 Poincaré interpreted local time as the result of a synchronization procedure based on light signals. He assumed that two observers, A and B, who are moving in the aether, synchronize their clocks by optical signals. Since they treat themselves as being at rest, they must consider only the transmission time of the signals and then crossing their observations to examine whether their clocks are synchronous. However, from the point of view of an observer at rest in the aether the clocks are not synchronous and indicate the local time t′ = t − vx/c². But because the moving observers don't know anything about their movement, they don't recognize this. In 1904, he illustrated the same procedure in the following way: A sends a signal at time 0 to B, which arrives at time t. B also sends a signal at time 0 to A, which arrives at time t. If in both cases t has the same value, the clocks are synchronous, but only in the system in which the clocks are at rest in the aether. So, according to Darrigol, Poincaré understood local time as a physical effect just like length contraction – in contrast to Lorentz, who did not use the same interpretation before 1906. However, contrary to Einstein, who later used a similar synchronization procedure which was called Einstein synchronisation, Darrigol says that Poincaré had the opinion that clocks resting in the aether are showing the true time.
However, at the beginning it was unknown that local time includes what is now known as time dilation. This effect was first noticed by Larmor (1897), who wrote that "individual electrons describe corresponding parts of their orbits in times shorter for the [aether] system in the ratio" √(1 − v²/c²), or to second order 1 − v²/(2c²). And in 1899 Lorentz also noted for the frequency of oscillating electrons "that in S the time of vibrations be kε times as great as in S0", where S0 is the aether frame, S the mathematical-fictitious frame of the moving observer, k is 1/√(1 − v²/c²), and ε is an undetermined factor.
Lorentz transformation
While local time could explain the negative aether drift experiments to first order to v/c, it was necessary – due to other unsuccessful aether drift experiments like the Trouton–Noble experiment – to modify the hypothesis to include second-order effects. The mathematical tool for that is the so-called Lorentz transformation. Voigt in 1887 had already derived a similar set of equations (although with a different scale factor). Afterwards, Larmor in 1897 and Lorentz in 1899 derived equations in a form algebraically equivalent to those which are used up to this day, although Lorentz used an undetermined factor l in his transformation. In his paper Electromagnetic phenomena in a system moving with any velocity smaller than that of light (1904) Lorentz attempted to create such a theory, according to which all forces between the molecules are affected by the Lorentz transformation (in which Lorentz set the factor l to unity) in the same manner as electrostatic forces. In other words, Lorentz attempted to create a theory in which the relative motion of earth and aether is (nearly or fully) undetectable. Therefore, he generalized the contraction hypothesis and argued that not only the forces between the electrons, but also the electrons themselves are contracted in the line of motion. However, Max Abraham (1904) quickly noted a defect of that theory: Within a purely electromagnetic theory the contracted electron-configuration is unstable and one has to introduce non-electromagnetic force to stabilize the electrons – Abraham himself questioned the possibility of including such forces within the theory of Lorentz.
So it was Poincaré, on 5 June 1905, who introduced the so-called "Poincaré stresses" to solve that problem. Those stresses were interpreted by him as an external, non-electromagnetic pressure, which stabilize the electrons and also served as an explanation for length contraction. Although he argued that Lorentz succeeded in creating a theory which complies to the postulate of relativity, he showed that Lorentz's equations of electrodynamics were not fully Lorentz covariant. So by pointing out the group characteristics of the transformation, Poincaré demonstrated the Lorentz covariance of the Maxwell–Lorentz equations and corrected Lorentz's transformation formulae for charge density and current density. He went on to sketch a model of gravitation (incl. gravitational waves) which might be compatible with the transformations. It was Poincaré who, for the first time, used the term "Lorentz transformation", and he gave them a form which is used up to this day. (Where is an arbitrary function of , which must be set to unity to conserve the group characteristics. He also set the speed of light to unity.)
A substantially extended work (the so-called "Palermo paper") was submitted by Poincaré on 23 July 1905, but was published in January 1906 because the journal appeared only twice a year. He spoke literally of "the postulate of relativity"; he showed that the transformations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called the Lorentz group; and he showed that the combination x² + y² + z² − c²t² is invariant. While elaborating his gravitational theory, he noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing ct√−1 as a fourth, imaginary, coordinate, and he used an early form of four-vectors. However, Poincaré later said the translation of physics into the language of four-dimensional geometry would entail too much effort for limited profit, and therefore he refused to work out the consequences of this notion. This was later done, however, by Minkowski; see "The shift to relativity".
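A quick numerical check of this invariance (illustrative only: the event coordinates and boost speed below are arbitrary sample values, and c is set to 1 as Poincaré did):

```python
import math

def lorentz_boost(t, x, y, z, v, c=1.0):
    """Lorentz transformation for a boost along the x-axis with velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime, y, z

def interval(t, x, y, z, c=1.0):
    """The invariant combination x^2 + y^2 + z^2 - c^2 t^2."""
    return x**2 + y**2 + z**2 - (c * t)**2

# Arbitrary sample event (t, x, y, z) and boost speed, chosen only for illustration.
event = (2.0, 1.0, 0.5, -0.3)
boosted = lorentz_boost(*event, v=0.6)

print(interval(*event))     # both values agree: the quadratic form is invariant
print(interval(*boosted))
```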
Electromagnetic mass
J. J. Thomson (1881) and others noticed that electromagnetic energy contributes to the mass of charged bodies by the amount (4/3)Eem/c², which was called electromagnetic or "apparent mass". Another derivation of some sort of electromagnetic mass was conducted by Poincaré (1900). By using the momentum of electromagnetic fields, he concluded that these fields contribute a mass of Eem/c² to all bodies, which is necessary to save the center of mass theorem.
As noted by Thomson and others, this mass also increases with velocity. Thus in 1899, Lorentz calculated that the ratio of the electron's mass in the moving frame to that in the aether frame is k³ε parallel to the direction of motion, and kε perpendicular to the direction of motion, where k = 1/√(1 − v²/c²) and ε is an undetermined factor. And in 1904, he set ε = 1, arriving at the expressions for the masses in different directions (longitudinal and transverse):

mL = m0 / (1 − v²/c²)^(3/2),   mT = m0 / √(1 − v²/c²)

where m0 is the electromagnetic rest mass of the electron.
Many scientists now believed that the entire mass and all forms of forces were electromagnetic in nature. This idea had to be given up, however, in the course of the development of relativistic mechanics. Abraham (1904) argued (as described in the preceding section #Lorentz transformation), that non-electrical binding forces were necessary within Lorentz's electrons model. But Abraham also noted that different results occurred, dependent on whether the em-mass is calculated from the energy or from the momentum. To solve those problems, Poincaré in 1905 and 1906 introduced some sort of pressure of non-electrical nature, which contributes the amount to the energy of the bodies, and therefore explains the 4/3-factor in the expression for the electromagnetic mass-energy relation. However, while Poincaré's expression for the energy of the electrons was correct, he erroneously stated that only the em-energy contributes to the mass of the bodies.
The concept of electromagnetic mass is not considered anymore as the cause of mass per se, because the entire mass (not only the electromagnetic part) is proportional to energy, and can be converted into different forms of energy, which is explained by Einstein's mass–energy equivalence.
Gravitation
Lorentz's theories
In 1900 Lorentz tried to explain gravity on the basis of the Maxwell equations. He first considered a Le Sage type model and argued that there possibly exists a universal radiation field, consisting of very penetrating em-radiation, and exerting a uniform pressure on every body. Lorentz showed that an attractive force between charged particles would indeed arise, if it is assumed that the incident energy is entirely absorbed. This was the same fundamental problem which had afflicted the other Le Sage models, because the radiation must vanish somehow and any absorption must lead to an enormous heating. Therefore, Lorentz abandoned this model.
In the same paper, he assumed like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner that the attraction between oppositely charged particles is stronger than the repulsion between equally charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This leads to a conflict with the law of gravitation by Isaac Newton, in which it was shown by Pierre Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that the theory is not concerned by Laplace's critique, because due to the structure of the Maxwell equations only effects of the order v²/c² arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low. He wrote:
In 1908 Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized the inaccurate indication of the perihelion advance of Mercury. Contrary to Poincaré, Lorentz in 1914 considered his own theory as incompatible with the relativity principle and rejected it.
Lorentz-invariant gravitational law
Poincaré argued in 1904 that a propagation speed of gravity which is greater than c is contradicting the concept of local time and the relativity principle. He wrote:
However, in 1905 and 1906 Poincaré pointed out the possibility of a gravitational theory, in which changes propagate with the speed of light and which is Lorentz covariant. He pointed out that in such a theory the gravitational force not only depends on the masses and their mutual distance, but also on their velocities and their position due to the finite propagation time of interaction. On that occasion Poincaré introduced four-vectors. Following Poincaré, also Minkowski (1908) and Arnold Sommerfeld (1910) tried to establish a Lorentz-invariant gravitational law. However, these attempts were superseded because of Einstein's theory of general relativity, see "The shift to relativity".
The non-existence of a generalization of the Lorentz ether to gravity was a major reason for the preference for the spacetime interpretation. A viable generalization to gravity was proposed only in 2012 by Schmelzer. The preferred frame is defined by the harmonic coordinate condition. The gravitational field is defined by density, velocity and stress tensor of the Lorentz ether, so that the harmonic conditions become continuity and Euler equations. The Einstein Equivalence Principle is derived. The Strong Equivalence Principle is violated, but is recovered in a limit, which gives the Einstein equations of general relativity in harmonic coordinates.
Principles and conventions
Constancy of the speed of light
Already in his philosophical writing on time measurements (1898), Poincaré wrote that astronomers like Ole Rømer, in determining the speed of light, simply assume that light has a constant speed, and that this speed is the same in all directions. Without this postulate it would not be possible to infer the speed of light from astronomical observations, as Rømer did based on observations of the moons of Jupiter. Poincaré went on to note that Rømer also had to assume that Jupiter's moons obey Newton's laws, including the law of gravitation, whereas it would be possible to reconcile a different speed of light with the same observations if we assumed some different (probably more complicated) laws of motion. According to Poincaré, this illustrates that we adopt for the speed of light a value that makes the laws of mechanics as simple as possible. (This is an example of Poincaré's conventionalist philosophy.) Poincaré also noted that the propagation speed of light can be (and in practice often is) used to define simultaneity between spatially separate events. However, in that paper he did not go on to discuss the consequences of applying these "conventions" to multiple relatively moving systems of reference. This next step was done by Poincaré in 1900, when he recognized that synchronization by light signals in earth's reference frame leads to Lorentz's local time. (See the section on "local time" above). And in 1904 Poincaré wrote:
Principle of relativity
In 1895 Poincaré argued that experiments like that of Michelson–Morley show that it seems to be impossible to detect the absolute motion of matter or the relative motion of matter in relation to the aether. And although most physicists had other views, Poincaré in 1900 stood to his opinion and alternately used the expressions "principle of relative motion" and "relativity of space". He criticized Lorentz by saying, that it would be better to create a more fundamental theory, which explains the absence of any aether drift, than to create one hypothesis after the other. In 1902 he used for the first time the expression "principle of relativity". In 1904 he appreciated the work of the mathematicians, who saved what he now called the "principle of relativity" with the help of hypotheses like local time, but he confessed that this venture was possible only by an accumulation of hypotheses. And he defined the principle in this way (according to Miller based on Lorentz's theorem of corresponding states): "The principle of relativity, according to which the laws of physical phenomena must be the same for a stationary observer as for one carried along in a uniform motion of translation, so that we have no means, and can have none, of determining whether or not we are being carried along in such a motion."
Referring to the critique of Poincaré from 1900, Lorentz wrote in his famous paper in 1904, where he extended his theorem of corresponding states: "Surely, the course of inventing special hypotheses for each new experimental result is somewhat artificial. It would be more satisfactory, if it were possible to show, by means of certain fundamental assumptions, and without neglecting terms of one order of magnitude or another, that many electromagnetic actions are entirely independent of the motion of the system."
One of the first assessments of Lorentz's paper was by Paul Langevin in May 1905. According to him, this extension of the electron theories of Lorentz and Larmor led to "the physical impossibility to demonstrate the translational motion of the earth". However, Poincaré noticed in 1905 that Lorentz's theory of 1904 was not perfectly "Lorentz invariant" in a few equations such as Lorentz's expression for current density (Lorentz admitted in 1921 that these were defects). As this required just minor modifications of Lorentz's work, also Poincaré asserted that Lorentz had succeeded in harmonizing his theory with the principle of relativity: "It appears that this impossibility of demonstrating the absolute motion of the earth is a general law of nature. [...] Lorentz tried to complete and modify his hypothesis in order to harmonize it with the postulate of complete impossibility of determining absolute motion. It is what he has succeeded in doing in his article entitled Electromagnetic phenomena in a system moving with any velocity smaller than that of light [Lorentz, 1904b]."
In his Palermo paper (1906), Poincaré called this "the postulate of relativity“, and although he stated that it was possible this principle might be disproved at some point (and in fact he mentioned at the paper's end that the discovery of magneto-cathode rays by Paul Ulrich Villard (1904) seems to threaten it), he believed it was interesting to consider the consequences if we were to assume the postulate of relativity was valid without restriction. This would imply that all forces of nature (not just electromagnetism) must be invariant under the Lorentz transformation. In 1921 Lorentz credited Poincaré for establishing the principle and postulate of relativity and wrote: "I have not established the principle of relativity as rigorously and universally true. Poincaré, on the other hand, has obtained a perfect invariance of the electro-magnetic equations, and he has formulated 'the postulate of relativity', terms which he was the first to employ."
Aether
Poincaré wrote in the sense of his conventionalist philosophy in 1889: "Whether the aether exists or not matters little – let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the aether will be thrown aside as useless."
He also denied the existence of absolute space and time by saying in 1901: "1. There is no absolute space, and we only conceive of relative motion; and yet in most cases mechanical facts are enunciated as if there is an absolute space to which they can be referred. 2. There is no absolute time. When we say that two periods are equal, the statement has no meaning, and can only acquire a meaning by a convention. 3. Not only have we no direct intuition of the equality of two periods, but we have not even direct intuition of the simultaneity of two events occurring in two different places. I have explained this in an article entitled "Mesure du Temps" [1898]. 4. Finally, is not our Euclidean geometry in itself only a kind of convention of language?"
However, Poincaré himself never abandoned the aether hypothesis and stated in 1900: "Does our aether actually exist ? We know the origin of our belief in the aether. If light takes several years to reach us from a distant star, it is no longer on the star, nor is it on the earth. It must be somewhere, and supported, so to speak, by some material agency." And referring to the Fizeau experiment, he even wrote: "The aether is all but in our grasp." He also said the aether is necessary to harmonize Lorentz's theory with Newton's third law. Even in 1912 in a paper called "The Quantum Theory", Poincaré ten times used the word "aether", and described light as "luminous vibrations of the aether".
And although he admitted the relative and conventional character of space and time, he believed that the classical convention is more "convenient" and continued to distinguish between "true" time in the aether and "apparent" time in moving systems. Addressing the question if a new convention of space and time is needed he wrote in 1912: "Shall we be obliged to modify our conclusions? Certainly not; we had adopted a convention because it seemed convenient and we had said that nothing could constrain us to abandon it. Today some physicists want to adopt a new convention. It is not that they are constrained to do so; they consider this new convention more convenient; that is all. And those who are not of this opinion can legitimately retain the old one in order not to disturb their old habits, I believe, just between us, that this is what they shall do for a long time to come."
Lorentz also argued throughout his life that among all frames of reference the one in which the aether is at rest is to be preferred. Clocks in this frame show the "real" time, and simultaneity is not relative. However, if the correctness of the relativity principle is accepted, it is impossible to find this system by experiment.
The shift to relativity
Special relativity
In 1905, Albert Einstein published his paper on what is now called special relativity. In this paper, by examining the fundamental meanings of the space and time coordinates used in physical theories, Einstein showed that the "effective" coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. From this followed all of the physically observable consequences of LET, along with others, all without the need to postulate an unobservable entity (the aether). Einstein identified two fundamental principles, each founded on experience, from which all of Lorentz's electrodynamics follows:
The laws by which physical processes occur are the same with respect to any system of inertial coordinates (the principle of relativity)
In empty space light propagates at an absolute speed c in any system of inertial coordinates (the principle of the constancy of light)
Taken together (along with a few other tacit assumptions such as isotropy and homogeneity of space), these two postulates lead uniquely to the mathematics of special relativity. Lorentz and Poincaré had also adopted these same principles, as necessary to achieve their final results, but didn't recognize that they were also sufficient, and hence that they obviated all the other assumptions underlying Lorentz's initial derivations (many of which later turned out to be incorrect). Therefore, special relativity very quickly gained wide acceptance among physicists, and the 19th century concept of a luminiferous aether was no longer considered useful.
Poincaré (1905) and Hermann Minkowski (1905) showed that special relativity had a very natural interpretation in terms of a unified four-dimensional "spacetime" in which absolute intervals are seen to be given by an extension of the Pythagorean theorem. The utility and naturalness of the spacetime representation contributed to the rapid acceptance of special relativity, and to the corresponding loss of interest in Lorentz's aether theory.
In 1909 and 1912 Einstein explained:
In 1907 Einstein criticized the "ad hoc" character of Lorentz's contraction hypothesis in his theory of electrons, because according to him it was an artificial assumption to make the Michelson–Morley experiment conform to Lorentz's stationary aether and the relativity principle. Einstein argued that Lorentz's "local time" can simply be called "time", and he stated that the immobile aether as the theoretical foundation of electrodynamics was unsatisfactory. He wrote in 1920:
Minkowski argued that Lorentz's introduction of the contraction hypothesis "sounds rather fantastical", since it is not the product of resistance in the aether but a "gift from above". He said that this hypothesis is "completely equivalent with the new concept of space and time", though it becomes much more comprehensible in the framework of the new spacetime geometry. However, Lorentz disagreed that it was "ad-hoc" and he argued in 1913 that there is little difference between his theory and the negation of a preferred reference frame, as in the theory of Einstein and Minkowski, so that it is a matter of taste which theory one prefers.
Mass–energy equivalence
Einstein (1905) derived, as a consequence of the relativity principle, that the inertia of energy is actually represented by E/c², but in contrast to Poincaré's 1900 paper, Einstein recognized that matter itself loses or gains mass during the emission or absorption of energy. So the mass of any form of matter is equal to a certain amount of energy, which can be converted into and re-converted from other forms of energy. This is the mass–energy equivalence, represented by E = mc². So Einstein did not have to introduce "fictitious" masses, and he also avoided the perpetual motion problem, because according to Darrigol, Poincaré's radiation paradox can simply be solved by applying Einstein's equivalence: if the light source loses the mass E/c² during the emission, the contradiction in the momentum law vanishes without the need of any compensating effect in the aether.
Similar to Poincaré, Einstein concluded in 1906 that the inertia of (electromagnetic) energy is a necessary condition for the center of mass theorem to hold in systems, in which electromagnetic fields and matter are acting on each other. Based on the mass–energy equivalence, he showed that emission and absorption of em-radiation, and therefore the transport of inertia, solves all problems. On that occasion, Einstein referred to Poincaré's 1900-paper and wrote:
Poincaré's rejection of the reaction principle due to the violation of the mass conservation law can likewise be avoided through Einstein's E = mc², because mass conservation appears as a special case of the energy conservation law.
General relativity
The attempts of Lorentz and Poincaré (and other attempts like those of Abraham and Gunnar Nordström) to formulate a theory of gravitation were superseded by Einstein's theory of general relativity. This theory is based on principles like the equivalence principle, the general principle of relativity, the principle of general covariance, geodesic motion, local Lorentz covariance (the laws of special relativity apply locally for all inertial observers), and that spacetime curvature is created by stress-energy within the spacetime.
In 1920, Einstein compared Lorentz's aether with the "gravitational aether" of general relativity. He said that immobility is the only mechanical property of which the aether has not been deprived by Lorentz, but, contrary to the luminiferous and Lorentz's aether, the aether of general relativity has no mechanical property, not even immobility:
Priority
Some claim that Poincaré and Lorentz are the true founders of special relativity, not Einstein. For more details see the article on this dispute.
Later activity
Viewed as a theory of elementary particles, Lorentz's electron/ether theory was superseded during the first few decades of the 20th century, first by quantum mechanics and then by quantum field theory. As a general theory of dynamics, Lorentz and Poincaré had already (by about 1905) found it necessary to invoke the principle of relativity itself in order to make the theory match all the available empirical data. By this point, most vestiges of a substantial aether had been eliminated from Lorentz's "aether" theory, and it became both empirically and deductively equivalent to special relativity. The main difference was the metaphysical postulate of a unique absolute rest frame, which was empirically undetectable and played no role in the physical predictions of the theory, as Lorentz wrote in 1909, 1910 (published 1913), 1913 (published 1914), or in 1912 (published 1922).
As a result, the term "Lorentz aether theory" is sometimes used today to refer to a neo-Lorentzian interpretation of special relativity. The prefix "neo" is used in recognition of the fact that the interpretation must now be applied to physical entities and processes (such as the standard model of quantum field theory) that were unknown in Lorentz's day.
Subsequent to the advent of special relativity, only a small number of individuals have advocated the Lorentzian approach to physics. Many of these, such as Herbert E. Ives (who, along with G. R. Stilwell, performed the first experimental confirmation of time dilation) have been motivated by the belief that special relativity is logically inconsistent, and so some other conceptual framework is needed to reconcile the relativistic phenomena. For example, Ives wrote "The 'principle' of the constancy of the velocity of light is not merely 'ununderstandable', it is not supported by 'objective matters of fact'; it is untenable...". However, the logical consistency of special relativity (as well as its empirical success) is well established, so the views of such individuals are considered unfounded within the mainstream scientific community.
John Stewart Bell advocated teaching special relativity first from the viewpoint of a single Lorentz inertial frame, then showing that Poincaré invariance of the laws of physics such as Maxwell's equations is equivalent to the frame-changing arguments often used in teaching special relativity. Because a single Lorentz inertial frame is one of a preferred class of frames, he called this approach Lorentzian in spirit.
Also some test theories of special relativity use some sort of Lorentzian framework. For instance, the Robertson–Mansouri–Sexl test theory introduces a preferred aether frame and includes parameters indicating different combinations of length and times changes. If time dilation and length contraction of bodies moving in the aether have their exact relativistic values, the complete Lorentz transformation can be derived and the aether is hidden from any observation, which makes it kinematically indistinguishable from the predictions of special relativity. Using this model, the Michelson–Morley experiment, Kennedy–Thorndike experiment, and Ives–Stilwell experiment put sharp constraints on violations of Lorentz invariance.
References
For a more complete list with sources of many other authors, see History of special relativity#References.
External links
Mathpages: Corresponding States, The End of My Latin, Who Invented Relativity?, Poincaré Contemplates Copernicus, Whittaker and the Aether, Another Derivation of Mass-Energy Equivalence
Aether theories
Hendrik Lorentz
Special relativity
Torr
The torr (symbol: Torr) is a unit of pressure based on an absolute scale, defined as exactly 1/760 of a standard atmosphere (101325 Pa). Thus one torr is exactly 101325/760 pascals (≈ 133.32 Pa).
Historically, one torr was intended to be the same as one "millimeter of mercury", but subsequent redefinitions of the two units made them slightly different (by less than 0.000015%). The torr is not part of the International System of Units (SI). Even so, it is often combined with the metric prefix milli to name one millitorr (mTorr) or 0.001 Torr.
The unit was named after Evangelista Torricelli, an Italian physicist and mathematician who discovered the principle of the barometer in 1644.
Nomenclature and common errors
The unit name torr is written in lower case, while its symbol ("Torr") is always written with an uppercase initial, including in combinations with prefixes and other unit symbols, as in "mTorr" (millitorr) or "Torr⋅L/s" (torr-litres per second). The uppercase symbol should be used with prefix symbols (thus, mTorr and millitorr are correct, but mtorr and milliTorr are not).
The torr is sometimes incorrectly denoted by the symbol "T", which is the SI symbol for the tesla, the unit measuring the strength of a magnetic field. Although frequently encountered, the alternative spelling "Tor" is incorrect.
History
Torricelli attracted considerable attention when he demonstrated the first mercury barometer to the general public. He is credited with giving the first modern explanation of atmospheric pressure. Scientists at the time were familiar with small fluctuations in height that occurred in barometers. When these fluctuations were explained as a manifestation of changes in atmospheric pressure, the science of meteorology was born.
Over time, 760 millimeters of mercury at 0 °C came to be regarded as the standard atmospheric pressure. In honour of Torricelli, the torr was defined as a unit of pressure equal to one millimeter of mercury at 0 °C. However, since the acceleration due to gravity – and thus the weight of a column of mercury – is a function of elevation and latitude (due to the rotation and non-sphericity of the Earth), this definition is imprecise and varies by location.
In 1954, the definition of the atmosphere was revised by the 10th General Conference on Weights and Measures to the currently accepted definition: one atmosphere is equal to 101325 pascals. The torr was then redefined as 1/760 of one atmosphere. This yields a precise definition that is unambiguous and independent of measurements of the density of mercury or the acceleration due to gravity on Earth.
Manometric units of pressure
Manometric units are units such as millimeters of mercury or centimeters of water that depend on an assumed density of a fluid and an assumed acceleration due to gravity. The use of these units is discouraged. Nevertheless, manometric units are routinely used in medicine and physiology, and they continue to be used in areas as diverse as weather reporting and scuba diving.
Conversion factors
The millimeter of mercury is by definition 133.322387415 Pa (13.5951 g/cm³ × 9.80665 m/s² × 1 mm), based on conventional values adopted for the density of mercury and for standard gravity.
The torr is defined as 1/760 of one standard atmosphere, while the atmosphere is defined as 101325 pascals. Therefore, 1 Torr is equal to 101325/760 Pa (≈ 133.322368 Pa). The decimal form of this fraction is an infinitely long, periodically repeating decimal (repetend length: 18).
The relationship between the torr and the millimeter of mercury is:
1 Torr ≈ 0.999999857 mmHg
1 mmHg ≈ 1.000000142 Torr
The difference between one millimeter of mercury and one torr, as well as between one atmosphere (101.325 kPa) and 760 mmHg (101.3250144354 kPa), is less than one part in seven million (or less than 0.000015%). This small difference is negligible for all practical purposes.
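These figures follow directly from the definitions quoted above; a minimal Python sketch of the arithmetic (variable names are chosen here for illustration):
    # Exact definitions quoted in this article
    ATM_PA = 101325.0                       # standard atmosphere, in Pa
    TORR_PA = ATM_PA / 760                  # 1 Torr = 1/760 atm
    MMHG_PA = 13.5951e3 * 9.80665 * 1e-3    # density x standard gravity x 1 mm, in Pa

    print(f"1 Torr = {TORR_PA:.9f} Pa")     # ~133.322368421 Pa
    print(f"1 mmHg = {MMHG_PA:.9f} Pa")     # ~133.322387415 Pa
    print(f"1 Torr = {TORR_PA / MMHG_PA:.9f} mmHg")   # ~0.999999857
    print(f"1 mmHg = {MMHG_PA / TORR_PA:.9f} Torr")   # ~1.000000142
    print(f"relative difference = {abs(1 - TORR_PA / MMHG_PA):.2e}")  # < 1/7,000,000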
In the European Union, the millimeter of mercury is defined as
1 mmHg = 133.322 Pa
hence
1 Torr ≈ 1.0000028 mmHg
1 mmHg ≈ 0.9999972 Torr
Other units of pressure include:
The bar (symbol: bar), defined as 100 kPa exactly.
The atmosphere (symbol: atm), defined as 101.325 kPa exactly.
These pressure units are used in different settings. For example, the bar is used in meteorology to report atmospheric pressures. The torr is used in high-vacuum physics and engineering.
See also
Atmosphere (unit)
Centimetre of water
Conversion of units
Inch of mercury
Outline of the metric system
Pascal (unit)
Pressure head
Pressure
References
External links
NPL – pressure units
Non-SI metric units
Units of pressure
Mercury (element)
Signal
Signal refers to both the process and the result of the transmission of data over some medium, accomplished by embedding some variation in that medium. Signals are important in multiple subject fields including signal processing, information theory and biology.
In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as observable change in a quantity over space or time (a time series), even if it does not carry information.
In nature, signals can be actions done by an organism to alert other organisms, ranging from the release of plant chemicals to warn nearby plants of a predator, to sounds or motions made by animals to alert other animals of food. Signaling occurs in all organisms even at cellular levels, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse.
Another important property of a signal is its entropy or information content. Information theory serves as the formal study of signals and their content. The information of a signal is often accompanied by noise, which primarily refers to unwanted modifications of signals, but is often extended to include unwanted signals conflicting with desired signals (crosstalk). The reduction of noise is covered in part under the heading of signal integrity. The separation of desired signals from background noise is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances.
Engineering disciplines such as electrical engineering have advanced the design, study, and implementation of systems involving transmission, storage, and manipulation of information. In the latter half of the 20th century, electrical engineering itself separated into several disciplines: electronic engineering and computer engineering developed to specialize in the design and analysis of systems that manipulate physical signals, while design engineering developed to address the functional design of signals in user–machine interfaces.
Definitions
Definitions specific to sub-fields are common:
In electronics and telecommunications, signal refers to any time-varying voltage, current, or electromagnetic wave that carries information.
In signal processing, signals are analog and digital representations of analog physical quantities.
In information theory, a signal is a codified message, that is, the sequence of states in a communication channel that encodes a message.
In a communication system, a transmitter encodes a message to create a signal, which is carried to a receiver by the communication channel. For example, the words "Mary had a little lamb" might be the message spoken into a telephone. The telephone transmitter converts the sounds into an electrical signal. The signal is transmitted to the receiving telephone by wires; at the receiver it is reconverted into sounds.
In telephone networks, signaling, for example common-channel signaling, refers to phone number and other digital control information rather than the actual voice signal.
Classification
Signals can be categorized in various ways. The most common distinction is between the discrete and continuous spaces over which the functions are defined, for example, discrete-time and continuous-time domains. Discrete-time signals are often referred to as time series in other fields. Continuous-time signals are often referred to as continuous signals.
A second important distinction is between discrete-valued and continuous-valued. Particularly in digital signal processing, a digital signal may be defined as a sequence of discrete values, typically associated with an underlying continuous-valued physical process. In digital electronics, digital signals are the continuous-time waveform signals in a digital system, representing a bit-stream.
Signals may also be categorized by their spatial distributions as either point source signals (PSSs) or distributed source signals (DSSs).
In signals and systems theory, signals can be classified according to several criteria: by the nature of their values, into analog signals and digital signals; by their determinacy, into deterministic signals and random signals; and by their strength, into energy signals and power signals.
Analog and digital signals
Two main types of signals encountered in practice are analog and digital. A digital signal results from approximating an analog signal by its values at particular time instants; digital signals are quantized, while analog signals are continuous.
Analog signal
An analog signal is any continuous signal for which the time-varying feature of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the sound pressure. It differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values.
The term analog signal usually refers to electrical signals; however, analog signals may use other mediums such as mechanical, pneumatic or hydraulic. An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information.
Any information may be conveyed by an analog signal; often such a signal is a measured response to changes in physical phenomena, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a microphone which induces corresponding electrical fluctuations. The voltage or the current is said to be an analog of the sound.
Digital signal
A digital signal is a signal that is constructed from a discrete set of waveforms of a physical quantity so as to represent a sequence of discrete values. A logic signal is a digital signal with only two possible values, and describes an arbitrary bit stream. Other types of digital signals can represent three-valued logic or higher valued logics.
Alternatively, a digital signal may be considered to be the sequence of codes represented by such a physical quantity. The physical quantity may be a variable electric current or voltage, the intensity, phase or polarization of an optical or other electromagnetic field, acoustic pressure, the magnetization of a magnetic storage media, etc. Digital signals are present in all digital electronics, notably computing equipment and data transmission.
With digital signals, system noise, provided it is not too great, will not affect system operation whereas noise always degrades the operation of analog signals to some degree.
Digital signals often arise via sampling of analog signals. For example, a continually fluctuating voltage on a line can be digitized by an analog-to-digital converter circuit, which reads the voltage level on the line, say, every 50 microseconds and represents each reading with a fixed number of bits. The resulting stream of numbers is stored as digital data: a discrete-time, quantized-amplitude signal. Computers and other digital devices are restricted to discrete time.
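A minimal Python sketch of this sampling-and-quantization process (the 50-microsecond interval matches the example above; the test signal, bit depth and voltage range are assumptions made for illustration):
    import numpy as np

    FS = 20_000            # sampling rate in Hz, i.e. one reading every 50 microseconds
    BITS = 12              # assumed converter resolution
    V_LO, V_HI = -1.0, 1.0 # assumed input voltage range of the converter

    # "Analog" signal: a 50 Hz sine wave evaluated at the sampling instants
    t = np.arange(0, 0.1, 1 / FS)                    # 0.1 s of discrete time
    analog = 0.8 * np.sin(2 * np.pi * 50 * t)

    # Quantization: map each sample to one of 2**BITS integer codes
    codes = np.round((analog - V_LO) / (V_HI - V_LO) * (2**BITS - 1)).astype(int)
    reconstructed = V_LO + codes / (2**BITS - 1) * (V_HI - V_LO)

    print(codes[:5])                                 # the digital data (integer codes)
    print(np.max(np.abs(analog - reconstructed)))    # quantization error, about half a step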
Energy and power
According to the strengths of signals, practical signals can be classified into two categories: energy signals and power signals.
Energy signals: signals whose total energy is equal to a finite positive value, but whose average power is 0;
Power signals: signals whose average power is equal to a finite positive value, but whose total energy is infinite.
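In the usual continuous-time formulation, the total energy and the average power of a signal x(t) are conventionally defined (written here in LaTeX notation) as:
    E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt ,
    \qquad
    P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt .
An energy signal has finite E (and then P = 0); a power signal has finite nonzero P (and then E is infinite).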
Deterministic and random
Deterministic signals are those whose values at any time are predictable and can be calculated by a mathematical equation.
Random signals are signals that take on random values at any given time instant and must be modeled stochastically.
Even and odd
An even signal satisfies the condition x(t) = x(−t) for all t in the domain of x.
An odd signal satisfies the condition x(−t) = −x(t) for all t in the domain of x.
Periodic
A continuous-time signal x(t) is said to be periodic if it satisfies the condition
x(t) = x(t + T) for all t,
where T is the fundamental time period and f = 1/T is the fundamental frequency. The same can be applied to a discrete-time signal x[n]. A periodic signal will repeat for every period.
Time discretization
Signals can be classified as continuous or discrete time. In the mathematical abstraction, the domain of a continuous-time signal is the set of real numbers (or some interval thereof), whereas the domain of a discrete-time (DT) signal is the set of integers (or other subsets of real numbers). What these integers represent depends on the nature of the signal; most often it is time.
A continuous-time signal is any function which is defined at every time t in an interval, most commonly an infinite interval. A simple source for a discrete-time signal is the sampling of a continuous signal, approximating the signal by a sequence of its values at particular time instants.
Amplitude quantization
If a signal is to be represented as a sequence of digital data, it is impossible to maintain exact precision – each number in the sequence must have a finite number of digits. As a result, the values of such a signal must be quantized into a finite set for practical representation. Quantization is the process of converting a continuous analog signal (such as an audio signal) to a digital signal with discrete integer values.
Examples of signals
Naturally occurring signals can be converted to electronic signals by various sensors. Examples include:
Motion. The motion of an object can be considered to be a signal and can be monitored by various sensors to provide electrical signals. For example, radar can provide an electromagnetic signal for following aircraft motion. A motion signal is one-dimensional (time), and the range is generally three-dimensional. Position is thus a 3-vector signal; position and orientation of a rigid body is a 6-vector signal. Orientation signals can be generated using a gyroscope.
Sound. Since a sound is a vibration of a medium (such as air), a sound signal associates a pressure value to every value of time and possibly three space coordinates indicating the direction of travel. A sound signal is converted to an electrical signal by a microphone, generating a voltage signal as an analog of the sound signal. Sound signals can be sampled at a discrete set of time points; for example, compact discs (CDs) contain discrete signals representing sound, recorded at 44,100 Hz; since CDs are recorded in stereo, each sample contains data for a left and right channel, which may be considered to be a 2-vector signal. The CD encoding is converted to an electrical signal by reading the information with a laser, converting the sound signal to an optical signal.
Images. A picture or image consists of a brightness or color signal, a function of a two-dimensional location. The object's appearance is presented as emitted or reflected light, an electromagnetic signal. It can be converted to voltage or current waveforms using devices such as the charge-coupled device. A 2D image can have a continuous spatial domain, as in a traditional photograph or painting; or the image can be discretized in space, as in a digital image. Color images are typically represented as a combination of monochrome images in three primary colors.
Videos. A video signal is a sequence of images. A point in a video is identified by its two-dimensional position in the image and by the time at which it occurs, so a video signal has a three-dimensional domain. Analog video has one continuous domain dimension (across a scan line) and two discrete dimensions (frame and line).
Biological membrane potentials. The value of the signal is an electric potential (voltage). The domain is more difficult to establish. Some cells or organelles have the same membrane potential throughout; neurons generally have different potentials at different points. These signals have very low energies, but are enough to make nervous systems work; they can be measured in aggregate by electrophysiology techniques.
The output of a thermocouple, which conveys temperature information.
The output of a pH meter which conveys acidity information.
Signal processing
Signal processing is the manipulation of signals. A common example is signal transmission between different locations. The embodiment of a signal in electrical form is made by a transducer that converts the signal from its original form to a waveform expressed as a current or a voltage, or electromagnetic radiation, for example, an optical signal or radio transmission. Once expressed as an electronic signal, the signal is available for further processing by electrical devices such as electronic amplifiers and filters, and can be transmitted to a remote location by a transmitter and received using radio receivers.
Signals and systems
In electrical engineering (EE) programs, signals are covered in a class and field of study known as signals and systems. Depending on the school, undergraduate EE students generally take the class as juniors or seniors, normally depending on the number and level of previous linear algebra and differential equation classes they have taken.
The field studies input and output signals, and the mathematical representations between them known as systems, in four domains: time, frequency, s and z. Since signals and systems are both studied in these four domains, there are 8 major divisions of study. As an example, when working with continuous-time signals (t), one might transform from the time domain to a frequency or s domain; or from discrete time (n) to frequency or z domains. Systems also can be transformed between these domains like signals, with continuous to s and discrete to z.
Signals and systems is a subset of the field of mathematical modeling. It involves circuit analysis and design via mathematical modeling and some numerical methods, and was updated several decades ago with dynamical systems tools including differential equations, and recently, Lagrangians. Students are expected to understand the modeling tools as well as the mathematics, physics, circuit analysis, and transformations between the 8 domains.
Because mechanical engineering (ME) topics like friction, damping, etc. have very close analogies in signal science (inductance, resistance, voltage, etc.), many of the tools originally used in ME transformations (Laplace and Fourier transforms, Lagrangians, sampling theory, probability, difference equations, etc.) have now been applied to signals, circuits, systems and their components, analysis and design in EE. Dynamical systems that involve noise, filtering and other random or chaotic attractors and repellers have now placed stochastic sciences and statistics between the more deterministic discrete and continuous functions in the field. (Deterministic as used here means signals that are completely determined as functions of time).
EE taxonomists are still not decided where signals and systems falls within the whole field of signal processing vs. circuit analysis and mathematical modeling, but the common link of the topics that are covered in the course of study has brightened boundaries with dozens of books, journals, etc. called "Signals and Systems", and used as text and test prep for the EE, as well as, recently, computer engineering exams.
Gallery
See also
Current loop – a signaling system in widespread use for process control
Signal-to-noise ratio
Notes
References
Further reading
Engineering concepts
Digital signal processing
Signal processing
Telecommunication theory
Brunt–Väisälä frequency
In atmospheric dynamics, oceanography, asteroseismology and geophysics, the Brunt–Väisälä frequency, or buoyancy frequency, is a measure of the stability of a fluid to vertical displacements such as those caused by convection. More precisely it is the frequency at which a vertically displaced parcel will oscillate within a statically stable environment. It is named after David Brunt and Vilho Väisälä. It can be used as a measure of atmospheric stratification.
Derivation for a general fluid
Consider a parcel of water or gas that has density ρ₀. This parcel is in an environment of other water or gas particles where the density of the environment is a function of height: ρ = ρ(z). If the parcel is displaced by a small vertical increment z′, and it maintains its original density so that its volume does not change, it will be subject to an extra gravitational force against its surroundings of:
ρ₀ ∂²z′/∂t² = −g [ρ(z) − ρ(z + z′)],
where g is the gravitational acceleration, and is defined to be positive. We make a linear approximation ρ(z + z′) − ρ(z) ≈ (∂ρ/∂z) z′, and move the density gradient to the RHS:
∂²z′/∂t² = (g/ρ₀)(∂ρ/∂z) z′ = −N² z′.
The above second-order differential equation has the following solution:
z′ = z′₀ e^(iNt),
where the Brunt–Väisälä frequency N is:
N = √( −(g/ρ₀) ∂ρ/∂z ).
For negative ∂ρ/∂z (density decreasing with height), N² is positive and the displacement has oscillating solutions (and N gives our angular frequency). If ∂ρ/∂z is positive, then there is runaway growth – i.e. the fluid is statically unstable.
In meteorology and astrophysics
For a gas parcel, the density will only remain fixed as assumed in the previous derivation if the pressure, , is constant with height, which is not true in an atmosphere confined by gravity. Instead, the parcel will expand adiabatically as the pressure declines. Therefore a more general formulation used in meteorology is:
N² = (g/θ) ∂θ/∂z, where θ is potential temperature, g is the local acceleration of gravity, and z is geometric height.
Since θ = T (p₀/p)^(R/c_p), where p₀ is a constant reference pressure, for a perfect gas this expression is equivalent to:
N² = g [ ∂(ln T)/∂z − ((γ − 1)/γ) ∂(ln p)/∂z ],
where in the last form γ = c_p/c_v, the adiabatic index. Using the ideal gas law, we can eliminate the temperature to express N² in terms of pressure and density:
N² = g [ (1/γ) ∂(ln p)/∂z − ∂(ln ρ)/∂z ].
This version is in fact more general than the first, as it applies when the chemical composition of the gas varies with height, and also for imperfect gases with variable adiabatic index, in which case γ = ∂(ln p)/∂(ln ρ), i.e. the derivative is taken at constant entropy, S.
If a gas parcel is pushed up and N² > 0, the air parcel will move up and down around the height where the density of the parcel matches the density of the surrounding air. If the air parcel is pushed up and N² = 0, the air parcel will not move any further. If the air parcel is pushed up and N² < 0 (i.e. the Brunt–Väisälä frequency is imaginary), then the air parcel will rise and rise unless N² becomes positive or zero again further up in the atmosphere. In practice this leads to convection, and hence the Schwarzschild criterion for stability against convection (or the Ledoux criterion if there is compositional stratification) is equivalent to the statement that N² should be positive.
The Brunt–Väisälä frequency commonly appears in the thermodynamic equations for the atmosphere and in the structure of stars.
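A minimal Python sketch of estimating N from a discrete potential-temperature profile using the meteorological formula above (the sample profile values are invented for illustration):
    import numpy as np

    g = 9.81                                        # m/s^2
    z = np.array([0., 500., 1000., 1500.])          # heights in m (illustrative)
    theta = np.array([300., 301.5, 303., 304.5])    # potential temperature in K

    dtheta_dz = np.gradient(theta, z)               # finite-difference derivative
    N2 = (g / theta) * dtheta_dz                    # N^2 = (g/theta) dtheta/dz
    N = np.sqrt(N2)                                 # valid where N^2 > 0 (stable)

    print(N)                     # about 0.01 s^-1, a typical tropospheric value
    print(2 * np.pi / N / 60)    # corresponding oscillation period, in minutes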
In oceanography
In the ocean, where salinity is important, or in fresh water lakes near freezing, where density is not a linear function of temperature, the corresponding expression is N² = −(g/ρ) ∂ρ_pot/∂z, where ρ_pot, the potential density, depends on both temperature and salinity. An example of Brunt–Väisälä oscillation in a density-stratified liquid can be observed in the 'Magic Cork' movie.
Context
The concept derives from Newton's Second Law when applied to a fluid parcel in the presence of a background stratification (in which the density changes in the vertical – i.e. the density can be said to have multiple vertical layers). The parcel, perturbed vertically from its starting position, experiences a vertical acceleration. If the acceleration is back towards the initial position, the stratification is said to be stable and the parcel oscillates vertically. In this case, N² > 0 and the angular frequency of oscillation is given by N. If the acceleration is away from the initial position, the stratification is unstable. In this case, overturning or convection generally ensues.
The Brunt–Väisälä frequency relates to internal gravity waves: it is the frequency when the waves propagate horizontally; and it provides a useful description of atmospheric and oceanic stability.
See also
Buoyancy
Bénard cell
References
Atmospheric thermodynamics
Atmospheric dynamics
Fluid dynamics
Oceanography
Buoyancy
Equifinality
Equifinality is the principle that in open systems a given end state can be reached by many potential means. The term and concept are due to Hans Driesch, the German developmental biologist, and were later applied by Ludwig von Bertalanffy, the Austrian founder of general systems theory, and by William T. Powers, the founder of perceptual control theory. Driesch and von Bertalanffy preferred this term, in contrast to "goal", in describing complex systems' similar or convergent behavior. Powers emphasised the flexibility of response, since the term stresses that the same end state may be achieved via many different paths or trajectories.
In closed systems, a direct cause-and-effect relationship exists between the initial condition and the final state of the system: When a computer's 'on' switch is pushed, the system powers up. Open systems (such as biological and social systems), however, operate quite differently. The idea of equifinality suggests that similar results may be achieved with different initial conditions and in many different ways. This phenomenon has also been referred to as isotelesis (from Greek ἴσος isos "equal" and τέλεσις telesis: "the intelligent direction of effort toward the achievement of an end") when it occurs in games involving superrationality.
Overview
In business, equifinality implies that firms may establish similar competitive advantages based on substantially different competencies.
In psychology, equifinality refers to how different early experiences in life (e.g., parental divorce, physical abuse, parental substance abuse) can lead to similar outcomes (e.g., childhood depression). In other words, there are many different early experiences that can lead to the same psychological disorder.
In archaeology, equifinality refers to how different historical processes may lead to a similar outcome or social formation. For example, the development of agriculture or the bow and arrow occurred independently in many different areas of the world, yet for different reasons and through different historical trajectories. This highlights that generalizations based on cross-cultural comparisons cannot be made uncritically.
In Earth and environmental sciences, two general types of equifinality are distinguished: process equifinality (concerned with real-world open systems) and model equifinality (concerned with conceptual open systems). For example, process equifinality in geomorphology indicates that similar landforms might arise as a result of quite different sets of processes. Model equifinality refers to a condition where distinct configurations of model components (e.g. distinct model parameter values) can lead to similar or equally acceptable simulations (or representations of the real-world process of interest). This similarity or equal acceptability is conditional on the objective functions and criteria of acceptability defined by the modeler. While model equifinality has various facets, parameter and structural equifinality are the most widely recognized and studied in modeling. Equifinality (particularly parameter equifinality) and Monte Carlo experiments are the foundation of the GLUE method, which was the first generalised method for uncertainty assessment in hydrological modeling. GLUE is now widely used within and beyond environmental modeling.
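The flavour of parameter equifinality can be seen in a toy, GLUE-style Monte Carlo experiment; the exponential-recession model, parameter ranges and acceptance threshold below are invented for illustration and are not taken from the GLUE literature:
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 50)

    def model(a, b):
        # toy "catchment" model: a two-parameter recession curve
        return a * np.exp(-b * t)

    observed = model(2.0, 0.5) + rng.normal(0, 0.05, t.size)   # synthetic observations

    # Monte Carlo sampling of the parameter space
    a_samples = rng.uniform(0.5, 4.0, 10_000)
    b_samples = rng.uniform(0.1, 1.5, 10_000)
    rmse = np.array([np.sqrt(np.mean((model(a, b) - observed) ** 2))
                     for a, b in zip(a_samples, b_samples)])

    behavioural = rmse < 0.1      # acceptance criterion chosen by the modeller
    print(behavioural.sum(), "distinct parameter sets are 'behavioural'")
    print("a range:", a_samples[behavioural].min(), a_samples[behavioural].max())
    print("b range:", b_samples[behavioural].min(), b_samples[behavioural].max())
Many distinct (a, b) pairs reproduce the observations about equally well; it is exactly this spread that GLUE propagates into uncertainty bounds on model predictions.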
See also
GLUE – Generalized Likelihood Uncertainty Estimation (when modeling environmental systems there are many different model structures and parameter sets that may be behavioural or acceptable in reproducing the behaviour of that system)
TMTOWTDI – Computer programming maxim: "there is more than one way to do it"
Underdetermination
Consilience
Convergent evolution
Teleonomy
Degeneracy (biology)
Kruskal's principle
Multicollinearity
References
Publications
Bertalanffy, Ludwig von, General Systems Theory, 1968
Beven, K.J. and Binley, A.M., 1992. The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6, pp. 279–298.
Beven, K.J. and Freer, J., 2001a. Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems, Journal of Hydrology, 249, 11–29.
Croft, Gary W., Glossary of Systems Theory and Practice for the Applied Behavioral Sciences, Syntropy Incorporated, Freeland, WA, Prepublication Review Copy, 1996
Durkin, James E. (ed.), Living Groups: Group Psychotherapy and General System Theory, Brunner/Mazel, New York, 1981
Mash, E. J., & Wolfe, D. A. (2005). Abnormal Child Psychology (3rd edition). Wadsworth Canada. pp. 13–14.
Weisbord, Marvin R., Productive Workplaces: Organizing and Managing for Dignity, Meaning, and Community, Jossey-Bass Publishers, San Francisco, 1987
Tang, J.Y. and Zhuang, Q. (2008). Equifinality in parameterization of process-based biogeochemistry models: A significant uncertainty source to the estimation of regional carbon dynamics, J. Geophys. Res., 113, G04010.
Systems theory
Duhamel's principle
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity.
The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy u in Rⁿ. Indicating by u_t the time derivative of u, the initial value problem is
u_t(x, t) − Δu(x, t) = 0, u(x, 0) = g(x),
where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation,
u_t(x, t) − Δu(x, t) = f(x, t), u(x, 0) = 0,
corresponds to adding an external heat energy f(x, t) at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice t = t₀. By linearity, one can add up (integrate) the resulting solutions through time t₀ and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle.
General considerations
Formally, consider a linear inhomogeneous evolution equation for a function
with spatial domain in , of the form
where L is a linear differential operator that involves no time derivatives.
Duhamel's principle is, formally, that the solution to this problem is
where is the solution of the problem
The integrand is the retarded solution , evaluated at time , representing the effect, at the later time , of an infinitesimal force applied at time . (The operator can be thought of as an inverse of the operator for the Cauchy problem with initial condition .)
Duhamel's principle also holds for linear systems (with vector-valued functions ), and this in turn furnishes a generalization to higher t derivatives, such as those appearing in the wave equation (see below). Validity of the principle depends on being able to solve the homogeneous problem in an appropriate function space and that the solution should exhibit reasonable dependence on parameters so that the integral is well-defined. Precise analytic conditions on and depend on the particular application.
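The principle is easy to check numerically in the simplest setting, a scalar linear ODE u′(t) = λu(t) + f(t) with u(0) = 0, whose homogeneous solution operator is multiplication by e^(λ(t − s)). A short Python sketch (λ, the forcing f and the grid are arbitrary choices for this test):
    import numpy as np

    lam = -0.7                      # rate constant of the homogeneous equation u' = lam*u
    f = lambda s: np.sin(3 * s)     # inhomogeneous forcing term (arbitrary choice)
    t = np.linspace(0, 5, 2001)
    dt = t[1] - t[0]

    # Duhamel's formula: u(t) = integral_0^t exp(lam*(t - s)) f(s) ds  (trapezoid rule)
    u_duhamel = np.zeros_like(t)
    for i in range(1, len(t)):
        s = t[:i + 1]
        integrand = np.exp(lam * (t[i] - s)) * f(s)
        u_duhamel[i] = np.sum((integrand[1:] + integrand[:-1]) / 2) * dt

    # Direct numerical solution of u' = lam*u + f(t), u(0) = 0, by classical RK4
    u = np.zeros_like(t)
    rhs = lambda ti, ui: lam * ui + f(ti)
    for i in range(len(t) - 1):
        k1 = rhs(t[i], u[i])
        k2 = rhs(t[i] + dt / 2, u[i] + dt / 2 * k1)
        k3 = rhs(t[i] + dt / 2, u[i] + dt / 2 * k2)
        k4 = rhs(t[i] + dt, u[i] + dt * k3)
        u[i + 1] = u[i] + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    print(np.max(np.abs(u - u_duhamel)))   # small: the two constructions agree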
Examples
Wave equation
The linear wave equation models the displacement of an idealized dispersionless one-dimensional string, in terms of derivatives with respect to time and space :
The function , in natural units, represents an external force applied to string at the position . In order to be a suitable physical model for nature, it should be possible to solve it for any initial state that the string is in, specified by its initial displacement and velocity:
More generally, we should be able to solve the equation with data specified on any slice:
To evolve a solution from any given time slice to , the contribution of the force must be added to the solution. That contribution comes from changing the velocity of the string by . That is, to get the solution at time from the solution at time , we must add to it a new (forward) solution of the homogeneous (no external forces) wave equation
with the initial conditions
A solution to this equation is achieved by straightforward integration:
(The expression in parentheses is just in the notation of the general method above.) So a solution of the original initial value problem is obtained by starting with a solution to the problem with the same prescribed initial values problem but with zero initial displacement, and adding to that (integrating) the contributions from the added force in the time intervals from T to T+dT:
Constant-coefficient linear ODE
Duhamel's principle is the result that an inhomogeneous, linear, partial differential equation can be solved by first finding the solution for a step input, and then superposing using Duhamel's integral.
Suppose we have a constant coefficient, -th order inhomogeneous ordinary differential equation.
where
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First let G solve
Define , with being the characteristic function of the interval . Then we have
in the sense of distributions. Therefore
solves the ODE.
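For a concrete second-order instance of this construction (the harmonic-oscillator operator, forcing and grid below are assumptions made for the illustration): with the operator u″ + ω²u, the function G(t) = sin(ωt)/ω solves the homogeneous equation with G(0) = 0 and G′(0) = 1, and convolving G·χ_[0,∞) with the right-hand side reproduces a solution, which the Python sketch below checks by finite differences:
    import numpy as np

    omega = 2.0
    f = lambda s: np.cos(0.5 * s)          # forcing term (arbitrary choice)
    t = np.linspace(0, 10, 4001)
    dt = t[1] - t[0]

    G = np.sin(omega * t) / omega          # homogeneous solution with G(0)=0, G'(0)=1

    # u(t) = integral_0^t G(t - s) f(s) ds, evaluated as a discrete convolution
    u = np.convolve(G, f(t))[:t.size] * dt

    # Check the ODE u'' + omega^2 u = f by second-order finite differences
    u_dd = (u[2:] - 2 * u[1:-1] + u[:-2]) / dt**2
    residual = u_dd + omega**2 * u[1:-1] - f(t[1:-1])
    print(np.max(np.abs(residual)))        # small, up to discretization error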
Constant-coefficient linear PDE
More generally, suppose we have a constant coefficient inhomogeneous partial differential equation
where
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First, taking the Fourier transform in we have
Assume that is an -th order ODE in . Let be the coefficient of the highest order term of .
Now for every let solve
Define . We then have
in the sense of distributions. Therefore
solves the PDE (after transforming back to ).
See also
Retarded potential
Propagator
Impulse response
Variation of parameters
References
Wave mechanics
Partial differential equations
Mathematical principles
Rigid transformation
In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points.
The rigid transformations include rotations, translations, reflections, or any sequence of these. Reflections are sometimes excluded from the definition of a rigid transformation by requiring that the transformation also preserve the handedness of objects in the Euclidean space. (A reflection would not preserve handedness; for instance, it would transform a left hand into a right hand.) To avoid ambiguity, a transformation that preserves handedness is known as a rigid motion, a Euclidean motion, or a proper rigid transformation.
In dimension two, a rigid motion is either a translation or a rotation. In dimension three, every rigid motion can be decomposed as the composition of a rotation and a translation, and is thus sometimes called a rototranslation. In dimension three, all rigid motions are also screw motions (this is Chasles' theorem).
In dimension at most three, any improper rigid transformation can be decomposed into an improper rotation followed by a translation, or into a sequence of reflections.
Any object will keep the same shape and size after a proper rigid transformation.
All rigid transformations are examples of affine transformations. The set of all (proper and improper) rigid transformations is a mathematical group called the Euclidean group, denoted E(n) for n-dimensional Euclidean spaces. The set of rigid motions is called the special Euclidean group, and denoted SE(n).
In kinematics, rigid motions in a 3-dimensional Euclidean space are used to represent displacements of rigid bodies. According to Chasles' theorem, every rigid transformation can be expressed as a screw motion.
Formal definition
A rigid transformation is formally defined as a transformation that, when acting on any vector v, produces a transformed vector T(v) of the form
T(v) = R v + t,
where Rᵀ R = I (i.e., R is an orthogonal transformation), and t is a vector giving the translation of the origin.
A proper rigid transformation has, in addition,
det(R) = +1,
which means that R does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). Indeed, when an orthogonal transformation matrix produces a reflection, its determinant is −1.
Distance formula
A measure of distance between points, or metric, is needed in order to confirm that a transformation is rigid. The Euclidean distance formula for Rⁿ is the generalization of the Pythagorean theorem. The formula gives the distance squared between two points X and Y as the sum of the squares of the distances along the coordinate axes, that is
d(X, Y)² = (X − Y) · (X − Y) = (X₁ − Y₁)² + (X₂ − Y₂)² + … + (Xₙ − Yₙ)²,
where X = (X₁, X₂, …, Xₙ) and Y = (Y₁, Y₂, …, Yₙ), and the dot denotes the scalar product.
Using this distance formula, a rigid transformation g has the property
d(g(X), g(Y)) = d(X, Y).
Translations and linear transformations
A translation of a vector space adds a vector a to every vector in the space, which means it is the transformation
T(v) = v + a.
It is easy to show that this is a rigid transformation, by showing that the distance between translated vectors equals the distance between the original vectors:
d(v + a, w + a)² = ((v + a) − (w + a)) · ((v + a) − (w + a)) = (v − w) · (v − w) = d(v, w)².
A linear transformation L of a vector space preserves linear combinations,
L(c₁v + c₂w) = c₁ L(v) + c₂ L(w).
A linear transformation L can be represented by a matrix, which means
L: v ↦ [L] v,
where [L] is an n × n matrix.
A linear transformation is a rigid transformation if it satisfies the condition,
d(L(v), L(w))² = d(v, w)²,
that is
(L(v) − L(w)) · (L(v) − L(w)) = (v − w) · (v − w).
Now use the fact that the scalar product of two vectors v · w can be written as the matrix operation vᵀ w, where T denotes the matrix transpose; we have
(v − w)ᵀ [L]ᵀ [L] (v − w) = (v − w)ᵀ (v − w).
Thus, the linear transformation L is rigid if its matrix satisfies the condition
[L]ᵀ [L] = I,
where I is the identity matrix. Matrices that satisfy this condition are called orthogonal matrices. This condition actually requires the columns of these matrices to be orthogonal unit vectors.
Matrices that satisfy this condition form a mathematical group under the operation of matrix multiplication called the orthogonal group of n×n matrices and denoted O(n).
Compute the determinant of the condition for an orthogonal matrix to obtain
det([L]ᵀ [L]) = det([L])² = det(I) = 1,
which shows that the matrix [L] can have a determinant of either +1 or −1. Orthogonal matrices with determinant −1 are reflections, and those with determinant +1 are rotations. Notice that the set of orthogonal matrices can be viewed as consisting of two manifolds in R^(n×n) separated by the set of singular matrices.
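These conditions are easy to verify numerically; a short Python sketch for the plane (the rotation angle and translation vector are arbitrary choices):
    import numpy as np

    theta = 0.7                                    # rotation angle, in radians
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.array([2.0, -1.0])                      # translation vector

    rigid = lambda v: R @ v + t                    # v -> R v + t

    # Orthogonality and determinant checks
    print(np.allclose(R.T @ R, np.eye(2)))         # True: R^T R = I
    print(np.isclose(np.linalg.det(R), 1.0))       # True: proper (no reflection)

    # Distance preservation for a random pair of points
    rng = np.random.default_rng(1)
    v, w = rng.normal(size=2), rng.normal(size=2)
    print(np.isclose(np.linalg.norm(rigid(v) - rigid(w)),
                     np.linalg.norm(v - w)))       # True: distances are preserved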
The set of rotation matrices is called the special orthogonal group, and denoted SO(n). It is an example of a Lie group because it has the structure of a manifold.
See also
Deformation (mechanics)
Motion (geometry)
Rigid body dynamics
References
Functions and mappings
Kinematics
Euclidean symmetries
Structure and Interpretation of Computer Programs
Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture. It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation.
MIT Press published the first edition in 1984, and the second edition in 1996. It was formerly used as the textbook for MIT's introductory course in computer science. SICP focuses on discovering general patterns for solving specific problems, and building software systems that make use of those patterns.
MIT Press published the JavaScript edition in 2022.
Content
The book describes computer science concepts using Scheme, a dialect of Lisp. It also uses a virtual register machine and assembler to implement Lisp interpreters and compilers.
Topics in the book are:
Chapter 1: Building Abstractions with Procedures
The Elements of Programming
Procedures and the Processes They Generate
Formulating Abstractions with Higher-Order Procedures
Chapter 2: Building Abstractions with Data
Introduction to Data Abstraction
Hierarchical Data and the Closure Property
Symbolic Data
Multiple Representations for Abstract Data
Systems with Generic Operations
Chapter 3: Modularity, Objects, and State
Assignment and Local State
The Environment Model of Evaluation
Modeling with Mutable Data
Concurrency: Time Is of the Essence
Streams
Chapter 4: Metalinguistic Abstraction
The Metacircular Evaluator
Variations on a Scheme – Lazy Evaluation
Variations on a Scheme – Nondeterministic Computing
Logic Programming
Chapter 5: Computing with Register Machines
Designing Register Machines
A Register-Machine Simulator
Storage Allocation and Garbage Collection
The Explicit-Control Evaluator
Compilation
Characters
Several fictional characters appear in the book:
Alyssa P. Hacker, a Lisp hacker
Ben Bitdiddle
Cy D. Fect, a "reformed C programmer"
Eva Lu Ator
Lem E. Tweakit
Louis Reasoner, a loose reasoner
License
The book is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.
Coursework
The book was used as the textbook for MIT's former introductory programming course, 6.001, from fall 1984 through its last semester, in fall 2007. Other schools also made use of the book as a course textbook.
Reception
Byte recommended SICP in 1986 "for professional programmers who are really interested in their profession". The magazine said that the book was not easy to read, but that it would expose experienced programmers to both old and new topics.
Influence
SICP has been influential in computer science education, and several later books have been inspired by its style.
Structure and Interpretation of Classical Mechanics (SICM), another book that uses Scheme as an instructional element, by Gerald Jay Sussman and Jack Wisdom
Software Design for Flexibility, by Chris Hanson and Gerald Jay Sussman
How to Design Programs (HtDP), which intends to be a more accessible book for introductory Computer Science, and to address perceived incongruities in SICP
Essentials of Programming Languages (EoPL), a book for Programming Languages courses
See also
Compilers: Principles, Techniques, and Tools also known as The Dragon Book
References
External links
Video lectures
Book compiled from TeX source
Structure and Interpretation of Computer Programs. Interactive Version
1984 non-fiction books
1996 non-fiction books
Computer science books
Computer programming books
Creative Commons-licensed books
Massachusetts Institute of Technology
Scheme (programming language)
Lisp (programming language)
Fourier transform
In physics, engineering and mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.
The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.
The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 'position space' to a function of momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rⁿ, notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S¹, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.
Definition
The Fourier transform is an analysis process, decomposing a complex-valued function into its constituent frequencies and their amplitudes. The inverse process is synthesis, which recreates from its transform.
We can start with an analogy, the Fourier series, which analyzes a function $f(x)$ on a bounded interval of length $P$, for some positive real number $P$. The constituent frequencies are a discrete set of harmonics at frequencies $\tfrac{n}{P}$, $n \in \mathbb{Z}$, whose amplitude and phase are given by the analysis formula:
$$c_n = \frac{1}{P} \int_{P} f(x)\, e^{-2\pi i \frac{n}{P} x}\, dx.$$
The actual Fourier series is the synthesis formula:
$$f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i \frac{n}{P} x}.$$
On an unbounded interval, the constituent frequencies are a continuum, $\xi \in \mathbb{R}$, and the coefficients $c_n$ are replaced by a function:
$$\hat f(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i \xi x}\, dx.$$
Evaluating this for all values of $\xi$ produces the frequency-domain function. The integral can diverge at some frequencies. But it converges for all frequencies when $f(x)$ decays with all derivatives as $x \to \pm\infty$ (see Schwartz function). By the Riemann–Lebesgue lemma, the transformed function $\hat f$ also decays with all derivatives.
The complex number $\hat f(\xi)$, in polar coordinates, conveys both amplitude and phase of frequency $\xi$. The intuitive interpretation of the integral is that the effect of multiplying $f(x)$ by $e^{-2\pi i \xi x}$ is to subtract $\xi$ from every frequency component of the function. Only the component that was at frequency $\xi$ can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero.
The corresponding synthesis formula is:
$$f(x) = \int_{-\infty}^{\infty} \hat f(\xi)\, e^{2\pi i \xi x}\, d\xi,$$
which is a representation of $f(x)$ as a weighted summation of complex exponential functions.
This is also known as the Fourier inversion theorem, and was first introduced in Fourier's Analytical Theory of Heat.
The functions and are referred to as a Fourier transform pair. A common notation for designating transform pairs is:
for example
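As a simple numerical illustration of the analysis formula (a sketch assuming Python with NumPy; the grid, the window, and the test frequencies are arbitrary choices, not from the article), the following approximates the integral by a Riemann sum for the Gaussian $e^{-\pi x^2}$, whose transform is itself:

```python
import numpy as np

# Riemann-sum approximation of  F(xi) = integral of f(x) exp(-2*pi*i*x*xi) dx
# for the Gaussian f(x) = exp(-pi*x^2), whose transform is again exp(-pi*xi^2).
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

def fourier(xi):
    return np.sum(f * np.exp(-2j * np.pi * x * xi)) * dx

for xi in [0.0, 0.5, 1.0, 2.0]:
    approx = fourier(xi)
    exact = np.exp(-np.pi * xi**2)
    print(xi, approx.real, exact)   # imaginary part is ~0 for this even, real f
```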
Definition for Lebesgue integrable functions
Until now, we have been dealing with Schwartz functions, which decay rapidly at infinity, with all derivatives. This excludes many functions of practical importance from the definition, such as the rect function. A measurable function is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite:
Two measurable functions are equivalent if they are equal except on a set of measure zero. The set of all equivalence classes of integrable functions is denoted . Then:
The integral is well-defined for all because of the assumption . (It can be shown that the function is bounded and uniformly continuous in the frequency domain, and moreover, by the Riemann–Lebesgue lemma, it is zero at infinity.)
However, the class of Lebesgue integrable functions is not ideal from the point of view of the Fourier transform because there is no easy characterization of the image, and thus no easy characterization of the inverse transform.
Unitarity and definition for square integrable functions
While defines the Fourier transform for (complex-valued) functions in , it is easy to see that it is not well-defined for other integrability classes, most importantly . For functions in , and with the conventions of , the Fourier transform is a unitary operator with respect to the Hilbert inner product on , restricted to the dense subspace of integrable functions. Therefore, it admits a unique continuous extension to a unitary operator on , also called the Fourier transform. This extension is important in part because the Fourier transform preserves the space so that, unlike the case of , the Fourier transform and inverse transform are on the same footing, being transformations of the same space of functions to itself.
Importantly, for functions in , the Fourier transform is no longer given by (interpreted as a Lebesgue integral). For example, the function is in but not , so the integral diverges. In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. and each gives three rigorous ways of extending the Fourier transform to square integrable functions using this procedure.
The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on and an algebra homomorphism from to , without renormalizing the Lebesgue measure.
Angular frequency (ω)
When the independent variable represents time (often denoted by ), the transform variable represents frequency (often denoted by ). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, whose units are radians per second.
The substitution into produces this convention, where function is relabeled
Unlike the definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the factor evenly between the transform and its inverse, which leads to another convention:
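For reference, the three conventions just described are commonly written as follows; treat this as one standard choice of notation rather than the only one. Ordinary frequency:
$$\hat f(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx, \qquad f(x) = \int_{-\infty}^{\infty} \hat f(\xi)\, e^{2\pi i x \xi}\, d\xi.$$
Angular frequency, non-unitary:
$$\hat f(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx, \qquad f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat f(\omega)\, e^{i\omega x}\, d\omega.$$
Angular frequency, unitary:
$$\hat f(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx, \qquad f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat f(\omega)\, e^{i\omega x}\, d\omega.$$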
Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites.
Extension of the definition
For , the Fourier transform can be defined on by Marcinkiewicz interpolation.
The Fourier transform can be defined on domains other than the real line. The Fourier transform on Euclidean space and the Fourier transform on locally abelian groups are discussed later in the article.
The Fourier transform can also be defined for tempered distributions, dual to the space of rapidly decreasing functions (Schwartz functions). A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives. The space of Schwartz functions is denoted by , and its dual is the space of tempered distributions. It is easy to see, by differentiating under the integral and applying the Riemann-Lebesgue lemma, that the Fourier transform of a Schwartz function (defined by the formula ) is again a Schwartz function. The Fourier transform of a tempered distribution is defined by duality:
Many other characterizations of the Fourier transform exist. For example, one uses the Stone–von Neumann theorem: the Fourier transform is the unique unitary intertwiner for the symplectic and Euclidean Schrödinger representations of the Heisenberg group.
Background
History
In 1822, Fourier claimed (see ) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since.
Complex sinusoids
In general, the coefficients are complex numbers, which have two equivalent forms (see Euler's formula):
The product with has these forms:
It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula.
Negative frequency
Euler's formula introduces the possibility of negative And is defined Only certain complex-valued have transforms (See Analytic signal. A simple example is ) But negative frequency is necessary to characterize all other complex-valued found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others.
For a real-valued has the symmetry property (see below). This redundancy enables to distinguish from But of course it cannot tell us the actual sign of because and are indistinguishable on just the real numbers line.
Fourier transform for periodic functions
The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in the definition to exist, the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions.
This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If $f(x)$ is a periodic function, with period $P$, that has a convergent Fourier series, then:
$$\hat f(\xi) = \sum_{n=-\infty}^{\infty} c_n\, \delta\!\left(\xi - \tfrac{n}{P}\right),$$
where are the Fourier series coefficients of , and is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients.
Sampling the Fourier transform
The Fourier transform of an integrable function can be sampled at regular intervals of arbitrary length These samples can be deduced from one cycle of a periodic function which has Fourier series coefficients proportional to those samples by the Poisson summation formula:
The integrability of ensures the periodic summation converges. Therefore, the samples can be determined by Fourier series analysis:
When has compact support, has a finite number of terms within the interval of integration. When does not have compact support, numerical evaluation of requires an approximation, such as tapering or truncating the number of terms.
Example
The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next 2 images show the product which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of and oscillate at the same rate and in phase, whereas and oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1.
However, when you try to measure a frequency that is not present, both the real and imaginary component of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function
To reinforce an earlier point, the reason for the response at $\xi = -3$ Hz is that $\cos(2\pi 3 t)$ and $\cos(2\pi(-3)t)$ are indistinguishable. The transform of $e^{i 2\pi 3 t}$ times the same envelope would have just one response, whose amplitude is the integral of the smooth envelope, whereas the transform of the cosine splits that amplitude between the responses at $\pm 3$ Hz.
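A numerical version of this example (a sketch assuming Python with NumPy, and assuming the unit-width Gaussian envelope $e^{-\pi t^2}$, which reproduces the values quoted above):

```python
import numpy as np

# f(t) = cos(2*pi*3*t) * exp(-pi*t^2): a 3 Hz cosine under a Gaussian envelope
# (assumed form of the figure's signal).  Its transform is
# 0.5*exp(-pi*(xi-3)^2) + 0.5*exp(-pi*(xi+3)^2).
t = np.linspace(-10, 10, 40001)
dt = t[1] - t[0]
f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

def ft(xi):
    return np.sum(f * np.exp(-2j * np.pi * t * xi)) * dt

print(abs(ft(3.0)))   # ~0.5 : strong response at +3 Hz
print(abs(ft(-3.0)))  # ~0.5 : mirror response at -3 Hz (real signal)
print(abs(ft(5.0)))   # ~0   : essentially no 5 Hz content
```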
Properties of the Fourier transform
Let and represent integrable functions Lebesgue-measurable on the real line satisfying:
We denote the Fourier transforms of these functions as and respectively.
Basic properties
The Fourier transform has the following basic properties:
Linearity
Time shifting
Frequency shifting
Time scaling
The case leads to the time-reversal property:
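In the ordinary-frequency convention these properties take the following standard forms (other conventions change only constant factors):
$$a f(x) + b g(x) \;\longleftrightarrow\; a \hat f(\xi) + b \hat g(\xi) \quad \text{(linearity)},$$
$$f(x - x_0) \;\longleftrightarrow\; e^{-2\pi i x_0 \xi}\, \hat f(\xi) \quad \text{(time shifting)},$$
$$e^{2\pi i \xi_0 x} f(x) \;\longleftrightarrow\; \hat f(\xi - \xi_0) \quad \text{(frequency shifting)},$$
$$f(a x) \;\longleftrightarrow\; \frac{1}{|a|}\, \hat f\!\left(\frac{\xi}{a}\right), \quad a \neq 0 \quad \text{(time scaling)},$$
with the case $a = -1$ giving the time-reversal property $f(-x) \longleftrightarrow \hat f(-\xi)$.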
Symmetry
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:
From this, various relationships are apparent, for example:
The transform of a real-valued function is the conjugate symmetric function Conversely, a conjugate symmetric transform implies a real-valued time-domain.
The transform of an imaginary-valued function is the conjugate antisymmetric function and the converse is true.
The transform of a conjugate symmetric function is the real-valued function and the converse is true.
The transform of a conjugate antisymmetric function is the imaginary-valued function and the converse is true.
Conjugation
(Note: the ∗ denotes complex conjugation.)
In particular, if is real, then is even symmetric (aka Hermitian function):
And if is purely imaginary, then is odd symmetric:
Real and imaginary part in time
Zero frequency component
Substituting in the definition, we obtain:
The integral of over its domain is known as the average value or DC bias of the function.
Invertibility and periodicity
Under suitable conditions on the function , it can be recovered from its Fourier transform . Indeed, denoting the Fourier transform operator by , so , then for suitable functions, applying the Fourier transform twice simply flips the function: , which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: . In particular the Fourier transform is invertible (under suitable conditions).
More precisely, defining the parity operator such that , we have:
These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem.
This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the -axis and frequency as the -axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis.
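A finite-dimensional check of this fourfold periodicity (a sketch assuming Python with NumPy, using the unnormalised DFT as a stand-in for the Fourier transform; the factors of N below come only from that lack of normalisation):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def F(v):
    # Unnormalised DFT, the discrete stand-in for the Fourier transform.
    return np.fft.fft(v)

parity = x[(-np.arange(N)) % N]                 # x evaluated at "-n" (mod N)

assert np.allclose(F(F(x)), N * parity)         # F^2 = N * (reversal)
assert np.allclose(F(F(F(F(x)))), N**2 * x)     # F^4 = N^2 * identity
```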
Units
The frequency variable must have inverse units to the units of the original function's domain (typically named or ). For example, if is measured in seconds, should be in cycles per second or hertz. If the scale of time is in units of 2π seconds, then another Greek letter typically is used instead to represent angular frequency (where ω = 2πξ) in units of radians per second. If using for units of length, then must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of and measured in units of , and the other which is the range of and measured in inverse units to the units of . These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition.
In general, must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series.
That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants.
In other conventions, the Fourier transform has in the exponent instead of , and vice versa for the inversion formula. This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for the frequency of a complex wave. It simply means that is the amplitude of the wave instead of the wave (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve have it replaced by . In electrical engineering the letter j is typically used for the imaginary unit instead of i, because i is used for current.
When using dimensionless units, the constant factors might not even be written in the transform definition. For instance, in probability theory, the characteristic function of the probability density function of a random variable of continuous type is defined without a negative sign in the exponential, and since the units of are ignored, there is no 2π either:
(In probability theory, and in mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because so many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms".)
From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group.
Uniform continuity and the Riemann–Lebesgue lemma
The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties.
The Fourier transform of any integrable function is uniformly continuous and
By the Riemann–Lebesgue lemma,
However, need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent.
It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both and are integrable, the inverse equality
holds for almost every . As a result, the Fourier transform is injective on .
Plancherel theorem and Parseval's theorem
Main page: Plancherel theoremLet and be integrable, and let and be their Fourier transforms. If and are also square-integrable, then the Parseval formula follows:
where the bar denotes complex conjugation.
The Plancherel theorem, which follows from the above, states that
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on . On , this extension agrees with the original Fourier transform defined on , thus enlarging the domain of the Fourier transform to (and consequently to for ). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was proved only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem.
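A discrete analogue can be checked directly (a sketch assuming Python with NumPy; the 1/N factor compensates for NumPy's unnormalised DFT and would disappear with a unitary normalisation):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
F, G = np.fft.fft(f), np.fft.fft(g)

# Discrete Parseval relation: <f, g> = (1/N) <F, G>.
assert np.allclose(np.vdot(g, f), np.vdot(G, F) / N)

# Plancherel: the "energy" of the signal is preserved.
assert np.allclose(np.sum(np.abs(f)**2), np.sum(np.abs(F)**2) / N)
```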
See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
Poisson summation formula
The Poisson summation formula (PSF) is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. The Poisson summation formula says that for sufficiently regular functions $f$,
$$\sum_{n=-\infty}^{\infty} f(n) = \sum_{k=-\infty}^{\infty} \hat f(k).$$
It has a variety of useful forms that are derived from the basic one by application of the Fourier transform's scaling and time-shifting properties. The formula has applications in engineering, physics, and number theory. The frequency-domain dual of the standard Poisson summation formula is also called the discrete-time Fourier transform.
Poisson summation is generally associated with the physics of periodic media, such as heat conduction on a circle. The fundamental solution of the heat equation on a circle is called a theta function. It is used in number theory to prove the transformation properties of theta functions, which turn out to be a type of modular form, and it is connected more generally to the theory of automorphic forms where it appears on one side of the Selberg trace formula.
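A quick numerical check of the summation formula (a sketch assuming Python with NumPy) for the Gaussian $f(x) = e^{-\pi a x^2}$, whose transform under the ordinary-frequency convention is $a^{-1/2} e^{-\pi \xi^2 / a}$; the resulting identity is exactly the Jacobi theta transformation mentioned above:

```python
import numpy as np

# Poisson summation for f(x) = exp(-pi*a*x^2), with transform
# fhat(xi) = a**(-1/2) * exp(-pi*xi^2/a):  sum_n f(n) == sum_k fhat(k).
a = 0.37
n = np.arange(-50, 51)          # the omitted tails are negligibly small

lhs = np.sum(np.exp(-np.pi * a * n**2))
rhs = np.sum(np.exp(-np.pi * n**2 / a)) / np.sqrt(a)

print(lhs, rhs)                 # the two sums agree to machine precision
```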
Differentiation
Suppose $f(x)$ is an absolutely continuous differentiable function, and both $f$ and its derivative $f'$ are integrable. Then the Fourier transform of the derivative is given by
$$\widehat{f'}(\xi) = 2\pi i \xi\, \hat f(\xi).$$
More generally, the Fourier transformation of the $n$th derivative $f^{(n)}$ is given by
$$\widehat{f^{(n)}}(\xi) = (2\pi i \xi)^n\, \hat f(\xi).$$
Analogously, $\mathcal{F}\{x f(x)\} = \frac{i}{2\pi} \frac{d}{d\xi} \hat f(\xi)$, so $\mathcal{F}\{x^n f(x)\} = \left(\frac{i}{2\pi}\right)^n \frac{d^n}{d\xi^n} \hat f(\xi)$.
By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb " is smooth if and only if quickly falls to 0 for ." By using the analogous rules for the inverse Fourier transform, one can also say " quickly falls to 0 for if and only if is smooth."
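The differentiation rule is also the basis of spectral differentiation. Here is a sketch (assuming Python with NumPy; the periodic test function is an arbitrary choice) that multiplies the DFT by $2\pi i \xi$ and inverts:

```python
import numpy as np

# Differentiation rule, checked with the DFT on a periodic grid:
# multiplying the transform by 2*pi*i*xi and inverting recovers f'.
N, L = 256, 1.0
x = np.arange(N) * (L / N)
f = np.exp(np.sin(2 * np.pi * x))                    # smooth, 1-periodic
df_exact = 2 * np.pi * np.cos(2 * np.pi * x) * f

xi = np.fft.fftfreq(N, d=L / N)                      # frequencies in cycles per unit length
df_spectral = np.fft.ifft(2j * np.pi * xi * np.fft.fft(f)).real

print(np.max(np.abs(df_spectral - df_exact)))        # near machine precision (spectral accuracy)
```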
Convolution theorem
The Fourier transform translates between convolution and multiplication of functions. If and are integrable functions with Fourier transforms and respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms and (under other conventions for the definition of the Fourier transform a constant factor may appear).
This means that if:
where denotes the convolution operation, then:
In linear time invariant (LTI) system theory, it is common to interpret as the impulse response of an LTI system with input and output , since substituting the unit impulse for yields . In this case, represents the frequency response of the system.
Conversely, if can be decomposed as the product of two square integrable functions and , then the Fourier transform of is given by the convolution of the respective Fourier transforms and .
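A discrete check of the convolution theorem (a sketch assuming Python with NumPy; the DFT analogue of the convolution integral is circular convolution):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Direct circular convolution: h[k] = sum_m f[m] * g[(k - m) mod N].
h_direct = np.zeros(N)
for k in range(N):
    for m in range(N):
        h_direct[k] += f[m] * g[(k - m) % N]

# Convolution theorem: transform, multiply pointwise, invert.
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(h_direct, h_fft)
```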
Cross-correlation theorem
In an analogous manner, it can be shown that if is the cross-correlation of and :
then the Fourier transform of is:
As a special case, the autocorrelation of function is:
for which
Eigenfunctions
The Fourier transform is a linear transform which has eigenfunctions $\psi$ obeying $\mathcal{F}[\psi] = \lambda \psi$, with $\lambda \in \mathbb{C}$.
A set of eigenfunctions is found by noting that the homogeneous differential equation
leads to eigenfunctions of the Fourier transform as long as the form of the equation remains invariant under Fourier transform. In other words, every solution and its Fourier transform obey the same equation. Assuming uniqueness of the solutions, every solution must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if can be expanded in a power series in which for all terms the same factor of either one of arises from the factors introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable leads to the standard normal distribution.
More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation
with constant and being a non-constant even function remains invariant in form when applying the Fourier transform to both sides of the equation. The simplest example is provided by which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for and are given by the "physicist's" Hermite functions. Equivalently one may use
where are the "probabilist's" Hermite polynomials, defined as
Under this convention for the Fourier transform, we have that
In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on . However, this choice of eigenfunctions is not unique. Because $\mathcal{F}^4 = \mathrm{id}$ there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose as a direct sum of four spaces , , , and where the Fourier transform acts on simply by multiplication by .
Since the complete set of Hermite functions provides a resolution of the identity they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed:
This approach to define the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator via
The operator is the number operator of the quantum harmonic oscillator written as
It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of , and of the conventional continuous Fourier transform for the particular value with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of are the Hermite functions which are therefore also eigenfunctions of
Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform.
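A numerical check of the eigenfunction property (a sketch assuming Python with NumPy). The normalisation below, $h_n(x) = H_n(\sqrt{2\pi}\,x)\, e^{-\pi x^2}$ with physicists' Hermite polynomials $H_n$, is one choice adapted to the ordinary-frequency convention; other sources scale the Hermite functions differently:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Check numerically that h_n(x) = H_n(sqrt(2*pi)*x) * exp(-pi*x^2) satisfies
# F[h_n] = (-i)^n h_n under the ordinary-frequency convention.
x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]

def h(n, t):
    c = np.zeros(n + 1)
    c[n] = 1.0                                # coefficient vector selecting H_n
    return hermval(np.sqrt(2 * np.pi) * t, c) * np.exp(-np.pi * t**2)

xi = 0.7                                      # an arbitrary test frequency
for n in range(4):
    transform = np.sum(h(n, x) * np.exp(-2j * np.pi * x * xi)) * dx
    print(n, transform, (-1j)**n * h(n, xi))  # the two values agree
```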
Connection with the Heisenberg group
The Heisenberg group is a certain group of unitary operators on the Hilbert space of square integrable complex valued functions on the real line, generated by the translations and multiplication by , . These operators do not commute, as their (group) commutator is
which is multiplication by the constant (independent of ) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples , with the group law
Denote the Heisenberg group by . The above procedure describes not only the group structure, but also a standard unitary representation of on a Hilbert space, which we denote by . Define the linear automorphism of by
so that . This can be extended to a unique automorphism of :
According to the Stone–von Neumann theorem, the unitary representations and are unitarily equivalent, so there is a unique intertwiner such that
This operator is the Fourier transform.
Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, , is an intertwiner associated with , and so we have is the reflection of the original function .
Complex domain
The integral for the Fourier transform
can be studied for complex values of its argument . Depending on the properties of , this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of , or something in between.
The Paley–Wiener theorem says that is smooth (i.e., -times differentiable for all positive integers ) and compactly supported if and only if is a holomorphic function for which there exists a constant such that for any integer ,
for some constant . (In this case, is supported on .) This can be expressed by saying that is an entire function which is rapidly decreasing in (for fixed ) and of exponential growth in (uniformly in ).
(If is not smooth, but only , the statement still holds provided .) The space of such functions of a complex variable is called the Paley–Wiener space. This theorem has been generalised to semisimple Lie groups.
If is supported on the half-line , then is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then extends to a holomorphic function on the complex lower half-plane which tends to zero as goes to infinity. The converse is false and it is not known how to characterise the Fourier transform of a causal function.
Laplace transform
The Fourier transform is related to the Laplace transform , which is also used for the solution of differential equations and the analysis of filters.
It may happen that a function for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane.
For example, if is of exponential growth, i.e.,
for some constants , then
convergent for all , is the two-sided Laplace transform of .
The more usual version ("one-sided") of the Laplace transform is
If is also causal and analytic, then: Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case for causal functions, but with the change of variable .
From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where a highly nonlinear phase response is sought, as in reverb.
Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel.
In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis.
Inversion
Still with , if is complex analytic for , then
by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.
Theorem: If for , and for some constants , then
for any .
This theorem implies the Mellin inversion formula for the Laplace transformation,
for any , where is the Laplace transform of .
The hypotheses can be weakened, as in the results of Carleson and Hunt, to being , provided that be of bounded variation in a closed neighborhood of (cf. Dini test), the value of at be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values.
versions of these inversion formulas are also available.
Fourier transform on Euclidean space
The Fourier transform can be defined in any arbitrary number of dimensions . As with the one-dimensional case, there are many conventions. For an integrable function , this article takes the definition:
where and are -dimensional vectors, and is the dot product of the vectors. Alternatively, can be viewed as belonging to the dual vector space , in which case the dot product becomes the contraction of and , usually written as .
All of the basic properties listed above hold for the -dimensional Fourier transform, as do Plancherel's and Parseval's theorem. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds.
Uncertainty principle
Generally speaking, the more concentrated is, the more spread out its Fourier transform must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in , its Fourier transform stretches out in . It is not possible to arbitrarily concentrate both a function and its Fourier transform.
The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form.
Suppose is an integrable and square-integrable function. Without loss of generality, assume that is normalized:
It follows from the Plancherel theorem that is also normalized.
The spread around may be measured by the dispersion about zero defined by
In probability terms, this is the second moment of about zero.
The uncertainty principle states that, if is absolutely continuous and the functions and are square integrable, then
The equality is attained only in the case
where is arbitrary and so that is -normalized. In other words, where is a (normalized) Gaussian function with variance , centered at zero, and its Fourier transform is a Gaussian function with variance .
In fact, this inequality implies that:
for any , .
In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle.
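A numerical check of the equality case (a sketch assuming Python with NumPy; the normalised Gaussian $f(x) = 2^{1/4} e^{-\pi x^2}$ is its own transform, so the product of dispersions can be computed from a single integral):

```python
import numpy as np

# Dispersion product D0(f) * D0(fhat) for f(x) = 2**(1/4) * exp(-pi*x^2),
# which attains equality in the uncertainty inequality: the product is 1/(16*pi^2).
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = 2**0.25 * np.exp(-np.pi * x**2)        # ||f||_2 = 1, and fhat = f

norm = np.sum(np.abs(f)**2) * dx           # ~1
D0 = np.sum(x**2 * np.abs(f)**2) * dx      # dispersion about zero
print(norm, D0 * D0, 1 / (16 * np.pi**2))  # product of dispersions vs. the bound
```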
A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as:
where is the differential entropy of the probability density function :
where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case.
Sine and cosine transforms
Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) by
This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions and can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised):
and
Older literature refers to the two transform functions, the Fourier cosine transform and the Fourier sine transform.
The function can be recovered from the sine and cosine transform using
together with trigonometric identities. This is referred to as Fourier's integral formula.
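For concreteness, one common normalisation of these formulas is the following (recall that the normalisations are not standardised, so other sources differ by constant factors):
$$a(\lambda) = 2 \int_{-\infty}^{\infty} f(t) \cos(2\pi\lambda t)\, dt, \qquad b(\lambda) = 2 \int_{-\infty}^{\infty} f(t) \sin(2\pi\lambda t)\, dt,$$
with the function recovered from the trigonometric integral
$$f(t) = \int_{0}^{\infty} \bigl[a(\lambda) \cos(2\pi\lambda t) + b(\lambda) \sin(2\pi\lambda t)\bigr]\, d\lambda.$$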
Spherical harmonics
Let the set of homogeneous harmonic polynomials of degree on be denoted by . The set consists of the solid spherical harmonics of degree . The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if for some in , then . Let the set be the closure in of linear combinations of functions of the form where is in . The space is then a direct sum of the spaces and the Fourier transform maps each space to itself, and it is possible to characterize the action of the Fourier transform on each space .
Let (with in ), then
where
Here denotes the Bessel function of the first kind with order . When this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases and , allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one.
Restriction problems
In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the restriction of the Fourier transform of an function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in for . It is possible in some cases to define the restriction of a Fourier transform to a set , provided has non-zero curvature. The case when is the unit sphere in is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in is a bounded operator on provided .
One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets indexed by : such as balls of radius centered at the origin, or cubes of side . For a given integrable function , consider the function defined by:
Suppose in addition that . For and , if one takes , then converges to in as tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for . In the case that is taken to be a cube with side length , then convergence still holds. Another natural candidate is the Euclidean ball . In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in . For it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless . In fact, when , this shows that not only may fail to converge to in , but for some functions , is not even an element of .
Fourier transform on function spaces
On Lp spaces
On L1
The definition of the Fourier transform by the integral formula
is valid for Lebesgue integrable functions ; that is, .
The Fourier transform is a bounded operator. This follows from the observation that $\left|\hat f(\xi)\right| \le \int_{-\infty}^{\infty} |f(x)|\, dx = \|f\|_{L^1}$,
which shows that its operator norm is bounded by 1. Indeed, it equals 1, which can be seen, for example, from the transform of the rect function. The image of is a subset of the space of continuous functions that tend to zero at infinity (the Riemann–Lebesgue lemma), although it is not the entire space. Indeed, there is no simple characterization of the image.
On L2
Since compactly supported smooth functions are integrable and dense in , the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in by continuity arguments. The Fourier transform in is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, here meaning that for an $L^2$ function ,
where the limit is taken in the $L^2$ sense.
Many of the properties of the Fourier transform in carry over to , by a suitable limiting argument.
Furthermore, is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any we have
In particular, the image of is itself under the Fourier transform.
On other Lp
The definition of the Fourier transform can be extended to functions in for by decomposing such functions into a fat tail part in $L^2$ plus a fat body part in $L^1$. In each of these spaces, the Fourier transform of a function in is in , where is the Hölder conjugate of (by the Hausdorff–Young inequality). However, except for , the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in for the range requires the study of distributions. In fact, it can be shown that there are functions in with so that the Fourier transform is not defined as a function.
Tempered distributions
One might consider enlarging the domain of the Fourier transform from by considering generalized functions, or distributions. A distribution on is a continuous linear functional on the space of compactly supported smooth functions, equipped with a suitable topology. The strategy is then to consider the action of the Fourier transform on and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map to . In fact the Fourier transform of an element in can not vanish on an open set; see the above discussion on the uncertainty principle. The right space here is the slightly larger space of Schwartz functions. The Fourier transform is an automorphism on the Schwartz space, as a topological vector space, and thus induces an automorphism on its dual, the space of tempered distributions. The tempered distributions include all the integrable functions mentioned above, as well as well-behaved functions of polynomial growth and distributions of compact support.
For the definition of the Fourier transform of a tempered distribution, let and be integrable functions, and let and be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula,
Every integrable function defines (induces) a distribution by the relation
for all Schwartz functions . So it makes sense to define Fourier transform of by
for all Schwartz functions . Extending this to all tempered distributions gives the general definition of the Fourier transform.
Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.
Generalizations
Fourier–Stieltjes transform
The Fourier transform of a finite Borel measure on is given by:
This transform continues to enjoy many of the properties of the Fourier transform of integrable functions. One notable difference is that the Riemann–Lebesgue lemma fails for measures. In the case that , then the formula above reduces to the usual definition for the Fourier transform of . In the case that is the probability distribution associated to a random variable , the Fourier–Stieltjes transform is closely related to the characteristic function, but the typical conventions in probability theory take instead of . In the case when the distribution has a probability density function this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants.
The Fourier transform may be used to give a characterization of measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle.
Furthermore, the Dirac delta function, although not a function, is a finite Borel measure. Its Fourier transform is a constant function (whose specific value depends upon the form of the Fourier transform used).
Locally compact abelian groups
The Fourier transform may be generalized to any locally compact abelian group. A locally compact abelian group is an abelian group that is at the same time a locally compact Hausdorff topological space so that the group operation is continuous. If is a locally compact abelian group, it has a translation invariant measure , called Haar measure. For a locally compact abelian group , the set of irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from to the circle group), the set of characters is itself a locally compact abelian group, called the Pontryagin dual of . For a function in , its Fourier transform is defined by
The Riemann–Lebesgue lemma holds in this case; is a function vanishing at infinity on .
The Fourier transform on is an example; here is a locally compact abelian group, and the Haar measure on can be thought of as the Lebesgue measure on [0,1). Consider the representation of on the complex plane , which is a 1-dimensional complex vector space. There is a group of representations (which are irreducible since is 1-dimensional) where for .
The character of such a representation, that is the trace of for each and , is itself. In the case of a representation of a finite group, the character table of the group consists of rows of vectors such that each row is the character of one irreducible representation of , and these vectors form an orthonormal basis of the space of class functions that map from to by Schur's lemma. Now the group is no longer finite but still compact, and it preserves the orthonormality of the character table. Each row of the table is the function of and the inner product between two class functions (all functions being class functions since is abelian) is defined as with the normalizing factor . The sequence is an orthonormal basis of the space of class functions .
For any representation of a finite group , can be expressed as the span ( are the irreps of ), such that . Similarly for and , . The Pontryagin dual is and for , is its Fourier transform for .
Gelfand transform
The Fourier transform is also a special case of Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above.
Given an abelian locally compact Hausdorff topological group , as before we consider space , defined using a Haar measure. With convolution as multiplication, is an abelian Banach algebra. It also has an involution * given by
Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra of . (Any C*-norm on is bounded by the norm, therefore their supremum exists.)
Given any abelian C*-algebra , the Gelfand transform gives an isomorphism between and , where is the set of multiplicative linear functionals, i.e. one-dimensional representations, on with the weak-* topology. The map is simply given by
It turns out that the multiplicative linear functionals of , after suitable identification, are exactly the characters of , and the Gelfand transform, when restricted to the dense subset is the Fourier–Pontryagin transform.
Compact non-abelian groups
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis.
Let be a compact Hausdorff topological group. Let denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation on the Hilbert space of finite dimension for each . If is a finite Borel measure on , then the Fourier–Stieltjes transform of is the operator on defined by
where is the complex-conjugate representation of acting on . If is absolutely continuous with respect to the left-invariant probability measure on , represented as
for some , one identifies the Fourier transform of with the Fourier–Stieltjes transform of .
The mapping
defines an isomorphism between the Banach space of finite Borel measures (see rca space) and a closed subspace of the Banach space consisting of all sequences indexed by of (bounded) linear operators for which the norm
is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of . Multiplication on is given by convolution of measures and the involution * defined by
and has a natural -algebra structure as Hilbert space operators.
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if , then
where the summation is understood as convergent in the sense.
The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.
Alternatives
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, fractional Fourier transform, Synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
Applications
Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics.
Analysis of differential equations
Perhaps the most important use of the Fourier transformation is to solve partial differential equations.
Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is
The example we will give, a slightly more difficult one, is the wave equation in one dimension,
As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions"
Here, and are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution.
It is easier to find the Fourier transform of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After is determined, we can apply the inverse Fourier transformation to find .
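As a sketch of this strategy on a computer (assuming Python with NumPy), here is the simpler periodic heat equation $u_t = u_{xx}$, used as a stand-in for the wave equation treated in this section: transform in $x$, solve the resulting ordinary differential equation in $t$ exactly, and transform back.

```python
import numpy as np

# Periodic heat equation u_t = u_xx, solved spectrally: each Fourier mode
# exp(2*pi*i*xi*x) decays as exp(-(2*pi*xi)^2 * t).
N, L = 256, 2 * np.pi
x = np.arange(N) * (L / N)
u0 = np.exp(np.cos(x))                          # initial temperature profile (illustrative choice)

xi = np.fft.fftfreq(N, d=L / N)                 # frequencies in cycles per unit length
t = 0.1
decay = np.exp(-(2 * np.pi * xi)**2 * t)        # exact evolution of each mode up to time t

u = np.fft.ifft(np.fft.fft(u0) * decay).real    # solution at time t
print(u.min(), u.max())                         # the profile has smoothed toward its mean
```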
Fourier's method is as follows. First, note that any function of the forms
satisfies the wave equation. These are called the elementary solutions.
Second, note that therefore any integral
satisfies the wave equation for arbitrary . This integral may be interpreted as a continuous linear combination of solutions for the linear equation.
Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of and in the variable .
The third step is to examine how to find the specific unknown coefficient functions and that will lead to satisfying the boundary conditions. We are interested in the values of these solutions at . So we will set . Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable ) of both sides and obtain
and
Similarly, taking the derivative of with respect to and then applying the Fourier sine and cosine transformations yields
and
These are four linear equations for the four unknowns and , in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found.
In summary, we chose a set of elementary solutions, parametrized by , of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter . But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions and . But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions and in terms of the given boundary conditions and .
From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both and rather than operate as Fourier did, who only transformed in the spatial variables. Note that must be considered in the sense of a distribution since is not going to be : as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in to multiplication by and differentiation with respect to to multiplication by where is the frequency. Then the wave equation becomes an algebraic equation in :
This is equivalent to requiring unless . Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic .
We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line plus distributions on the line as follows: if is any test function,
where , and , are distributions of one variable.
Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put , which is clearly of polynomial growth):
and
Now, as before, applying the one-variable Fourier transformation in the variable to these functions of yields two equations in the two unknown distributions (which can be taken to be ordinary functions if the boundary conditions are or ).
From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used.
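As an illustration of the Fourier approach in a discrete setting, the following sketch (Python) evolves the one-dimensional wave equation by transforming the initial data, multiplying each frequency component by the appropriate trigonometric factor, and inverting. The Gaussian initial displacement, zero initial velocity, and periodic boundary conditions are assumptions chosen only for illustration; this is a numerical analogue of the procedure described above rather than a reproduction of Fourier's boundary-value treatment.

```python
# Minimal sketch: solve u_tt = c^2 u_xx with periodic boundary conditions by
# Fourier-transforming the initial data, evolving each frequency analytically,
# and inverting.  Initial conditions are illustrative assumptions.
import numpy as np

N, L, c = 512, 10.0, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

u0 = np.exp(-((x - L / 2) ** 2) / 0.1)   # initial displacement (Gaussian bump)
v0 = np.zeros_like(x)                    # initial velocity

xi = np.fft.fftfreq(N, d=dx)             # spatial frequencies, cycles per unit length
u0_hat = np.fft.fft(u0)
v0_hat = np.fft.fft(v0)

def u_at(t):
    """Each Fourier mode oscillates at angular frequency w = 2*pi*c*xi."""
    w = 2 * np.pi * c * xi
    # sin(w t)/w tends to t as w -> 0, so patch the zero-frequency mode.
    sinc_term = np.where(w != 0, np.sin(w * t) / np.where(w != 0, w, 1.0), t)
    u_hat = u0_hat * np.cos(w * t) + v0_hat * sinc_term
    return np.real(np.fft.ifft(u_hat))

u_half = u_at(2.5)   # the initial bump splits into two traveling pulses
```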
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well.
Fourier-transform spectroscopy
The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.
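A minimal numerical sketch of this relationship is given below: an exponentially decaying complex sinusoid, standing in for an FID, is Fourier-transformed, and its magnitude spectrum shows a Lorentzian-shaped peak. The sampling rate, resonance frequency, and decay constant are illustrative values, not settings of any particular instrument.

```python
# Sketch: an exponentially decaying "free induction decay" in the time domain
# Fourier-transforms to a Lorentzian-shaped peak in the frequency domain.
# f0 and T2 are illustrative values, not real instrument settings.
import numpy as np

fs, T = 1000.0, 4.0                     # sampling rate (Hz) and record length (s)
t = np.arange(0, T, 1 / fs)
f0, T2 = 50.0, 0.2                      # resonance frequency (Hz), decay constant (s)

fid = np.exp(2j * np.pi * f0 * t) * np.exp(-t / T2)

spectrum = np.fft.fftshift(np.fft.fft(fid)) / fs
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), d=1 / fs))

# The magnitude peaks at f0 with half-width about 1/(pi*T2), the Lorentzian linewidth.
peak_freq = freqs[np.argmax(np.abs(spectrum))]
```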
Quantum mechanics
The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of or by a function of but not by a function of both variables. The variable is called the conjugate variable to . In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both and simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a -axis and a -axis called the phase space.
In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the -axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the -axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that
or, equivalently,
Physically realisable states are , and so by the Plancherel theorem, their Fourier transforms are also . (Note that since is in units of distance and is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.)
Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberg uncertainty principle.
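The following sketch illustrates this passage between representations numerically for a Gaussian position-space wave function, using one common convention for the momentum-space transform (sign and normalization conventions vary between texts); the width parameter is an arbitrary illustrative value. The check at the end confirms that total probability is preserved, as the Plancherel theorem requires.

```python
# Sketch: transform a Gaussian position-space wave function into its
# momentum-space representation and check that total probability is preserved.
# Convention assumed here:
#   phi(p) = (2*pi*hbar)**-0.5 * integral psi(x) * exp(-i*p*x/hbar) dx
import numpy as np

hbar = 1.0545718e-34          # J*s
sigma = 1e-10                 # position spread, about 1 angstrom (illustrative)

x = np.linspace(-10 * sigma, 10 * sigma, 4000)
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

p = np.linspace(-10 * hbar / sigma, 10 * hbar / sigma, 400)
phase = np.exp(-1j * np.outer(p, x) / hbar)            # shape (len(p), len(x))
phi = np.trapz(phase * psi, x, axis=1) / np.sqrt(2 * np.pi * hbar)

norm_x = np.trapz(np.abs(psi) ** 2, x)                 # ~1
norm_p = np.trapz(np.abs(phi) ** 2, p)                 # ~1, as Plancherel requires
```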
The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. In non-relativistic quantum mechanics, Schrödinger's equation for a time-varying wave function in one-dimension, not subject to external forces, is
This is the same as the heat equation except for the presence of the imaginary unit . Fourier methods can be used to solve this equation.
In the presence of a potential, given by the potential energy function , the equation becomes
The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of given its values for . Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function is not of much practical interest: it is the stationary states that are most important.
In relativistic quantum mechanics, Schrödinger's equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units,
This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions.
Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform .
Signal processing
The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function.
The autocorrelation function of a function is defined by
This function is a function of the time-lag elapsing between the values of to be correlated.
For most functions that occur in practice, is a bounded even function of the time-lag and for typical noisy signals it turns out to be uniformly continuous with a maximum at .
The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of separated by a time lag. This is a way of searching for the correlation of with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if represents the temperature at time , one expects a strong correlation with the temperature at a time lag of 24 hours.
It possesses a Fourier transform,
This Fourier transform is called the power spectral density function of . (Unless all periodic components are first filtered out from , this integral will diverge, but it is easy to filter out such periodicities.)
The power spectrum, as indicated by this density function , measures the amount of variance contributed to the data by the frequency . In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA).
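A small sketch of this procedure follows, with an assumed test signal (a sinusoid buried in noise, parameters chosen only for illustration): the sample autocovariance is computed and Fourier-transformed to give a power-spectrum estimate whose peak identifies the frequency carrying most of the variance.

```python
# Sketch: estimate a power spectrum by Fourier-transforming the sample
# autocovariance (the Wiener-Khinchin relation).  The test signal and its
# parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 500.0, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 40.0 * t) + rng.normal(scale=1.0, size=n)
x = x - x.mean()

# Biased sample autocovariance at lags -(n-1) .. (n-1)
acov = np.correlate(x, x, mode="full") / n

# The power spectrum is the Fourier transform of the autocovariance
# (this equals the periodogram of x up to discretisation).
psd = np.real(np.fft.fft(np.fft.ifftshift(acov))) / fs
freqs = np.fft.fftfreq(len(acov), d=1 / fs)

keep = freqs >= 0
dominant = freqs[keep][np.argmax(psd[keep])]   # ~40 Hz carries most of the variance
```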
Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data.
The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out.
Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool.
Other notations
Other common notations for include:
In the sciences and engineering it is also common to make substitutions like these:
So the transform pair can become
A disadvantage of the capital letter notation is when expressing a transform such as or which become the more awkward and
In some contexts, such as particle physics, the same symbol may be used both for a function and for its Fourier transform, with the two distinguished only by their argument: would refer to the Fourier transform because of the momentum argument, while would refer to the original function because of the positional argument. Although tildes may be used as in to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as , so care must be taken. Similarly, often denotes the Hilbert transform of .
The interpretation of the complex function may be aided by expressing it in polar coordinate form
in terms of the two real functions and where:
is the amplitude and
is the phase (see arg function).
Then the inverse transform can be written:
which is a recombination of all the frequency components of . Each component is a complex sinusoid of the form whose amplitude is and whose initial phase angle (at ) is .
The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted and is used to denote the Fourier transform of the function . This mapping is linear, which means that can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function ) can be used to write instead of . Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value for its variable, and this is denoted either as or as . Notice that in the former case, it is implicitly understood that is applied first to and then the resulting function is evaluated at , not the other way around.
In mathematics and various applied sciences, it is often necessary to distinguish between a function and the value of when its variable equals , denoted . This means that a notation like formally can be interpreted as the Fourier transform of the values of at . Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example,
is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or
is used to express the shift property of the Fourier transform.
Notice that the last example is only correct under the assumption that the transformed function is a function of , not of .
As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined
As in the case of the "non-unitary angular frequency" convention above, the factor of 2 appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent.
Computation methods
The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, and functions of a discrete variable (i.e. ordered pairs of and values). For discrete-valued the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ( or ). When the sinusoids are harmonically related (i.e. when the -values are spaced at integer multiples of an interval), the transform is called the discrete-time Fourier transform (DTFT).
Discrete Fourier transforms and fast Fourier transforms
Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at . The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm.
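The following sketch makes the relationship explicit: the FFT returns the same values as a direct evaluation of the DFT sum, only computed far more efficiently. The sequence length and test data are arbitrary.

```python
# Sketch: the FFT computes exactly the same sums as the direct O(N^2) DFT,
# only faster.  Verified here on a small random sequence.
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.normal(size=N)

k = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(k, k) / N)   # DFT matrix
X_direct = W @ x                               # O(N^2) evaluation of the DFT sum
X_fft = np.fft.fft(x)                          # O(N log N) fast Fourier transform

assert np.allclose(X_direct, X_fft)
```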
Analytic integration of closed-form functions
Tables of closed-form Fourier transforms, such as and , are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ( or ). When mathematically possible, this provides a transform for a continuum of frequency values.
Many computer algebra systems such as Matlab and Mathematica that are capable of symbolic integration are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of one might enter the command into Wolfram Alpha.
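For instance, SymPy (one such system) provides a fourier_transform function; the sketch below assumes its default convention, with the factor e^(−2πixk) in the integrand and no prefactor. Command names and conventions differ between computer algebra systems, so this is an example of the idea rather than a statement about the products named above.

```python
# Sketch of symbolic evaluation with SymPy, which (in recent versions) uses
# the convention  F(k) = Integral f(x) * exp(-2*pi*I*x*k) dx.
import sympy as sp

x, k = sp.symbols('x k', real=True)
f = sp.exp(-sp.pi * x**2)

F = sp.fourier_transform(f, x, k)
print(F)   # exp(-pi*k**2): the Gaussian exp(-pi*x**2) is its own transform
```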
Numerical integration of closed-form continuous functions
Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired. The numerical integration approach works on a much broader class of functions than the analytic approach.
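A sketch of this approach: the transform integral (here in the convention with e^(−2πixξ)) is evaluated at a few chosen frequencies by adaptive quadrature of its real and imaginary parts, and compared against a known closed-form transform. The test function and frequencies are arbitrary illustrative choices.

```python
# Sketch: evaluate the Fourier integral numerically at chosen frequencies,
# convention F(xi) = Integral f(x) exp(-2*pi*i*x*xi) dx, using adaptive
# quadrature on the real and imaginary parts separately.  The test function
# exp(-pi*x**2) has the known transform exp(-pi*xi**2).
import numpy as np
from scipy.integrate import quad

def f(x):
    return np.exp(-np.pi * x**2)

def fourier_at(xi):
    re, _ = quad(lambda x: f(x) * np.cos(2 * np.pi * x * xi), -np.inf, np.inf)
    im, _ = quad(lambda x: -f(x) * np.sin(2 * np.pi * x * xi), -np.inf, np.inf)
    return re + 1j * im

for xi in (0.0, 0.5, 1.0):
    approx = fourier_at(xi)
    exact = np.exp(-np.pi * xi**2)
    assert abs(approx - exact) < 1e-6
```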
Numerical integration of a series of ordered pairs
If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation.
Tables of important Fourier transforms
The following tables record some closed-form Fourier transforms. For functions and denote their Fourier transforms by and . Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.
Functional relationships, one-dimensional
The Fourier transforms in this table may be found in or .
Square-integrable functions, one-dimensional
The Fourier transforms in this table may be found in , , or .
Distributions, one-dimensional
The Fourier transforms in this table may be found in or .
Two-dimensional functions
Formulas for general -dimensional functions
See also
Analog signal processing
Beevers–Lipson strip
Constant-Q transform
Discrete Fourier transform
*DFT matrix
Fast Fourier transform
Fourier integral operator
Fourier inversion theorem
Fourier multiplier
Fourier series
Fourier sine transform
Fourier–Deligne transform
Fourier–Mukai transform
Fractional Fourier transform
Indirect Fourier transform
Integral transform
Hankel transform
Hartley transform
Laplace transform
Least-squares spectral analysis
Linear canonical transform
List of Fourier-related transforms
Mellin transform
Multidimensional transform
NGC 4622, especially the image NGC 4622 Fourier transform .
Nonlocal operator
Quantum Fourier transform
Quadratic Fourier transform
Short-time Fourier transform
Spectral density
Spectral density estimation
Symbolic integration
Time stretch dispersive Fourier transform
Transform (mathematics)
Notes
Citations
References
(translated from French)
(translated from Russian)
(translated from Russian)
(translated from Russian)
(translated from Russian)
; also available at Fundamentals of Music Processing, Section 2.1, pages 40–56
External links
Encyclopedia of Mathematics
Fourier Transform in Crystallography
Fourier analysis
Integral transforms
Unitary operators
Joseph Fourier
Mathematical physics | 0.763878 | 0.999748 | 0.763686 |
Hardware-in-the-loop simulation | Hardware-in-the-loop (HIL) simulation, also known by various acronyms such as HiL, HITL, and HWIL, is a technique that is used in the development and testing of complex real-time embedded systems. HIL simulation provides an effective testing platform by adding the complexity of the process-actuator system, known as a plant, to the test platform. The complexity of the plant under control is included in testing and development by adding a mathematical representation of all related dynamic systems. These mathematical representations are referred to as the "plant simulation". The embedded system to be tested interacts with this plant simulation.
How HIL works
HIL simulation must include electrical emulation of sensors and actuators. These electrical emulations act as the interface between the plant simulation and the embedded system under test. The value of each electrically emulated sensor is controlled by the plant simulation and is read by the embedded system under test (feedback). Likewise, the embedded system under test implements its control algorithms by outputting actuator control signals. Changes in the control signals result in changes to variable values in the plant simulation.
For example, a HIL simulation platform for the development of automotive anti-lock braking systems may have mathematical representations for each of the following subsystems in the plant simulation:
Vehicle dynamics, such as suspension, wheels, tires, roll, pitch and yaw;
Dynamics of the brake system's hydraulic components;
Road characteristics.
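The following toy sketch mimics the closed loop described above in software only: a small first-order thermal model stands in for the plant simulation, and a simple thermostat stands in for the embedded controller under test. All models, parameters, and the pass/fail criterion are invented for illustration; in a real HIL rig the controller is physical hardware, and sensor and actuator values pass through electrical I/O rather than function calls.

```python
# Toy sketch of a HIL-style closed loop.  A first-order plant model stands in
# for the real-time plant simulation and a bang-bang controller stands in for
# the embedded controller under test.  All values are illustrative.

class PlantSimulation:
    """Very small plant model: temperature rises while the heater is on."""
    def __init__(self, temp=20.0):
        self.temp = temp          # value fed back as the emulated sensor

    def step(self, heater_on, dt=0.1):
        heat_in = 5.0 if heater_on else 0.0
        self.temp += (heat_in - 0.1 * (self.temp - 20.0)) * dt

class ControllerUnderTest:
    """Stand-in for the embedded system under test (simple thermostat)."""
    def __init__(self, setpoint=60.0):
        self.setpoint = setpoint

    def control(self, sensor_value):
        return sensor_value < self.setpoint   # actuator command

plant = PlantSimulation()
ecu = ControllerUnderTest()

for _ in range(1000):
    command = ecu.control(plant.temp)   # controller reads the emulated sensor
    plant.step(command)                 # actuator command drives the plant model

assert abs(plant.temp - ecu.setpoint) < 5.0   # basic pass/fail test criterion
```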
Uses
In many cases, the most effective way to develop an embedded system is to connect the embedded system to the real plant. In other cases, HIL simulation is more efficient. The metric of development and testing efficiency is typically a formula that includes the following factors:
1. Cost
2. Duration
3. Safety
4. Feasibility
The cost of the approach should be a measure of the cost of all tools and effort. The duration of development and testing affects the time-to-market for a planned product. Safety factor and development duration are typically equated to a cost measure. Specific conditions that warrant the use of HIL simulation include the following:
Enhancing the quality of testing
Tight development schedules
High-burden-rate plant
Early process human factor development
Enhancing the quality of testing
Use of HIL simulation enhances the quality of testing by broadening its scope.
Ideally, an embedded system would be tested against the real plant, but most of the time the real plant itself imposes limitations in terms of the scope of the testing. For example, testing an engine control unit as a real plant can create the following dangerous conditions for the test engineer:
Testing at or beyond the limits of certain ECU parameters (e.g., engine parameters)
Testing and verification of the system at failure conditions
In the above-mentioned test scenarios, HIL provides efficient control and a safe environment in which the test or application engineer can focus on the functionality of the controller.
Tight development schedules
The tight development schedules associated with most new automotive, aerospace and defense programs do not allow embedded system testing to wait for a prototype to be available. In fact, most new development schedules assume that HIL simulation will be used in parallel with the development of the plant. For example, by the time a new automobile engine prototype is made available for control system testing, 95% of the engine controller testing will have been completed using HIL simulation.
The aerospace and defense industries are even more likely to impose a tight development schedule. Aircraft and land vehicle development programs are using desktop and HIL simulation to perform design, test, and integration in parallel.
High-burden-rate plant
In many cases, the plant is more expensive than a high fidelity, real-time simulator and therefore has a higher-burden rate. Therefore, it is more economical to develop and test while connected to a HIL simulator than the real plant. For jet engine manufacturers, HIL simulation is a fundamental part of engine development. The development of Full Authority Digital Engine Controllers (FADEC) for aircraft jet engines is an extreme example of a high-burden-rate plant. Each jet engine can cost millions of dollars. In contrast, a HIL simulator designed to test a jet engine manufacturer's complete line of engines may demand merely a tenth of the cost of a single engine.
Early process human factors development
HIL simulation is a key step in the process of developing human factors, a method of ensuring usability and system consistency using software ergonomics, human-factors research and design. For real-time technology, human-factors development is the task of collecting usability data from man-in-the-loop testing for components that will have a human interface.
An example of usability testing is the development of fly-by-wire flight controls. Fly-by-wire flight controls eliminate the mechanical linkages between the flight controls and the aircraft control surfaces. Sensors communicate the demanded flight response and then apply realistic force feedback to the fly-by-wire controls using motors. The behavior of fly-by-wire flight controls is defined by control algorithms. Changes in algorithm parameters can translate into more or less flight response from a given flight control input. Likewise, changes in the algorithm parameters can also translate into more or less force feedback for a given flight control input. The “correct” parameter values are a subjective measure. Therefore, it is important to get input from numerous man-in-the-loop tests to obtain optimal parameter values.
In the case of fly-by-wire flight controls development, HIL simulation is used to simulate human factors. The flight simulator includes plant simulations of aerodynamics, engine thrust, environmental conditions, flight control dynamics and more. Prototype fly-by-wire flight controls are connected to the simulator and test pilots evaluate flight performance given various algorithm parameters.
The alternative to HIL simulation for human factors and usability development is to place prototype flight controls in early aircraft prototypes and test for usability during flight test. This approach fares poorly when measured against the four factors listed above.
Cost: A flight test is extremely costly and therefore the goal is to minimize any development occurring with flight test.
Duration: Developing flight controls with flight test will extend the duration of an aircraft development program. Using HIL simulation, the flight controls may be developed well before a real aircraft is available.
Safety: Using flight test for the development of critical components such as flight controls has a major safety implication. Should errors be present in the design of the prototype flight controls, the result could be a crash landing.
Feasibility: It may not be possible to explore certain critical timings (e.g. sequences of user actions with millisecond precision) with real users operating a plant. Likewise for problematical points in parameter space that may not be easily reachable with a real plant but must be tested against the hardware in question.
Use in various disciplines
Automotive systems
In the context of automotive applications "Hardware-in-the-loop simulation systems provide such a virtual vehicle for systems validation and verification." Since in-vehicle driving tests for evaluating performance and diagnostic functionalities of Engine Management Systems are often time-consuming, expensive and not reproducible, HIL simulators allow developers to validate new hardware and software automotive solutions, respecting quality requirements and time-to-market restrictions. In a typical HIL Simulator, a dedicated real-time processor executes mathematical models which emulate engine dynamics. In addition, an I/O unit allows the connection of vehicle sensors and actuators (which usually present a high degree of non-linearity). Finally, the Electronic Control Unit (ECU) under test is connected to the system and stimulated by a set of vehicle maneuvers executed by the simulator. At this point, HIL simulation also offers a high degree of repeatability during the testing phase.
In the literature, several HIL-specific applications are reported, and simplified HIL simulators have been built for specific purposes. When testing a new ECU software release, for example, experiments can be performed in open loop, and therefore several engine dynamic models are no longer required. The strategy is restricted to the analysis of ECU outputs when excited by controlled inputs. In this case, a Micro HIL system (MHIL) offers a simpler and more economical solution. Since the computational burden of model processing is removed, a full-size HIL system can be reduced to a portable device composed of a signal generator, an I/O board, and a console containing the actuators (external loads) to be connected to the ECU.
Radar
HIL simulation for radar systems has evolved from radar jamming. Digital Radio Frequency Memory (DRFM) systems are typically used to create false targets to confuse the radar in the battlefield, but these same systems can simulate a target in the laboratory. This configuration allows for the testing and evaluation of the radar system, reducing the need for flight trials (for airborne radar systems) and field tests (for search or tracking radars), and can give an early indication to the susceptibility of the radar to electronic warfare (EW) techniques.
Robotics
Techniques for HIL simulation have been recently applied to the automatic generation of complex controllers for robots. A robot uses its own real hardware to extract sensation and actuation data, then uses this data to infer a physical simulation (self-model) containing aspects such as its own morphology as well as characteristics of the environment. Algorithms such as Back-to-Reality (BTR) and Estimation Exploration (EEA) have been proposed in this context.
Power systems
In recent years, HIL for power systems has been used for verifying the stability, operation, and fault tolerance of large-scale electrical grids. Current-generation real-time processing platforms have the capability to model large-scale power systems in real-time. This includes systems with more than 10,000 buses with associated generators, loads, power-factor correction devices, and network interconnections. These types of simulation platforms enable the evaluation and testing of large-scale power systems in a realistic emulated environment. Moreover, HIL for power systems has been used for investigating the integration of distributed resources, next-generation SCADA systems and power management units, and static synchronous compensator devices.
Offshore systems
In offshore and marine engineering, control systems and mechanical structures are generally designed in parallel. Testing the control systems is only possible after integration. As a result, many errors are found that have to be resolved during commissioning, with the attendant risks of personal injury, equipment damage, and delays. To reduce these errors, HIL simulation is gaining widespread attention. This is reflected by the adoption of HIL simulation in the Det Norske Veritas rules.
References
External links
Introduction to Hardware-in-the-Loop Simulation.
Embedded systems | 0.769727 | 0.992151 | 0.763686 |
Rapid plant movement | Rapid plant movement encompasses movement in plant structures occurring over a very short period, usually under one second. For example, the Venus flytrap closes its trap in about 100 milliseconds. The traps of Utricularia are much faster, closing in about 0.5 milliseconds. The dogwood bunchberry's flower opens its petals and fires pollen in less than 0.5 milliseconds. The record is currently held by the white mulberry tree, with flower movement taking 25 microseconds, as pollen is catapulted from the stamens at velocities in excess of half the speed of sound—near the theoretical physical limits for movements in plants.
These rapid plant movements differ from the more common, but much slower "growth-movements" of plants, called tropisms. Tropisms encompass movements that lead to physical, permanent alterations of the plant while rapid plant movements are usually reversible or occur over a shorter span of time.
A variety of mechanisms are employed by plants in order to achieve these fast movements. Extremely fast movements, such as the explosive spore dispersal of Sphagnum mosses, may involve increasing internal pressure via dehydration, causing a sudden propulsion of spores, or the rapid opening of the "flower" triggered by insect pollination. Fast movement can also be demonstrated in predatory plants, where the mechanical stimulation of insect movement creates an electrical action potential and a release of elastic energy within the plant tissues. This release can be seen in the closing of a Venus flytrap, the curling of sundew leaves, and in the trapdoor action and suction of bladderworts. Slower movement, such as the folding of Mimosa pudica leaves, may depend on reversible but drastic or uneven changes in water pressure in the plant tissues. This process is controlled by the fluctuation of ions in and out of the cell, and the osmotic response of water to the ion flux.
In 1880 Charles Darwin published The Power of Movement in Plants, his second-to-last work before his death.
Plants that capture and consume prey
Venus flytrap (Dionaea muscipula)
Waterwheel plant (Aldrovanda vesiculosa)
Bladderwort (Utricularia)
Certain varieties of sundew (Drosera)
Plants that move leaves and leaflets
Plants that are able to rapidly move their leaves or their leaflets in response to mechanical stimulation such as touch (thigmonasty):
Aeschynomene:
Large leaf sensitive plant (Aeschynomene fluitans)
Aeschynomene americana
Aeschynomene deightonii
Starfruit (Averrhoa carambola)
Biophytum:
Biophytum abyssinicum
Biophytum helenae
Biophytum petersianum
Biophytum reinwardtii
Biophytum sensitivum
Chamaecrista:
Partridge pea (Chamaecrista fasciculata)
Sensitive partridge pea (Chamaecrista nictitans)
Chamaecrista mimosoides L.
Mimosa:
Giant false sensitive plant (Mimosa diplotricha)
Catclaw brier (Mimosa nuttallii)
Giant sensitive plant (Mimosa pigra)
Mimosa polyantha
Mimosa polycarpa var. spegazzinii
Mimosa polydactyla
Sensitive plant (Mimosa pudica)
Roemer sensitive briar (Mimosa roemeriana)
Eastern sensitive plant, sensitive briar (Mimosa rupertiana)
Mimosa uruguensis
Neptunia:
Yellow neptunia (Neptunia lutea)
Sensitive neptunia (Neptunia oleracea)
Neptunia plena
Neptunia gracilis
Senna alata
Plants that move their leaves or leaflets at speeds rapid enough to be perceivable with the naked eye:
Telegraph plant (Codariocalyx motorius)
Plants that spread seeds or pollen by rapid movement
Squirting cucumber (Ecballium elaterium)
Cardamine hirsuta and other Cardamine spp. have seed pods which explode when touched.
Impatiens (Impatiens)
Sandbox tree
Triggerplant (all Stylidium species)
Canadian dwarf cornel (aka dogwood bunchberry, Cornus canadensis)
White mulberry (Morus alba)
Orchids (all genus Catasetum)
Dwarf mistletoe (Arceuthobium)
Witch-hazel (Hamamelis)
Some Fabaceae have beans that twist as they dry out, putting tension on the seam, which at some point will split suddenly and violently, flinging the seeds metres from the maternal plant.
Marantaceae
Minnieroot (Ruellia tuberosa)
Peyote (Lophophora williamsii) stamens move in response to touch
See also
Kinesis (biology)
Nastic movements
Plant perception (physiology)
Taxis
Thigmonasty
Tropism
Plant bioacoustics
References
Plant physiology
Plant cognition | 0.767695 | 0.994777 | 0.763686 |
Drift velocity | In physics, drift velocity is the average velocity attained by charged particles, such as electrons, in a material due to an electric field. In general, an electron in a conductor will propagate randomly at the Fermi velocity, resulting in an average velocity of zero. Applying an electric field adds to this random motion a small net flow in one direction; this is the drift.
Drift velocity is proportional to current. In a resistive material, it is also proportional to the magnitude of an external electric field. Thus Ohm's law can be explained in terms of drift velocity. The law's most elementary expression is:
where is drift velocity, is the material's electron mobility, and is the electric field. In the MKS system, drift velocity has units of m/s, electron mobility, m2/(V·s), and electric field, V/m.
When a potential difference is applied across a conductor, free electrons gain velocity in the direction opposite to the electric field between successive collisions (and lose velocity when traveling in the direction of the field), thus acquiring a velocity component in that direction in addition to their random thermal velocity. As a result, there is a definite small drift velocity of electrons, which is superimposed on the random motion of free electrons. Due to this drift velocity, there is a net flow of electrons opposite to the direction of the field. The drift speed of electrons is generally on the order of 10^−3 meters per second, whereas the thermal speed is on the order of 10^6 meters per second.
Experimental measure
The formula for evaluating the drift velocity of charge carriers in a material of constant cross-sectional area is given by:
where is the drift velocity of electrons, is the current density flowing through the material, is the charge-carrier number density, and is the charge on the charge-carrier.
This can also be written as:
But the current density and drift velocity, j and u, are in fact vectors, so this relationship is often written as:
where
is the charge density (SI unit: coulombs per cubic metre).
In terms of the basic properties of the right-cylindrical current-carrying metallic ohmic conductor, where the charge-carriers are electrons, this expression can be rewritten as:
where
is again the drift velocity of the electrons, in m⋅s−1
is the molecular mass of the metal, in kg
is the electric conductivity of the medium at the temperature considered, in S/m.
is the voltage applied across the conductor, in V
is the density (mass per unit volume) of the conductor, in kg⋅m−3
is the elementary charge, in C
is the number of free electrons per atom
is the length of the conductor, in m
Numerical example
Electricity is most commonly conducted through copper wires. Copper has a density of and an atomic weight of , so there are . In one mole of any element, there are atoms (the Avogadro number). Therefore, in of copper, there are about atoms. Copper has one free electron per atom, so is equal to electrons per cubic metre.
Assume a current , and a wire of diameter (radius = ). This wire has a cross-sectional area of A = πr². The elementary charge of an electron is . The drift velocity therefore can be calculated:
Dimensional analysis:
Therefore, in this wire, the electrons are flowing at the rate of . At 60 Hz alternating current, this means that, within half a cycle (1/120th sec.), on average the electrons drift less than 0.2 μm. In context, at one ampere around electrons will flow across the contact point twice per cycle. But out of around movable electrons per meter of wire, this is an insignificant fraction.
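The calculation can be reproduced in a few lines. The current (1 A), the wire diameter (2 mm), and the one-free-electron-per-atom assumption are illustrative choices, and the copper constants are rounded handbook values; they are not necessarily the exact figures used above.

```python
# Drift-velocity calculation of the kind sketched above, u = I / (n*A*q),
# with assumed illustrative values: a 1 A current in a 2 mm diameter copper
# wire, rounded handbook values for copper, and one free electron per atom.
import math

I = 1.0                      # current, A (assumed for illustration)
d = 2e-3                     # wire diameter, m (assumed for illustration)
rho = 8.94e3                 # copper density, kg/m^3
M = 63.546e-3                # copper molar mass, kg/mol
N_A = 6.022e23               # Avogadro constant, 1/mol
q = 1.602e-19                # elementary charge, C

n = rho / M * N_A            # free-electron number density, ~8.5e28 per m^3
A = math.pi * (d / 2) ** 2   # cross-sectional area, m^2

u = I / (n * A * q)          # drift velocity, m/s
print(f"n = {n:.2e} m^-3, u = {u:.2e} m/s")   # u is on the order of 10^-5 m/s
```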
By comparison, the Fermi flow velocity of these electrons (which, at room temperature, can be thought of as their approximate velocity in the absence of electric current) is around .
See also
Flow velocity
Electron mobility
Speed of electricity
Drift chamber
Guiding center
References
External links
Ohm's Law: Microscopic View at Hyperphysics
Electric current
Charge carriers | 0.767762 | 0.994685 | 0.763681 |
Lichtenberg figure | A Lichtenberg figure (German Lichtenberg-Figur), or Lichtenberg dust figure, is a branching electric discharge that sometimes appears on the surface or in the interior of insulating materials. Lichtenberg figures are often associated with the progressive deterioration of high voltage components and equipment. The study of planar Lichtenberg figures along insulating surfaces and 3D electrical trees within insulating materials often provides engineers with valuable insights for improving the long-term reliability of high-voltage equipment. Lichtenberg figures are now known to occur on or within solids, liquids, and gases during electrical breakdown.
Lichtenberg figures are natural phenomena which exhibit fractal properties.
History
Lichtenberg figures are named after the German physicist Georg Christoph Lichtenberg, who originally discovered and studied them. When they were first discovered, it was thought that their characteristic shapes might help to reveal the nature of positive and negative electric "fluids".
In 1777, Lichtenberg built a large electrophorus to generate high voltage static electricity through induction. After discharging a high voltage point to the surface of an insulator, he recorded the resulting radial patterns by sprinkling various powdered materials onto the surface. By then pressing blank sheets of paper onto these patterns, Lichtenberg was able to transfer and record these images, thereby discovering the basic principle of modern xerography.
This discovery was also the forerunner of the modern day science of plasma physics. Although Lichtenberg only studied two-dimensional (2D) figures, modern high voltage researchers study 2D and 3D figures (electrical trees) on, and within, insulating materials.
Formation
Two-dimensional (2D) Lichtenberg figures can be produced by placing a sharp-pointed needle perpendicular to the surface of a non-conducting plate, such as of resin, ebonite, or glass. The point is positioned very near or contacting the plate. A source of high voltage such as a Leyden jar (a type of capacitor) or a static electricity generator is applied to the needle, typically through a spark gap. This creates a sudden, small electrical discharge along the surface of the plate. This deposits stranded areas of charge onto the surface of the plate. These electrified areas are then tested by sprinkling a mixture of powdered flowers of sulfur and red lead (Pb3O4 or lead tetroxide) onto the plate.
During handling, powdered sulfur tends to acquire a slight negative charge, while red lead tends to acquire a slight positive charge. The negatively electrified sulfur is attracted to the positively electrified areas of the plate, while the positively electrified red lead is attracted to the negatively electrified areas.
In addition to the distribution of colors thereby produced, there is also a marked difference in the form of the figure, according to the polarity of the electrical charge that was applied to the plate. If the charge areas were positive, a widely extending patch is seen on the plate, consisting of a dense nucleus, from which branches radiate in all directions. Negatively charged areas are considerably smaller and have a sharp circular or fan-like boundary entirely devoid of branches. Heinrich Rudolf Hertz employed Lichtenberg dust figures in his seminal work proving Maxwell's electromagnetic wave theories.
If the plate receives a mixture of positive and negative charges as, for example, from an induction coil, a mixed figure results, consisting of a large red central nucleus, corresponding to the negative charge, surrounded by yellow rays, corresponding to the positive charge. The difference between positive and negative figures seems to depend on the presence of air; for the difference tends to disappear when the experiment is conducted in vacuum. Peter T. Riess (a 19th-century researcher) theorized that the negative electrification of the plate was caused by the friction of the water vapour, etc., driven along the surface by the explosion which accompanies the disruptive discharge at the point. This electrification would favor the spread of a positive, but hinder that of a negative discharge.
It is now known that electrical charges are transferred to the insulator's surface through small spark discharges that occur along the boundary between the gas and insulator surface. Once transferred to the insulator, these excess charges become temporarily stranded. The shapes of the resulting charge distributions reflect the shape of the spark discharges which, in turn, depend on the high voltage polarity and pressure of the gas. Using a higher applied voltage will generate larger diameter and more branched figures. It is now known that positive Lichtenberg figures have longer, branching structures because long sparks within air can more easily form and propagate from positively charged high voltage terminals. This property has been used to measure the transient voltage polarity and magnitude of lightning surges on electrical power lines.
Another type of 2D Lichtenberg figure can be created when an insulating surface becomes contaminated with semiconducting material. When a high voltage is applied across the surface, leakage currents may cause localized heating and progressive degradation and charring of the underlying material. Over time, branching, tree-like carbonized patterns are formed upon the surface of the insulator called electrical trees. This degradation process is called tracking. If the conductive paths ultimately bridge the insulating space, the result is catastrophic failure of the insulating material. Some artists purposely apply salt water to the surface of wood or cardboard and then apply a high voltage across the surface to generate complex carbonized 2D Lichtenberg figures on the surface.
Fractal similarities
The branching, self-similar patterns observed in Lichtenberg figures exhibit fractal properties. Lichtenberg figures often develop during the dielectric breakdown of solids, liquids, and even gases. Their appearance and growth appear to be related to a process called diffusion-limited aggregation (DLA). A useful macroscopic model that combines an electric field with DLA was developed by Niemeyer, Pietronero, and Wiesmann in 1984, and is known as the dielectric breakdown model (DBM).
Although the electrical breakdown mechanisms of air and PMMA plastic are considerably different, the branching discharges turn out to be related. The branching forms taken by natural lightning also have fractal characteristics.
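The sketch below is a minimal lattice version of diffusion-limited aggregation: random walkers released near a seed stick to the growing cluster on contact, producing branching growth reminiscent of a Lichtenberg figure. Grid size, walker count, and launch radius are arbitrary illustrative choices, and the dielectric breakdown model proper would additionally weight growth by the local electric field, which this sketch omits.

```python
# Minimal lattice sketch of diffusion-limited aggregation (DLA): random
# walkers stick to the cluster on contact, producing branching growth.
# Parameters are illustrative; the dielectric breakdown model would also
# weight growth by the local electric field, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)
size = 101
center = size // 2
grid = np.zeros((size, size), dtype=bool)
grid[center, center] = True                      # seed particle
max_r = 1                                        # current cluster radius

steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def launch(radius):
    """Start a walker on a circle of the given radius around the seed."""
    angle = rng.uniform(0, 2 * np.pi)
    return (center + int(radius * np.cos(angle)),
            center + int(radius * np.sin(angle)))

for _ in range(400):                             # number of particles to attach
    r_launch = min(max_r + 5, size // 2 - 5)
    i, j = launch(r_launch)
    while True:
        di, dj = steps[rng.integers(4)]
        i, j = i + di, j + dj
        if not (1 <= i < size - 1 and 1 <= j < size - 1):
            i, j = launch(r_launch)              # wandered off the grid: relaunch
            continue
        # Stick when a 4-neighbour already belongs to the cluster
        if grid[i + 1, j] or grid[i - 1, j] or grid[i, j + 1] or grid[i, j - 1]:
            grid[i, j] = True
            max_r = max(max_r, int(np.hypot(i - center, j - center)) + 1)
            break
```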
Constructal law
Lichtenberg figures are examples of natural phenomena which exhibit fractal properties. The emergence and evolution of these and the other tree-like structures that abound in nature are summarized by the constructal law. First published by Duke professor Adrian Bejan in 1996, the constructal law is a first principle of physics which summarizes the tendency in nature to generate configurations (patterns, designs) that facilitate the free movement of the imposed currents that flow through it. The constructal law predicts that the tree-like designs described in this article should emerge and evolve to facilitate the movement (point-to-area) of the electrical currents flowing through them.
Natural occurrences
Lichtenberg figures are fern-like patterns that may appear on the skin of lightning strike victims and typically disappear in 24 hours. They are also known as Keraunographic markings.
A lightning strike can also create a large Lichtenberg figure in grass surrounding the point struck. These are sometimes found on golf courses or in grassy meadows. Branching root-shaped "fulgurite" mineral deposits may also be created as sand and soil is fused into glassy tubes by the intense heat of the current.
Electrical treeing often occurs in high-voltage equipment prior to causing complete breakdown. Following these Lichtenberg figures within the insulation during post-accident investigation of an insulation failure can be useful in finding the cause of breakdown. From the direction and shape of the trees and their branches, an experienced high-voltage engineer can see exactly the point where the insulation began to break down, and using that knowledge possibly find the initial cause as well. Broken-down transformers, high-voltage cables, bushings, and other equipment can usefully be investigated in this manner. The insulation is unrolled (in the case of paper insulation) or sliced in thin slices (in the case of solid insulating materials). The results are then sketched or photographed to create a record of the breakdown process.
In insulating materials
Modern Lichtenberg figures can also be created within solid insulating materials, such as acrylic (polymethyl methacrylate or PMMA) or glass by injecting them with a beam of high energy electrons from a linear electron beam accelerator (or Linac, a type of particle accelerator). Inside the Linac, electrons are focused and accelerated to form a beam of high speed particles. Electrons emerging from the accelerator have energies up to 25 MeV and are moving at an appreciable fraction (95 - 99+ percent) of the speed of light (relativistic velocities).
If the electron beam is aimed towards a thick acrylic specimen, the electrons easily penetrate the surface of the acrylic, rapidly decelerating as they collide with molecules inside the plastic, finally coming to rest deep inside the specimen. Since acrylic is an excellent electrical insulator, these electrons become temporarily trapped within the specimen, forming a plane of excess negative charge. Under continued irradiation, the amount of trapped charge builds, until the effective voltage inside the specimen reaches millions of volts. Once the electrical stress exceeds the dielectric strength of the plastic, some portions suddenly become conductive in a process called dielectric breakdown.
During breakdown, branching tree or fern-like conductive channels rapidly form and propagate through the plastic, allowing the trapped charge to suddenly rush out in a miniature lightning-like flash and bang. Breakdown of a charged specimen may also be manually triggered by poking the plastic with a pointed conductive object to create a point of excessive voltage stress. During the discharge, the powerful electric sparks leave thousands of branching chains of fractures behind - creating a permanent Lichtenberg figure inside the specimen. Although the internal charge within the specimen is negative, the discharge is initiated from the positively charged exterior surfaces of the specimen, so that the resulting discharge creates a positive Lichtenberg figure. These objects are sometimes called electron trees, beam trees, or lightning trees.
As the electrons rapidly decelerate inside the acrylic, they also generate powerful X-rays. Residual electrons and X-rays darken the acrylic by introducing defects (color centers) in a process called solarization. Solarization initially turns acrylic specimens a lime green color which then changes to an amber color after the specimen has been discharged. The color usually fades over time, and gentle heating, combined with oxygen, accelerates the fading process.
On wood
Lichtenberg figures can also be produced on wood. The types of wood and grain patterns affect the shape of the Lichtenberg figure produced. By applying a coat of electrolytic solution to the surface of the wood, the resistance of the surface drops considerably. Two electrodes are then placed on the wood and a high voltage is passed across them. Current from the electrodes will cause the surface of the wood to heat up, until the electrolyte boils and the wooden surface burns. Because the charred surface of the wood is mildly conductive, the surface of the wood will burn in a pattern outwards from the electrodes. The process can be dangerous, resulting in deaths every year from electrocution.
See also
Crown shyness
Dielectric breakdown model
Fractal curve
Kirlian photography
Lightning burn
Patterns in nature
Diffusion-limited aggregation
References
External links
What are Lichtenberg Figures and how are they created?
Lichtenberg Figures, Glass and Gemstones
1927 General Electric Review Article about Lichtenberg Figures
Dielectric Breakdown Model (DBM)
Trap Lightning in a Block. (DIY Lichtenberg Figure at Popular Science)
Lichtenbergs in acrylic in 3d. 1 2 3 (Requires QuickTime VR to view.)
Bibliography of Fulgurites
Lichtenberg wood burning With a Welder
Electricity
Electrical breakdown
Lightning
Dielectrics
Fractals | 0.765822 | 0.997196 | 0.763674 |
Sunlight | Sunlight is a portion of the electromagnetic radiation given off by the Sun, in particular infrared, visible, and ultraviolet light. On Earth, sunlight is scattered and filtered through Earth's atmosphere as daylight when the Sun is above the horizon. When direct solar radiation is not blocked by clouds, it is experienced as sunshine, a combination of bright light and radiant heat (atmospheric). When blocked by clouds or reflected off other objects, sunlight is diffused. Sources estimate a global average of between 164 and 340 watts per square meter over a 24-hour day; this figure is estimated by NASA to be about a quarter of Earth's average total solar irradiance.
The ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a requisite for vitamin D3 synthesis and a mutagen.
Sunlight takes about 8.3 minutes to reach Earth from the surface of the Sun. A photon starting at the center of the Sun and changing direction every time it encounters a charged particle would take between 10,000 and 170,000 years to get to the surface.
Sunlight is a key factor in photosynthesis, the process used by plants and other autotrophic organisms to convert light energy, normally from the Sun, into chemical energy that can be used to synthesize carbohydrates and fuel the organisms' activities.
Daylighting is the natural lighting of interior spaces by admitting sunlight.
Solar irradiance is the solar energy available from sunlight.
Measurement
Researchers can measure the intensity of sunlight using a sunshine recorder, pyranometer, or pyrheliometer. To calculate the amount of sunlight reaching the ground, both the eccentricity of Earth's elliptic orbit and the attenuation by Earth's atmosphere have to be taken into account. The extraterrestrial solar illuminance, corrected for the elliptic orbit by using the day number of the year (dn), is given to a good approximation by
where dn=1 on January 1; dn=32 on February 1; dn=59 on March 1 (except in leap years, where dn=60), etc. In this formula dn–3 is used because, in modern times, Earth's perihelion (its closest approach to the Sun, and therefore the maximum of the extraterrestrial illuminance) occurs around January 3 each year. The value of 0.033412 is determined from the fact that the ratio of the perihelion distance (0.98328989 AU) squared to the aphelion distance (1.01671033 AU) squared should be approximately 0.935338.
The solar illuminance constant is equal to 128×10³ lux. The direct normal illuminance, corrected for the attenuating effects of the atmosphere, is given by:
where is the atmospheric extinction and is the relative optical airmass. The atmospheric extinction brings the number of lux down to around 100,000 lux.
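These formulas can be evaluated directly; in the sketch below the extinction coefficient and relative optical airmass are assumed illustrative values for a clear sky with the Sun near the zenith, not measured quantities.

```python
# Sketch of the illuminance formulas above.  The extinction coefficient c and
# relative optical airmass m are assumed illustrative values (clear sky, Sun
# near the zenith); real values vary with atmospheric conditions.
import math

E_sc = 128e3            # solar illuminance constant, lux

def extraterrestrial_illuminance(dn):
    """Extraterrestrial illuminance corrected for the elliptic orbit."""
    return E_sc * (1 + 0.033412 * math.cos(2 * math.pi * (dn - 3) / 365))

def direct_normal_illuminance(dn, c=0.21, m=1.0):
    """Direct normal illuminance after atmospheric attenuation exp(-c*m)."""
    return extraterrestrial_illuminance(dn) * math.exp(-c * m)

print(extraterrestrial_illuminance(3))     # ~132,000 lux near perihelion (early January)
print(direct_normal_illuminance(172))      # ~100,000 lux near the June solstice
```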
The total amount of energy received at ground level from the Sun at the zenith depends on the distance to the Sun and thus on the time of year. It is about 3.3% higher than average in January and 3.3% lower in July (see below). If the extraterrestrial solar radiation is 1,367 watts per square meter (the value when the Earth–Sun distance is 1 astronomical unit), then the direct sunlight at Earth's surface when the Sun is at the zenith is about 1,050 W/m2, but the total amount (direct and indirect from the atmosphere) hitting the ground is around 1,120 W/m2. In terms of energy, sunlight at Earth's surface is around 52 to 55 percent infrared (above 700 nm), 42 to 43 percent visible (400 to 700 nm), and 3 to 5 percent ultraviolet (below 400 nm). At the top of the atmosphere, sunlight is about 30% more intense, having about 8% ultraviolet (UV), with most of the extra UV consisting of biologically damaging short-wave ultraviolet.
Direct sunlight has a luminous efficacy of about 93 lumens per watt of radiant flux. This is higher than the efficacy (of source) of artificial lighting other than LEDs, which means using sunlight for illumination heats up a room less than fluorescent or incandescent lighting. Multiplying the figure of 1,050 watts per square meter by 93 lumens per watt indicates that bright sunlight provides an illuminance of approximately 98,000 lux (lumens per square meter) on a perpendicular surface at sea level. The illumination of a horizontal surface will be considerably less than this if the Sun is not very high in the sky. Averaged over a day, the highest amount of sunlight on a horizontal surface occurs in January at the South Pole (see insolation).
Dividing the irradiance of 1,050 W/m2 by the size of the Sun's disk in steradians gives an average radiance of 15.4 MW per square metre per steradian. (However, the radiance at the center of the sun's disk is somewhat higher than the average over the whole disk due to limb darkening.) Multiplying this by π gives an upper limit to the irradiance which can be focused on a surface using mirrors: 48.5 MW/m2.
Composition and power
The spectrum of the Sun's solar radiation can be compared to that of a black body with a temperature of about 5,800 K (see graph). The Sun emits EM radiation across most of the electromagnetic spectrum. Although the radiation created in the solar core consists mostly of X-rays, internal absorption and thermalization convert these super-high-energy photons to lower-energy photons before they reach the Sun's surface and are emitted out into space. As a result, the photosphere of the Sun does not emit much X radiation (solar X-rays), although it does emit such "hard radiations" as X-rays and even gamma rays during solar flares. The quiet (non-flaring) Sun, including its corona, emits a broad range of wavelengths: X-rays, ultraviolet, visible light, infrared, and radio waves. Different depths in the photosphere have different temperatures, and this partially explains the deviations from a black-body spectrum.
There is also a flux of gamma rays from the quiescent sun, obeying a power law between 0.5 and 2.6 TeV. Some gamma rays are caused by cosmic rays interacting with the solar atmosphere, but this does not explain these findings.
The only direct signature of the nuclear processes in the core of the Sun is via the very weakly interacting neutrinos.
Although the solar corona is a source of extreme ultraviolet and X-ray radiation, these rays make up only a very small amount of the power output of the Sun (see spectrum at right). The spectrum of nearly all solar electromagnetic radiation striking the Earth's atmosphere spans a range of 100 nm to about 1 mm (1,000,000 nm). This band of significant radiation power can be divided into five regions in increasing order of wavelengths:
Ultraviolet C or (UVC) range, which spans a range of 100 to 280 nm. The term ultraviolet refers to the fact that the radiation is at higher frequency than violet light (and, hence, also invisible to the human eye). Due to absorption by the atmosphere very little reaches Earth's surface. This spectrum of radiation has germicidal properties, as used in germicidal lamps.
Ultraviolet B or (UVB) range spans 280 to 315 nm. It is also greatly absorbed by the Earth's atmosphere, and along with UVC causes the photochemical reaction leading to the production of the ozone layer. It directly damages DNA and causes sunburn. In addition to this short-term effect it enhances skin ageing and significantly promotes the development of skin cancer, but is also required for vitamin D synthesis in the skin of mammals.
Ultraviolet A or (UVA) spans 315 to 400 nm. This band was once held to be less damaging to DNA, and hence is used in cosmetic artificial sun tanning (tanning booths and tanning beds) and PUVA therapy for psoriasis. However, UVA is now known to cause significant damage to DNA via indirect routes (formation of free radicals and reactive oxygen species), and can cause cancer.
Visible range or light spans 380 to 700 nm. As the name suggests, this range is visible to the naked eye. It is also the strongest output range of the Sun's total irradiance spectrum.
Infrared range that spans 700 nm to 1,000,000 nm (1 mm). It comprises an important part of the electromagnetic radiation that reaches Earth. Scientists divide the infrared range into three types on the basis of wavelength:
Infrared-A: 700 nm to 1,400 nm
Infrared-B: 1,400 nm to 3,000 nm
Infrared-C: 3,000 nm to 1 mm.
Published tables
Tables of direct solar radiation on various slopes from 0 to 60 degrees north latitude, in calories per square centimetre, issued in 1972 and published by Pacific Northwest Forest and Range Experiment Station, Forest Service, U.S. Department of Agriculture, Portland, Oregon, USA, appear on the web.
Intensity in the Solar System
Different bodies of the Solar System receive light of an intensity inversely proportional to the square of their distance from the Sun.
A table comparing the amount of solar radiation received by each planet in the Solar System at the top of its atmosphere:
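As a rough illustration of this inverse-square scaling, the short sketch below estimates top-of-atmosphere irradiance for several planets; the semi-major axes and the 1,361 W/m2 solar constant are assumed input values:

```python
# Minimal sketch of the inverse-square scaling of solar irradiance.
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU (assumed value)

distances_au = {
    "Mercury": 0.387, "Venus": 0.723, "Earth": 1.000, "Mars": 1.524,
    "Jupiter": 5.203, "Saturn": 9.537, "Uranus": 19.19, "Neptune": 30.07,
}

for body, d in distances_au.items():
    print(f"{body:8s} {SOLAR_CONSTANT / d**2:10.1f} W/m^2")
```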
The actual brightness of sunlight that would be observed at the surface also depends on the presence and composition of an atmosphere. For example, Venus's thick atmosphere reflects more than 60% of the solar light it receives. The actual illumination of the surface is about 14,000 lux, comparable to that on Earth "in the daytime with overcast clouds".
Sunlight on Mars would be more or less like daylight on Earth during a slightly overcast day, and, as can be seen in the pictures taken by the rovers, there is enough diffuse sky radiation that shadows would not seem particularly dark. Thus, it would give perceptions and "feel" very much like Earth daylight. The spectrum on the surface is slightly redder than that on Earth, due to scattering by reddish dust in the Martian atmosphere.
For comparison, sunlight on Saturn is slightly brighter than Earth sunlight at the average sunset or sunrise. Even on Pluto, the sunlight would still be bright enough to almost match the average living room. To see sunlight as dim as full moonlight on Earth, a distance of about 500 AU (~69 light-hours) is needed; only a handful of objects in the Solar System have been discovered that are known to orbit farther than such a distance, among them 90377 Sedna.
Variations in solar irradiance
Seasonal and orbital variation
On Earth, the solar radiation varies with the angle of the Sun above the horizon, with longer sunlight duration at high latitudes during summer, varying to no sunlight at all in winter near the pertinent pole. When the direct radiation is not blocked by clouds, it is experienced as sunshine. The warming of the ground (and other objects) depends on the absorption of the electromagnetic radiation in the form of heat.
The amount of radiation intercepted by a planetary body varies inversely with the square of the distance between the star and the planet. Earth's orbit and obliquity change with time (over thousands of years), sometimes forming a nearly perfect circle, and at other times stretching out to an orbital eccentricity of 5% (currently 1.67%). As the orbital eccentricity changes, the average distance from the Sun (the semi-major axis) does not significantly vary, and so the total insolation over a year remains almost constant due to Kepler's second law,

$$\frac{2A}{r^2}\,dt = d\theta,$$

where $A$ is the "areal velocity" invariant. That is, the integration over the orbital period (also invariant) is a constant:

$$\int_0^T \frac{2A}{r^2}\,dt = \int_0^{2\pi} d\theta = 2\pi.$$
If we assume the solar radiation power to be constant over time and the solar irradiance to be given by the inverse-square law, we also obtain an average insolation that is approximately constant. However, the seasonal and latitudinal distribution and intensity of solar radiation received at Earth's surface does vary. The effect of Sun angle on climate results in the change in solar energy in summer and winter. For example, at latitudes of 65 degrees, this can vary by more than 25% as a result of Earth's orbital variation. Because changes in winter and summer tend to offset, the change in the annual average insolation at any given location is near zero, but the redistribution of energy between summer and winter does strongly affect the intensity of seasonal cycles. Such changes associated with the redistribution of solar energy are considered a likely cause for the coming and going of recent ice ages (see: Milankovitch cycles).
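A quick numerical check of the claim that the annual-mean insolation is nearly independent of eccentricity (a minimal sketch, assuming a fixed semi-major axis and a 1,361 W/m2 irradiance at that distance):

```python
import numpy as np

# Sketch: with the semi-major axis fixed, the time-averaged insolation over one
# orbit depends only weakly on eccentricity (Kepler's second law weights dt by r^2).
S0 = 1361.0   # irradiance at r = a, W/m^2 (assumed)
a = 1.0       # semi-major axis, AU

def mean_insolation(e, n=1_000_000):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = a * (1.0 - e**2) / (1.0 + e * np.cos(theta))  # orbit shape r(theta)
    dt_weight = r**2                                   # Kepler's 2nd law: dt ∝ r^2 dθ
    s = S0 * (a / r)**2                                # inverse-square irradiance
    return np.sum(s * dt_weight) / np.sum(dt_weight)   # time average over one orbit

for e in (0.0, 0.0167, 0.05):
    print(f"e = {e:<7} annual-mean insolation ≈ {mean_insolation(e):.2f} W/m^2")
```

Even at an eccentricity of 5%, the annual mean changes by little more than 0.1%, consistent with the statement above.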
Solar intensity variation
Space-based observations of solar irradiance started in 1978. These measurements show that the solar constant is not constant. It varies on many time scales, including the 11-year sunspot solar cycle. When going further back in time, one has to rely on irradiance reconstructions, using sunspots for the past 400 years or cosmogenic radionuclides for going back 10,000 years.
Such reconstructions have been done. These studies show that in addition to the solar irradiance variation with the 11-year (Schwabe) solar cycle, the solar activity varies with longer cycles, such as the proposed 88-year (Gleissberg) cycle, 208-year (DeVries) cycle and 1,000-year (Eddy) cycle.
Solar irradiance
Solar constant
The solar constant, a measure of flux density, is the amount of incoming solar electromagnetic radiation per unit area that would be incident on a plane perpendicular to the rays, at a distance of one astronomical unit (AU) (roughly the mean distance from the Sun to Earth). The "solar constant" includes all types of solar radiation, not just the visible light. Its average value was thought to be approximately 1,366 W/m2, varying slightly with solar activity, but recent recalibrations of the relevant satellite observations indicate that a value closer to 1,361 W/m2 is more realistic.
Total solar irradiance (TSI) and spectral solar irradiance (SSI) upon Earth
Since 1978, a series of overlapping NASA and ESA satellite experiments have measured total solar irradiance (TSI) – the amount of solar radiation received at the top of Earth's atmosphere – as 1.365 kilowatts per square meter (kW/m2). TSI observations continue with the ACRIMSAT/ACRIM3, SOHO/VIRGO and SORCE/TIM satellite experiments. Observations have revealed variation of TSI on many timescales, including the solar magnetic cycle and many shorter periodic cycles. TSI provides the energy that drives Earth's climate, so continuation of the TSI time-series database is critical to understanding the role of solar variability in climate change.
Since 2003, the SORCE Spectral Irradiance Monitor (SIM) has monitored Spectral solar irradiance (SSI) – the spectral distribution of the TSI. Data indicate that SSI at UV (ultraviolet) wavelength corresponds in a less clear, and probably more complicated fashion, with Earth's climate responses than earlier assumed, fueling broad avenues of new research in "the connection of the Sun and stratosphere, troposphere, biosphere, ocean, and Earth's climate".
Surface illumination and spectrum
The spectrum of surface illumination depends upon solar elevation due to atmospheric effects, with the blue spectral component dominating during twilight before and after sunrise and sunset, respectively, and red dominating during sunrise and sunset. These effects are apparent in natural light photography where the principal source of illumination is sunlight as mediated by the atmosphere.
While the color of the sky is usually determined by Rayleigh scattering, an exception occurs at sunset and twilight. "Preferential absorption of sunlight by ozone over long horizon paths gives the zenith sky its blueness when the sun is near the horizon".
Spectral composition of sunlight at Earth's surface
The Sun may be said to illuminate, which is a measure of the light within a specific sensitivity range. Many animals (including humans) have a sensitivity range of approximately 400–700 nm, and given optimal conditions the absorption and scattering by Earth's atmosphere produces illumination that approximates an equal-energy illuminant for most of this range. The useful range for color vision in humans, for example, is approximately 450–650 nm. Aside from effects that arise at sunset and sunrise, the spectral composition changes primarily in respect to how directly sunlight is able to illuminate. When illumination is indirect, Rayleigh scattering in the upper atmosphere will lead blue wavelengths to dominate. Water vapour in the lower atmosphere produces further scattering and ozone, dust and water particles will also absorb particular wavelengths.
Life on Earth
The existence of nearly all life on Earth is fueled by light from the Sun. Most autotrophs, such as plants, use the energy of sunlight, combined with carbon dioxide and water, to produce simple sugars—a process known as photosynthesis. These sugars are then used as building-blocks and in other synthetic pathways that allow the organism to grow.
Heterotrophs, such as animals, use light from the Sun indirectly by consuming the products of autotrophs, either by consuming autotrophs, by consuming their products, or by consuming other heterotrophs. The sugars and other molecular components produced by the autotrophs are then broken down, releasing stored solar energy, and giving the heterotroph the energy required for survival. This process is known as cellular respiration.
In prehistory, humans began to further extend this process by putting plant and animal materials to other uses. They used animal skins for warmth, for example, or wooden weapons to hunt. These skills allowed humans to harvest more of the sunlight than was possible through glycolysis alone, and human population began to grow.
During the Neolithic Revolution, the domestication of plants and animals further increased human access to solar energy. Fields devoted to crops were enriched by inedible plant matter, providing sugars and nutrients for future harvests. Animals that had previously provided humans with only meat and tools once they were killed were now used for labour throughout their lives, fueled by grasses inedible to humans. Fossil fuels are the remnants of ancient plant and animal matter, formed using energy from sunlight and then trapped within Earth for millions of years.
Cultural aspects
The effect of sunlight is relevant to painting, evidenced for instance in works of Édouard Manet and Claude Monet on outdoor scenes and landscapes.
Many people find direct sunlight to be too bright for comfort; indeed, looking directly at the Sun can cause long-term vision damage. To compensate for the brightness of sunlight, many people wear sunglasses. Cars, many helmets and caps are equipped with visors to block the Sun from direct vision when the Sun is at a low angle. Sunshine is often blocked from entering buildings through the use of walls, window blinds, awnings, shutters, curtains, or nearby shade trees. Sunshine exposure is needed biologically for the production of Vitamin D in the skin, a vital compound needed to make strong bone and muscle in the body.
In many world religions, such as Hinduism, the Sun is considered to be a god, as it is the source of life and energy on Earth. The Sun was also considered to be a god in Ancient Egypt.
Sunbathing
Sunbathing is a popular leisure activity in which a person sits or lies in direct sunshine. People often sunbathe in comfortable places where there is ample sunlight. Some common places for sunbathing include beaches, open air swimming pools, parks, gardens, and sidewalk cafes. Sunbathers typically wear limited amounts of clothing or some simply go nude. For some, an alternative to sunbathing is the use of a sunbed that generates ultraviolet light and can be used indoors regardless of weather conditions. Tanning beds have been banned in a number of states in the world.
For many people with light skin, one purpose for sunbathing is to darken one's skin color (get a sun tan), as this is considered in some cultures to be attractive, associated with outdoor activity, vacations/holidays, and health. Some people prefer naked sunbathing so that an "all-over" or "even" tan can be obtained, sometimes as part of a specific lifestyle.
Controlled heliotherapy, or sunbathing, has been used as a treatment for psoriasis and other maladies.
Skin tanning is achieved by an increase in the dark pigment inside skin cells called melanocytes, and is an automatic response mechanism of the body to sufficient exposure to ultraviolet radiation from the Sun or from artificial sunlamps. Thus, the tan gradually disappears with time, when one is no longer exposed to these sources.
Effects on human health
The ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a principal source of vitamin D3 and a mutagen. A dietary supplement can supply vitamin D without this mutagenic effect, but bypasses natural mechanisms that would prevent overdoses of vitamin D generated internally from sunlight. Vitamin D has a wide range of positive health effects, which include strengthening bones and possibly inhibiting the growth of some cancers. Sun exposure has also been associated with the timing of melatonin synthesis, maintenance of normal circadian rhythms, and reduced risk of seasonal affective disorder.
Long-term sunlight exposure is known to be associated with the development of skin cancer, skin aging, immune suppression, and eye diseases such as cataracts and macular degeneration. Short-term overexposure is the cause of sunburn, snow blindness, and solar retinopathy.
UV rays, and therefore sunlight and sunlamps, are the only listed carcinogens that are known to have health benefits, and a number of public health organizations state that there needs to be a balance between the risks of having too much sunlight or too little. There is a general consensus that sunburn should always be avoided.
Epidemiological data shows that people who have more exposure to sunlight have less high blood pressure and cardiovascular-related mortality. While sunlight (and its UV rays) are a risk factor for skin cancer, "sun avoidance may carry more of a cost than benefit for over-all good health". A study found that there is no evidence that UV reduces lifespan in contrast to other risk factors like smoking, alcohol and high blood pressure.
Effect on plant genomes
Elevated solar UV-B doses increase the frequency of DNA recombination in Arabidopsis thaliana and tobacco (Nicotiana tabacum) plants. These increases are accompanied by strong induction of an enzyme with a key role in recombinational repair of DNA damage. Thus the level of terrestrial solar UV-B radiation likely affects genome stability in plants.
See also
Color temperature
Coronal radiative losses
Diathermancy
Fraunhofer lines
List of cities by sunshine duration
Moonlight
Light pollution
Photic sneeze reflex
Photosynthesis
Starlight
References
Further reading
Hartmann, Thom (1998). The Last Hours of Ancient Sunlight. London: Hodder and Stoughton. .
External links
Solar radiation – Encyclopedia of Earth
Total Solar Irradiance (TSI) Daily mean data at the website of the National Geophysical Data Center
Construction of a Composite Total Solar Irradiance (TSI) Time Series from 1978 to present by World Radiation Center, Physikalisch-Meteorologisches Observatorium Davos (pmod wrc)
A Comparison of Methods for Providing Solar Radiation Data to Crop Models and Decision Support Systems, Rivington et al.
Evaluation of three model estimations of solar radiation at 24 UK stations, Rivington et al.
High resolution spectrum of solar radiation from Observatoire de Paris
Measuring Solar Radiation : A lesson plan from the National Science Digital Library.
Websurf astronomical information: Online tools for calculating Rising and setting times of Sun, Moon or planet, Azimuth of Sun, Moon or planet at rising and setting, Altitude and azimuth of Sun, Moon or planet for a given date or range of dates, and more.
An Excel workbook with a solar position and solar radiation time-series calculator; by Greg Pelletier
ASTM Standard for solar spectrum at ground level in the US (latitude ~37 degrees).
Detailed spectrum of the Sun at Astronomy Picture of the Day.
Light
Atmospheric radiation
Climate forcing
Solar energy
Light sources
IARC Group 1 carcinogens | 0.765896 | 0.99709 | 0.763667 |
Atomic units | The atomic units are a system of natural units of measurement that is especially convenient for calculations in atomic physics and related scientific fields, such as computational chemistry and atomic spectroscopy. They were originally suggested and named by the physicist Douglas Hartree.
Atomic units are often abbreviated "a.u." or "au", not to be confused with similar abbreviations used for astronomical units, arbitrary units, and absorbance units in other contexts.
Motivation
In the context of atomic physics, using the atomic units system can be a convenient shortcut, eliminating symbols and numbers and reducing the order of magnitude of most numbers involved.
For example, the Hamiltonian operator in the Schrödinger equation for the helium atom with standard quantities, such as when using SI units, is

$$\hat{H} = -\frac{\hbar^2}{2m_\text{e}}\left(\nabla_1^2 + \nabla_2^2\right) - \frac{2e^2}{4\pi\varepsilon_0 r_1} - \frac{2e^2}{4\pi\varepsilon_0 r_2} + \frac{e^2}{4\pi\varepsilon_0 r_{12}},$$

but adopting the convention associated with atomic units that transforms quantities into dimensionless equivalents, it becomes

$$\hat{H} = -\frac{1}{2}\left(\nabla_1^2 + \nabla_2^2\right) - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}}.$$

In this convention, the constants $\hbar$, $m_\text{e}$, $e$, and $4\pi\varepsilon_0$ all correspond to the value 1 (see below).
The distances relevant to the physics expressed in SI units are naturally on the order of 10⁻¹⁰ m, while expressed in atomic units distances are on the order of 1 (one Bohr radius, the atomic unit of length). An additional benefit of expressing quantities using atomic units is that their values calculated and reported in atomic units do not change when values of fundamental constants are revised, since the fundamental constants are built into the conversion factors between atomic units and SI.
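As an illustration of such conversions, the sketch below derives the Bohr radius and the hartree from CODATA-style SI constants and expresses an example length in atomic units; the numerical constants and the 0.96 Å bond length are assumed inputs used only for illustration:

```python
# Illustrative sketch: converting SI values to atomic units (assumed inputs).
pi   = 3.141592653589793
hbar = 1.054_571_817e-34      # J s   (atomic unit of action)
m_e  = 9.109_383_7015e-31     # kg    (atomic unit of mass)
e    = 1.602_176_634e-19      # C     (atomic unit of charge)
eps0 = 8.854_187_8128e-12     # F/m

a0  = 4 * pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius, m
E_h = hbar**2 / (m_e * a0**2)                  # hartree energy, J

bond_length_m = 0.96e-10      # a typical O-H bond length (assumed example)
print(f"Bohr radius a0        = {a0:.4e} m")
print(f"Hartree energy E_h    = {E_h:.4e} J")
print(f"0.96 Å in atomic units = {bond_length_m / a0:.3f} a0")
```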
History
Hartree defined units based on three physical constants. The modern equivalents of the quantities appearing in his definitions are the Rydberg constant $R_\infty$, the electron mass $m_\text{e}$, the Bohr radius $a_0$, and the reduced Planck constant $\hbar$. Some of Hartree's expressions differ from the modern form due to a subsequent change in the definition of one of the underlying constants, as explained below.
In 1957, Bethe and Salpeter's book Quantum mechanics of one- and two-electron atoms built on Hartree's units, which they called atomic units, abbreviated "a.u.". They chose to use $\hbar$, their unit of action and angular momentum, in place of Hartree's length as one of the base units. They noted that the unit of length in this system is the radius of the first Bohr orbit, and that their unit of velocity is the electron velocity in Bohr's model of the first orbit.
In 1959, Shull and Hall advocated atomic units based on Hartree's model but again chose to use $\hbar$ as the defining unit. They explicitly named the distance unit a "Bohr radius"; in addition, they wrote the unit of energy as $e^2/a_0$ and called it a Hartree. These terms came to be used widely in quantum chemistry.
In 1973 McWeeny extended the system of Shull and Hall by adding permittivity in the form of $4\pi\varepsilon_0$ as a defining or base unit. Simultaneously he adopted the SI definition of the relevant quantities, so that his expression for energy in atomic units is $e^2/(4\pi\varepsilon_0 a_0)$, matching the expression in the 8th SI brochure.
Definition
A set of base units in the atomic system, as in one proposal, are the electron rest mass, the magnitude of the electronic charge, the reduced Planck constant, and the permittivity. In the atomic units system, each of these takes the value 1; the corresponding values in the International System of Units are given in the table.
Units
Three of the defining constants (reduced Planck constant, elementary charge, and electron rest mass) are atomic units themselves – of action, electric charge, and mass, respectively. Two named units are those of length (Bohr radius $a_0$) and energy (hartree $E_\text{h}$).
Conventions
Different conventions are adopted in the use of atomic units, which vary in presentation, formality and convenience.
Explicit units
Many texts (e.g. Jerrard & McNiell, Shull & Hall) define the atomic units as quantities, without a transformation of the equations in use. As such, they do not suggest either treating quantities as dimensionless or changing the form of any equations. This is consistent with expressing quantities in terms of dimensional quantities, where the atomic unit is included explicitly as a symbol (for example, a length written as a multiple of the Bohr radius $a_0$, or, more ambiguously, in "a.u."), and keeping equations unaltered with explicit constants.
Provision is also made for choosing closely related quantities that are better suited to the problem as units than universal fixed units, for example units based on the reduced mass of the electron, albeit with careful definition thereof where used (for example, a unit based on the reduced mass $\mu = m_\text{e}m/(m_\text{e}+m)$ for a specified mass $m$).
A convention that eliminates units
In atomic physics, it is common to simplify mathematical expressions by a transformation of all quantities: each quantity is replaced by its ratio to the corresponding atomic unit, so that the symbols appearing in the equations become dimensionless.
Hartree suggested that expression in terms of atomic units allows us "to eliminate various universal constants from the equations", which amounts to informally suggesting a transformation of quantities and equations such that all quantities are replaced by corresponding dimensionless quantities. He does not elaborate beyond examples.
McWeeny suggests that "... their adoption permits all the fundamental equations to be written in a dimensionless form in which constants such as , and are absent and need not be considered at all during mathematical derivations or the processes of numerical solution; the units in which any calculated quantity must appear are implicit in its physical dimensions and may be supplied at the end." He also states that "An alternative convention is to interpret the symbols as the numerical measures of the quantities they represent, referred to some specified system of units: in this case the equations contain only pure numbers or dimensionless variables; ... the appropriate units are supplied at the end of a calculation, by reference to the physical dimensions of the quantity calculated. [This] convention has much to recommend it and is tacitly accepted in atomic and molecular physics whenever atomic units are introduced, for example for convenience in computation."
An informal approach is often taken, in which "equations are expressed in terms of atomic units simply by setting $\hbar = m_\text{e} = e = 4\pi\varepsilon_0 = 1$". This is a form of shorthand for the more formal process of transformation between quantities that is suggested by others, such as McWeeny.
Physical constants
Dimensionless physical constants retain their values in any system of units. Of note is the fine-structure constant $\alpha \approx 1/137$, which appears in expressions as a consequence of the choice of units. For example, the numeric value of the speed of light, expressed in atomic units, is $c = 1/\alpha \approx 137$.
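A minimal numerical sketch of this statement, using assumed CODATA-style values for the constants:

```python
# Sketch: the numerical value of c in atomic units equals 1/alpha.
# The constants below are assumed inputs, not part of the original text.
c    = 299_792_458.0          # m/s
hbar = 1.054_571_817e-34      # J s
m_e  = 9.109_383_7015e-31     # kg
e    = 1.602_176_634e-19      # C
eps0 = 8.854_187_8128e-12     # F/m
pi   = 3.141592653589793

alpha = e**2 / (4 * pi * eps0 * hbar * c)   # fine-structure constant
v_au  = e**2 / (4 * pi * eps0 * hbar)       # atomic unit of velocity, m/s

print(f"1/alpha           = {1 / alpha:.3f}")
print(f"c in atomic units = {c / v_au:.3f}")   # same number, ~137.036
```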
Bohr model in atomic units
Atomic units are chosen to reflect the properties of electrons in atoms, which is particularly clear in the classical Bohr model of the hydrogen atom for the bound electron in its ground state:
Mass = 1 a.u. of mass
Charge = −1 a.u. of charge
Orbital radius = 1 a.u. of length
Orbital velocity = 1 a.u. of velocity
Orbital period = 2π a.u. of time
Orbital angular velocity = 1 radian per a.u. of time
Orbital momentum = 1 a.u. of momentum
Ionization energy = 1/2 a.u. of energy
Electric field (due to nucleus) = 1 a.u. of electric field
Lorentz force (due to nucleus) = 1 a.u. of force
References
Systems of units
Natural units
Atomic physics | 0.769252 | 0.992739 | 0.763666 |
Translational symmetry | In physics and mathematics, continuous translational symmetry is the invariance of a system of equations under any translation (without rotation). Discrete translational symmetry is invariant under discrete translation.
Analogously, an operator $A$ on functions is said to be translationally invariant with respect to a translation operator $T_\delta$ if the result after applying $A$ doesn't change if the argument function is translated.
More precisely it must hold that

$$\forall \delta:\ A f = A\left(T_\delta f\right).$$
Laws of physics are translationally invariant under a spatial translation if they do not distinguish different points in space. According to Noether's theorem, space translational symmetry of a physical system is equivalent to the momentum conservation law.
Translational symmetry of an object means that a particular translation does not change the object. For a given object, the translations for which this applies form a group, the symmetry group of the object, or, if the object has more kinds of symmetry, a subgroup of the symmetry group.
Geometry
Translational invariance implies that, at least in one direction, the object is infinite: for any given point p, the set of points with the same properties due to the translational symmetry with translation vector $\mathbf{a}$ forms the infinite discrete set $\{p + n\mathbf{a} \mid n \in \mathbb{Z}\}$. Fundamental domains are e.g. $H + [0, 1]\mathbf{a}$ for any hyperplane $H$ for which $\mathbf{a}$ has an independent direction. This is in 1D a line segment, in 2D an infinite strip, and in 3D a slab, such that the vector starting at one side ends at the other side. Note that the strip and slab need not be perpendicular to the vector, hence can be narrower or thinner than the length of the vector.
In spaces with dimension higher than 1, there may be multiple translational symmetries. For each set of k independent translation vectors, the symmetry group is isomorphic with $\mathbb{Z}^k$.
In particular, the multiplicity may be equal to the dimension. This implies that the object is infinite in all directions. In this case, the set of all translations forms a lattice. Different bases of translation vectors generate the same lattice if and only if one is transformed into the other by a matrix of integer coefficients of which the absolute value of the determinant is 1. The absolute value of the determinant of the matrix formed by a set of translation vectors is the hypervolume of the n-dimensional parallelepiped the set subtends (also called the covolume of the lattice). This parallelepiped is a fundamental region of the symmetry: any pattern on or in it is possible, and this defines the whole object.
See also lattice (group).
E.g. in 2D, instead of a and b we can also take a and a + b, etc. In general in 2D, we can take pa + qb and ra + sb for integers p, q, r, and s such that ps − qr is 1 or −1. This ensures that a and b themselves are integer linear combinations of the other two vectors. If not, not all translations are possible with the other pair. Each pair a, b defines a parallelogram, all with the same area, the magnitude of the cross product. One parallelogram fully defines the whole object. Without further symmetry, this parallelogram is a fundamental domain. The vectors a and b can be represented by complex numbers. For two given lattice points, equivalence of choices of a third point to generate a lattice shape is represented by the modular group, see lattice (group).
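The determinant criterion can be checked numerically. In the sketch below the basis vectors and the two integer matrices are assumed examples: a unimodular matrix (determinant ±1) preserves the cell area and hence generates the same lattice, while any other integer matrix generates only a sublattice.

```python
import numpy as np

# Sketch: integer change of basis preserves the lattice exactly when |det| = 1.
a = np.array([1.0, 0.0])
b = np.array([0.3, 1.1])              # assumed example basis vectors

M_good = np.array([[1, 1], [0, 1]])   # det = +1 -> same lattice
M_bad  = np.array([[2, 0], [0, 1]])   # det = +2 -> proper sublattice

def cell_area(u, v):
    return abs(np.linalg.det(np.stack([u, v])))

for M in (M_good, M_bad):
    new_a = M[0, 0] * a + M[0, 1] * b
    new_b = M[1, 0] * a + M[1, 1] * b
    print(f"det = {np.linalg.det(M):+.0f}, "
          f"cell area {cell_area(a, b):.2f} -> {cell_area(new_a, new_b):.2f}")
```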
Alternatively, e.g. a rectangle may define the whole object, even if the translation vectors are not perpendicular, if it has two sides parallel to one translation vector, while the other translation vector starting at one side of the rectangle ends at the opposite side.
For example, consider a tiling with equal rectangular tiles with an asymmetric pattern on them, all oriented the same, in rows, with for each row a shift of a fraction, not one half, of a tile, always the same, then we have only translational symmetry, wallpaper group p1 (the same applies without shift). With rotational symmetry of order two of the pattern on the tile we have p2 (more symmetry of the pattern on the tile does not change that, because of the arrangement of the tiles). The rectangle is a more convenient unit to consider as fundamental domain (or set of two of them) than a parallelogram consisting of part of a tile and part of another one.
In 2D there may be translational symmetry in one direction for vectors of any length. One line, not in the same direction, fully defines the whole object. Similarly, in 3D there may be translational symmetry in one or two directions for vectors of any length. One plane (cross-section) or line, respectively, fully defines the whole object.
Examples
Frieze patterns all have translational symmetries, and sometimes other kinds.
The Fourier transform with subsequent computation of absolute values is a translation-invariant operator.
The mapping from a polynomial function to the polynomial degree is a translation-invariant functional.
The Lebesgue measure is a complete translation-invariant measure.
See also
Glide reflection
Displacement
Periodic function
Lattice (group)
Translation operator (quantum mechanics)
Rotational symmetry
Lorentz symmetry
Tessellation
References
Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12. Nontechnical.
Classical mechanics
Symmetry
Conservation laws | 0.774692 | 0.985765 | 0.763664 |
Wind speed | In meteorology, wind speed, or wind flow speed, is a fundamental atmospheric quantity caused by air moving from high to low pressure, usually due to changes in temperature. Wind speed is now commonly measured with an anemometer.
Wind speed affects weather forecasting, aviation and maritime operations, construction projects, growth and metabolism rates of many plant species, and countless other implications. Wind direction is usually almost parallel to isobars (and not perpendicular, as one might expect), due to Earth's rotation.
Units
The meter per second (m/s) is the SI unit for velocity and the unit recommended by the World Meteorological Organization for reporting wind speeds, and used amongst others in weather forecasts in the Nordic countries. Since 2010 the International Civil Aviation Organization (ICAO) also recommends meters per second for reporting wind speed when approaching runways, replacing their former recommendation of using kilometers per hour (km/h).
For historical reasons, other units such as miles per hour (mph), knots (kn), and feet per second (ft/s) are also sometimes used to measure wind speeds. Historically, wind speeds have also been classified using the Beaufort scale, which is based on visual observations of specifically defined wind effects at sea or on land.
Factors affecting wind speed
Wind speed is affected by a number of factors and situations, operating on varying scales (from micro to macro scales). These include the pressure gradient, Rossby waves, jet streams, and local weather conditions. There are also links to be found between wind speed and wind direction, notably with the pressure gradient and terrain conditions.
The pressure gradient describes the difference in air pressure between two points in the atmosphere or on the surface of the Earth. It is vital to wind speed, because the greater the difference in pressure, the faster the wind flows (from the high to the low pressure) to balance out the variation. The pressure gradient, when combined with the Coriolis effect and friction, also influences wind direction.
Rossby waves are strong winds in the upper troposphere. These operate on a global scale and move from west to east (hence being known as westerlies). The Rossby waves differ in speed from the winds experienced in the lower troposphere.
Local weather conditions play a key role in influencing wind speed, as the formation of hurricanes, monsoons, and cyclones can drastically affect the flow velocity of the wind.
Highest speed
Non-tornadic
The fastest wind speed not related to tornadoes ever recorded was during the passage of Tropical Cyclone Olivia on 10 April 1996: an automatic weather station on Barrow Island, Australia, registered a maximum wind gust of 408 km/h (113 m/s; 220 kn). The wind gust was evaluated by the WMO Evaluation Panel, which found that the anemometer was mechanically sound and that the gust was within statistical probability, and ratified the measurement in 2010. The anemometer was mounted 10 m above ground level (and thus 64 m above sea level). During the cyclone, several other extreme gusts were recorded; the extreme gust factor was on the order of 2.27–2.75 times the 5-minute mean wind speed. The pattern and scales of the gusts suggest that a mesovortex was embedded in the already-strong eyewall of the cyclone.
Currently, the second-highest surface wind speed ever officially recorded is 372 km/h (103 m/s; 231 mph), at the Mount Washington (New Hampshire) Observatory, 1,917 m (6,288 ft) above sea level, in the US on 12 April 1934, using a hot-wire anemometer. The anemometer, specifically designed for use on Mount Washington, was later tested by the US National Weather Bureau and confirmed to be accurate.
Tornadic
Wind speeds within certain atmospheric phenomena (such as tornadoes) may greatly exceed these values but have never been accurately measured. Directly measuring these tornadic winds is rarely done, as the violent wind would destroy the instruments. A method of estimating the speed is to use Doppler on Wheels or other mobile Doppler radars to measure the wind speeds remotely. Using this method, a mobile radar (RaXPol) owned and operated by the University of Oklahoma recorded winds inside the 2013 El Reno tornado that are the fastest ever observed by radar. In 1999, a mobile radar measured winds of about 301 mph (484 km/h) during the 1999 Bridge Creek–Moore tornado in Oklahoma on 3 May, although a higher figure of 318 mph (512 km/h) has also been quoted for the same tornado, and the Center for Severe Weather Research cites yet another value for that measurement. However, speeds measured by Doppler weather radar are not considered official records.
Wind speeds can be much higher on exoplanets. Scientists at the University of Warwick in 2015 determined that HD 189733b has winds of about 2 km/s (roughly 7,000 km/h). In a press release, the University announced that the methods used for measuring HD 189733b's wind speeds could be used to measure wind speeds on Earth-like exoplanets.
Measurement
An anemometer is one of the tools used to measure wind speed. A device consisting of a vertical pillar and three or four concave cups, the anemometer captures the horizontal movement of air particles (wind speed).
Unlike traditional cup-and-vane anemometers, ultrasonic wind sensors have no moving parts and are therefore used to measure wind speed in applications that require maintenance-free performance, such as atop wind turbines. As the name suggests, ultrasonic wind sensors measure the wind speed using high-frequency sound. An ultrasonic anemometer has two or three pairs of sound transmitters and receivers. Each transmitter constantly beams high-frequency sound to its receiver. Electronic circuits inside measure the time it takes for the sound to make its journey from each transmitter to the corresponding receiver. Depending on how the wind blows, some of the sound beams will be affected more than others, slowing them down or speeding them up very slightly. The circuits measure the difference in speeds of the beams and use that to calculate how fast the wind is blowing.
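A minimal sketch of the transit-time principle described above, with assumed path length, sound speed and wind component; the wind speed is recovered from the difference of the reciprocal travel times:

```python
# Sketch of the ultrasonic transit-time principle (assumed example values).
L = 0.15          # distance between a transducer pair, m
c = 343.0         # speed of sound, m/s (drops out of the wind estimate)
v_true = 5.0      # wind component along the acoustic path, m/s

t_down = L / (c + v_true)    # sound travelling with the wind
t_up   = L / (c - v_true)    # sound travelling against the wind

v_est = (L / 2.0) * (1.0 / t_down - 1.0 / t_up)   # recovered wind component
c_est = (L / 2.0) * (1.0 / t_down + 1.0 / t_up)   # recovered sound speed

print(f"estimated wind component = {v_est:.2f} m/s")
print(f"estimated sound speed    = {c_est:.1f} m/s")
```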
Acoustic resonance wind sensors are a variant of the ultrasonic sensor. Instead of using time of flight measurement, acoustic resonance sensors use resonating acoustic waves within a small purpose-built cavity. Built into the cavity is an array of ultrasonic transducers, which are used to create the separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the wave's property occurs (phase shift). By measuring the amount of phase shift in the received signals by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction.
Another tool used to measure wind velocity is GPS combined with a pitot tube. A fluid-flow velocity instrument, the pitot tube is primarily used to determine the air velocity of an aircraft.
Design of structures
Wind speed is a common factor in the design of structures and buildings around the world. It is often the governing factor in the required lateral strength of a structure's design.
In the United States, the wind speed used in design is often referred to as a "3-second gust", which is the highest sustained gust over a 3-second period having a probability of being exceeded per year of 1 in 50 (ASCE 7-05, updated to ASCE 7-16). This design wind speed is accepted by most building codes in the United States and often governs the lateral design of buildings and structures.
In Canada, reference wind pressures are used in design and are based on the "mean hourly" wind speed having a probability of being exceeded per year of 1 in 50. The reference wind pressure is calculated using the equation $q = \frac{1}{2}\rho V^2$, where $\rho$ is the air density and $V$ is the wind speed.
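For illustration, a short sketch evaluating the reference pressure for assumed values of air density and mean hourly wind speed:

```python
# Sketch of the reference-pressure formula q = 0.5 * rho * V^2 (assumed inputs).
rho = 1.225        # air density at sea level, kg/m^3
V   = 24.0         # mean hourly wind speed, m/s (assumed design value)

q = 0.5 * rho * V**2
print(f"reference wind pressure q ≈ {q:.0f} Pa")   # about 353 Pa for these inputs
```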
Historically, wind speeds have been reported with a variety of averaging times (such as fastest mile, 3-second gust, 1-minute, and mean hourly) which designers may have to take into account. To convert wind speeds from one averaging time to another, the Durst Curve was developed, which defines the relation between probable maximum wind speed averaged over some number of seconds to the mean wind speed over one hour.
See also
American Society of Civil Engineers (promulgator of ASCE 7-05, current version is ASCE 7-16)
Beaufort scale
Fujita scale and Enhanced Fujita Scale
International Building Code (promulgator of NBC 2005)
ICAO recommendations – International System of Units
Knot (unit)
Prevailing wind
Saffir–Simpson Hurricane Scale
TORRO scale
Wind direction
References
External links
Wind
Airspeed
Meteorological quantities
Wind power
Weather extremes of Earth
es:Viento#Características físicas de los vientos | 0.767216 | 0.99537 | 0.763664 |
Maxwell stress tensor | The Maxwell stress tensor (named after James Clerk Maxwell) is a symmetric second-order tensor in three dimensions that is used in classical electromagnetism to represent the interaction between electromagnetic forces and mechanical momentum. In simple situations, such as a point charge moving freely in a homogeneous magnetic field, it is easy to calculate the forces on the charge from the Lorentz force law. When the situation becomes more complicated, this ordinary procedure can become impractically difficult, with equations spanning multiple lines. It is therefore convenient to collect many of these terms in the Maxwell stress tensor, and to use tensor arithmetic to find the answer to the problem at hand.
In the relativistic formulation of electromagnetism, the nine components of the Maxwell stress tensor appear, negated, as components of the electromagnetic stress–energy tensor, which is the electromagnetic component of the total stress–energy tensor. The latter describes the density and flux of energy and momentum in spacetime.
Motivation
As outlined below, the electromagnetic force is written in terms of $\mathbf{E}$ and $\mathbf{B}$. Using vector calculus and Maxwell's equations, symmetry is sought for in the terms containing $\mathbf{E}$ and $\mathbf{B}$, and introducing the Maxwell stress tensor simplifies the result.
In the above relation for conservation of momentum, $\nabla \cdot \overleftrightarrow{\boldsymbol{\sigma}}$ is the momentum flux density and plays a role similar to $\mathbf{S}$ in Poynting's theorem.
The above derivation assumes complete knowledge of both $\rho$ and $\mathbf{J}$ (both free and bound charges and currents). For the case of nonlinear materials (such as magnetic iron with a B-H curve), the nonlinear Maxwell stress tensor must be used.
Equation
In physics, the Maxwell stress tensor is the stress tensor of an electromagnetic field. As derived above, it is given by:

$$\sigma_{ij} = \varepsilon_0 E_i E_j + \frac{1}{\mu_0} B_i B_j - \frac{1}{2}\left(\varepsilon_0 E^2 + \frac{1}{\mu_0} B^2\right)\delta_{ij},$$

where $\varepsilon_0$ is the electric constant and $\mu_0$ is the magnetic constant, $\mathbf{E}$ is the electric field, $\mathbf{B}$ is the magnetic field and $\delta_{ij}$ is Kronecker's delta. With Gaussian quantities, it is given by:

$$\sigma_{ij} = \frac{1}{4\pi}\left(E_i E_j + H_i H_j - \frac{1}{2}\left(E^2 + H^2\right)\delta_{ij}\right),$$

where $\mathbf{H}$ is the magnetizing field.
An alternative way of expressing this tensor is:

$$\overleftrightarrow{\boldsymbol{\sigma}} = \varepsilon_0\,\mathbf{E}\otimes\mathbf{E} + \frac{1}{\mu_0}\,\mathbf{B}\otimes\mathbf{B} - \frac{1}{2}\left(\varepsilon_0 E^2 + \frac{1}{\mu_0}B^2\right)\mathbf{1},$$

where $\otimes$ is the dyadic product, and the last tensor is the unit dyad:

$$\mathbf{1} = \hat{\mathbf{x}}\otimes\hat{\mathbf{x}} + \hat{\mathbf{y}}\otimes\hat{\mathbf{y}} + \hat{\mathbf{z}}\otimes\hat{\mathbf{z}} = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}.$$
The element $\sigma_{ij}$ of the Maxwell stress tensor has units of momentum per unit of area per unit time and gives the flux of momentum parallel to the $i$th axis crossing a surface normal to the $j$th axis (in the negative direction) per unit of time.
These units can also be seen as units of force per unit of area (negative pressure), and the element $\sigma_{ij}$ of the tensor can also be interpreted as the force parallel to the $i$th axis suffered by a surface normal to the $j$th axis per unit of area. Indeed, the diagonal elements give the tension (pulling) acting on a differential area element normal to the corresponding axis. Unlike forces due to the pressure of an ideal gas, an area element in the electromagnetic field also feels a force in a direction that is not normal to the element. This shear is given by the off-diagonal elements of the stress tensor.
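A minimal numerical sketch of the tensor defined above: it assembles $\sigma$ for assumed field values, checks that its trace equals minus the electromagnetic energy density, and reads off the traction (force per unit area) on a surface with a chosen normal.

```python
import numpy as np

# Sketch: SI Maxwell stress tensor for assumed E and B field values.
eps0 = 8.854_187_8128e-12   # F/m
mu0  = 1.256_637_062e-6     # N/A^2

E = np.array([1.0e5, 0.0, 0.0])   # V/m (assumed)
B = np.array([0.0, 0.01, 0.0])    # T   (assumed)

u = 0.5 * (eps0 * (E @ E) + (B @ B) / mu0)                 # energy density
sigma = eps0 * np.outer(E, E) + np.outer(B, B) / mu0 - u * np.eye(3)

n = np.array([0.0, 0.0, 1.0])     # unit normal of the surface of interest
traction = sigma @ n              # force per unit area on that surface, N/m^2

print("trace(sigma) equals -u:", np.isclose(np.trace(sigma), -u))
print("traction on the z-surface:", traction)
```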
It has recently been shown that the Maxwell stress tensor is the real part of a more general complex electromagnetic stress tensor whose imaginary part accounts for reactive electrodynamical forces.
In magnetostatics
If the field is only magnetic (which is largely true in motors, for instance), some of the terms drop out, and the equation in SI units becomes:

$$\sigma_{ij} = \frac{1}{\mu_0}\left(B_i B_j - \frac{1}{2}\delta_{ij}B^2\right).$$

For cylindrical objects, such as the rotor of a motor, this is further simplified to:

$$\sigma_{rt} = \frac{1}{\mu_0} B_r B_t,$$

where $r$ denotes the radial (outward from the cylinder) direction and $t$ the tangential (around the cylinder) direction, so that $\sigma_{rt}$ is the shear stress acting tangentially on the cylinder surface. It is this tangential force which spins the motor. $B_r$ is the flux density in the radial direction, and $B_t$ is the flux density in the tangential direction.
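A small worked example of the cylindrical form, with assumed air-gap flux densities:

```python
# Sketch: tangential (torque-producing) stress in a motor air gap, sigma_rt = Br*Bt/mu0.
mu0 = 1.256_637_062e-6   # N/A^2
B_r, B_t = 0.9, 0.2      # radial and tangential flux density, T (assumed values)

sigma_rt = B_r * B_t / mu0
print(f"tangential shear stress ≈ {sigma_rt / 1000:.1f} kPa")   # ~143 kPa here
```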
In electrostatics
In electrostatics the effects of magnetism are not present. In this case the magnetic field vanishes, i.e. $\mathbf{B} = \mathbf{0}$, and we obtain the electrostatic Maxwell stress tensor. It is given in component form by

$$\sigma_{ij} = \varepsilon_0 E_i E_j - \frac{1}{2}\varepsilon_0 E^2 \delta_{ij}$$

and in symbolic form by

$$\overleftrightarrow{\boldsymbol{\sigma}} = \varepsilon_0\,\mathbf{E}\otimes\mathbf{E} - \frac{1}{2}\varepsilon_0 E^2\,\mathbf{1},$$

where $\mathbf{1}$ is the appropriate identity tensor (usually $3 \times 3$).
Eigenvalue
The eigenvalues of the Maxwell stress tensor are given by:

$$\{\lambda\} = \left\{ -\left(\frac{\varepsilon_0}{2}E^2 + \frac{1}{2\mu_0}B^2\right),\ \pm\sqrt{\left(\frac{\varepsilon_0}{2}E^2 - \frac{1}{2\mu_0}B^2\right)^2 + \frac{\varepsilon_0}{\mu_0}\left(\mathbf{E}\cdot\mathbf{B}\right)^2}\ \right\}.$$

These eigenvalues are obtained by iteratively applying the matrix determinant lemma, in conjunction with the Sherman–Morrison formula.
Noting that the characteristic equation matrix, $\overleftrightarrow{\boldsymbol{\sigma}} - \lambda\mathbf{1}$, can be written as

$$\overleftrightarrow{\boldsymbol{\sigma}} - \lambda\mathbf{1} = -\left(\lambda + V\right)\mathbf{1} + \varepsilon_0\mathbf{E}\mathbf{E}^\textsf{T} + \frac{1}{\mu_0}\mathbf{B}\mathbf{B}^\textsf{T},$$

where

$$V = \frac{1}{2}\left(\varepsilon_0 E^2 + \frac{1}{\mu_0}B^2\right),$$

we set

$$\mathbf{U} = -\left(\lambda + V\right)\mathbf{1} + \varepsilon_0\mathbf{E}\mathbf{E}^\textsf{T}.$$

Applying the matrix determinant lemma once, this gives us

$$\det\left(\overleftrightarrow{\boldsymbol{\sigma}} - \lambda\mathbf{1}\right) = \left(1 + \frac{1}{\mu_0}\mathbf{B}^\textsf{T}\mathbf{U}^{-1}\mathbf{B}\right)\det\left(\mathbf{U}\right).$$

Applying it again yields,

$$\det\left(\overleftrightarrow{\boldsymbol{\sigma}} - \lambda\mathbf{1}\right) = \left(1 + \frac{1}{\mu_0}\mathbf{B}^\textsf{T}\mathbf{U}^{-1}\mathbf{B}\right)\left(1 - \frac{\varepsilon_0 E^2}{\lambda + V}\right)\left(-\lambda - V\right)^3.$$

From the last multiplicand on the RHS, we immediately see that $\lambda = -V$ is one of the eigenvalues.
To find the inverse of $\mathbf{U}$, we use the Sherman–Morrison formula:

$$\mathbf{U}^{-1} = -\frac{1}{\lambda + V}\mathbf{1} - \frac{\varepsilon_0\,\mathbf{E}\mathbf{E}^\textsf{T}}{\left(\lambda + V\right)\left[\left(\lambda + V\right) - \varepsilon_0 E^2\right]}.$$

Factoring out the $\left(-\lambda - V\right)^3$ term in the determinant, we are left with finding the zeros of the rational function:

$$\left(1 - \frac{\varepsilon_0 E^2}{\lambda + V}\right)\left(1 + \frac{1}{\mu_0}\mathbf{B}^\textsf{T}\mathbf{U}^{-1}\mathbf{B}\right) = \frac{\left[\left(\lambda + V\right) - \varepsilon_0 E^2\right]\left[\left(\lambda + V\right) - \frac{1}{\mu_0}B^2\right] - \frac{\varepsilon_0}{\mu_0}\left(\mathbf{E}\cdot\mathbf{B}\right)^2}{\left(\lambda + V\right)^2}.$$

Thus, once we solve

$$\left[\left(\lambda + V\right) - \varepsilon_0 E^2\right]\left[\left(\lambda + V\right) - \frac{1}{\mu_0}B^2\right] - \frac{\varepsilon_0}{\mu_0}\left(\mathbf{E}\cdot\mathbf{B}\right)^2 = 0,$$

we obtain the other two eigenvalues.
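The closed-form eigenvalues above can be checked against a direct numerical diagonalization; the field values below are assumed random examples:

```python
import numpy as np

# Sketch: compare numerical eigenvalues of sigma with the closed-form expression.
eps0 = 8.854_187_8128e-12
mu0  = 1.256_637_062e-6
rng = np.random.default_rng(0)
E = rng.normal(size=3) * 1e5      # V/m (assumed random example)
B = rng.normal(size=3) * 0.01     # T   (assumed random example)

u = 0.5 * (eps0 * (E @ E) + (B @ B) / mu0)
sigma = eps0 * np.outer(E, E) + np.outer(B, B) / mu0 - u * np.eye(3)

root = np.sqrt((0.5 * eps0 * (E @ E) - 0.5 * (B @ B) / mu0) ** 2
               + (eps0 / mu0) * (E @ B) ** 2)
closed_form = np.sort([-u, -root, root])

print(np.allclose(np.sort(np.linalg.eigvalsh(sigma)), closed_form))  # expect True
```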
See also
Ricci calculus
Energy density of electric and magnetic fields
Poynting vector
Electromagnetic stress–energy tensor
Magnetic pressure
Magnetic tension
References
David J. Griffiths, "Introduction to Electrodynamics" pp. 351–352, Benjamin Cummings Inc., 2008
John David Jackson, "Classical Electrodynamics, 3rd Ed.", John Wiley & Sons, Inc., 1999
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
Tensor physical quantities
Electromagnetism
James Clerk Maxwell | 0.773628 | 0.987111 | 0.763657 |
Kinesiology | Kinesiology is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques.
Basics
Kinesiology studies the science of human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in biomedical research, as well as in professional programs.
The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sport coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctoral level faculty in North American kinesiology programs received their doctoral training in related disciplines, such as neuroscience, mechanical engineering, psychology, and physiology.
In 1965, the University of Massachusetts Amherst created the United States' first Department of Exercise Science (kinesiology) under the leadership of visionary researchers and academicians in the field of exercise science. In 1967, the University of Waterloo launched Canada's first kinesiology department.
Principles
Adaptation through exercise
Adaptation through exercise is a key principle of kinesiology that relates to improved fitness in athletes as well as health and wellness in clinical populations. Exercise is a simple and established intervention for many movement disorders and musculoskeletal conditions due to the neuroplasticity of the brain and the adaptability of the musculoskeletal system. Therapeutic exercise has been shown to improve neuromotor control and motor capabilities in both normal and pathological populations.
There are many different types of exercise interventions that can be applied in kinesiology to athletic, normal, and clinical populations. Aerobic exercise interventions help to improve cardiovascular endurance. Anaerobic strength training programs can increase muscular strength, power, and lean body mass. Decreased risk of falls and increased neuromuscular control can be attributed to balance intervention programs. Flexibility programs can increase functional range of motion and reduce the risk of injury.
As a whole, exercise programs can reduce symptoms of depression and risk of cardiovascular and metabolic diseases. Additionally, they can help to improve quality of life, sleeping habits, immune system function, and body composition.
The study of the physiological responses to physical exercise and their therapeutic applications is known as exercise physiology, which is an important area of research within kinesiology.
Neuroplasticity
Neuroplasticity is also a key scientific principle used in kinesiology to describe how movement and changes in the brain are related. The human brain adapts and acquires new motor skills based on this principle. The brain can be exposed to new stimuli and experiences and therefore learn from them and create new neural pathways hence leading to brain adaptation. These new adaptations and skills include both adaptive and maladaptive brain changes.
Adaptive plasticity
Recent empirical evidence indicates the significant impact of physical activity on brain function; for example, greater amounts of physical activity are associated with enhanced cognitive function in older adults. The effects of physical activity can be distributed throughout the whole brain, such as higher gray matter density and white matter integrity after exercise training, and/or on specific brain areas, such as greater activation in prefrontal cortex and hippocampus. Neuroplasticity is also the underlying mechanism of skill acquisition. For example, after long-term training, pianists showed greater gray matter density in sensorimotor cortex and white matter integrity in the internal capsule compared to non-musicians.
Maladaptive plasticity
Maladaptive plasticity is defined as neuroplasticity with negative effects or detrimental consequences in behavior. Movement abnormalities may occur among individuals with and without brain injuries due to abnormal remodeling in central nervous system. Learned non-use is an example commonly seen among patients with brain damage, such as stroke. Patients with stroke learned to suppress paretic limb movement after unsuccessful experience in paretic hand use; this may cause decreased neuronal activation at adjacent areas of the infarcted motor cortex.
There are many types of therapies that are designed to overcome maladaptive plasticity in clinic and research, such as constraint-induced movement therapy (CIMT), body weight support treadmill training (BWSTT) and virtual reality therapy. These interventions are shown to enhance motor function in paretic limbs and stimulate cortical reorganization in patients with brain damage.
Motor redundancy
Motor redundancy is a widely used concept in kinesiology and motor control which states that, for any task the human body can perform, there are effectively an unlimited number of ways the nervous system could achieve that task. This redundancy appears at multiple levels in the chain of motor execution:
Kinematic redundancy means that for a desired location of the endpoint (e.g. the hand or finger), there are many configurations of the joints that would produce the same endpoint location in space.
Muscle redundancy means that the same net joint torque could be generated by many different relative contributions of individual muscles.
Motor unit redundancy means that the same net muscle force could be generated by many different relative contributions of motor units within that muscle.
The concept of motor redundancy is explored in numerous studies, usually with the goal of describing the relative contribution of a set of motor elements (e.g. muscles) in various human movements, and how these contributions can be predicted from a comprehensive theory. Two distinct (but not incompatible) theories have emerged for how the nervous system coordinates redundant elements: simplification and optimization. In the simplification theory, complex movements and muscle actions are constructed from simpler ones, often known as primitives or synergies, resulting in a simpler system for the brain to control. In the optimization theory, motor actions arise from the minimization of a control parameter, such as the energetic cost of movement or errors in movement performance.
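As a simple illustration of the kinematic redundancy described above, the sketch below (link lengths and target position are assumed values, not data from the literature) shows several distinct joint configurations of a planar three-link limb model that all place the endpoint at the same target:

```python
import numpy as np

# Sketch: a planar three-link chain has many joint configurations reaching one endpoint.
L1, L2, L3 = 0.3, 0.25, 0.2          # link lengths, m (assumed)
target = np.array([0.45, 0.15])      # desired endpoint, m (assumed)

def fk(q):
    """Forward kinematics: endpoint position for relative joint angles q."""
    a = np.cumsum(q)                 # absolute link orientations
    return np.array([L1*np.cos(a[0]) + L2*np.cos(a[1]) + L3*np.cos(a[2]),
                     L1*np.sin(a[0]) + L2*np.sin(a[1]) + L3*np.sin(a[2])])

# Sweep the first joint; solve the remaining two-link subproblem in closed form.
solutions = []
for q1 in np.linspace(-0.6, 0.9, 7):
    p = target - L1 * np.array([np.cos(q1), np.sin(q1)])   # target seen from joint 2
    d = np.linalg.norm(p)
    if abs(L2 - L3) <= d <= L2 + L3:                       # reachable by links 2 and 3?
        cos_q3 = (d**2 - L2**2 - L3**2) / (2 * L2 * L3)
        q3 = np.arccos(np.clip(cos_q3, -1.0, 1.0))         # "elbow" angle
        q2 = (np.arctan2(p[1], p[0]) - q1
              - np.arctan2(L3 * np.sin(q3), L2 + L3 * np.cos(q3)))
        solutions.append((q1, q2, q3))

for q in solutions:                  # every configuration reaches the same endpoint
    print(np.round(q, 3), "-> endpoint", np.round(fk(np.array(q)), 3))
```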
Scope of practice
In Canada, kinesiology is a professional designation as well as an area of study. In the province of Ontario the scope has been officially defined as, "the assessment of human movement and performance and its rehabilitation and management to maintain, rehabilitate or enhance movement and performance"
Kinesiologists work in a variety of roles as health professionals. They work as rehabilitation providers in hospitals, clinics and private settings working with populations needing care for musculoskeletal, cardiac and neurological conditions. They provide rehabilitation to persons injured at work and in vehicular accidents. Kinesiologists also work as functional assessment specialists, exercise therapists, ergonomists, return to work specialists, case managers and medical legal evaluators. They can be found in hospital, long-term care, clinic, work, and community settings. Additionally, kinesiology is applied in areas of health and fitness for all levels of athletes, but more often found with training of elite athletes.
Licensing and regulation
Canada
In Canada, kinesiology has been designated a regulated health profession in Ontario. Kinesiology was granted the right to regulate in the province of Ontario in the summer of 2007 and similar proposals have been made for other provinces. The College of Kinesiologists of Ontario achieved proclamation on April 1, 2013, at which time the professional title "Kinesiologist" became protected by law. In Ontario only members of the college may call themselves a Registered Kinesiologist. Individuals who have earned degrees in kinesiology can work in research, the fitness industry, clinical settings, and in industrial environments. They also work in cardiac rehabilitation, health and safety, hospital and long-term care facilities and community health centers just to name a few.
Health service
Health promotion
Kinesiologists working in the health promotion industry work with individuals to enhance the health, fitness, and well-being of the individual. Kinesiologists can be found working in fitness facilities, personal training/corporate wellness facilities, and industry.
Clinical/rehabilitation
Kinesiologists work with individuals with disabling conditions to assist in regaining their optimal physical function. They work with individuals in their home, fitness facilities, rehabilitation clinics, and at the worksite. They also work alongside physiotherapists and occupational therapists.
Ergonomics
Kinesiologists work in industry to assess suitability of design of workstations and provide suggestions for modifications and assistive devices.
Health and safety
Kinesiologists are involved in consulting with industry to identify hazards and provide recommendations and solutions to optimize the health and safety of workers.
Disability management/case coordination
Kinesiologists recommend and provide a plan of action to return an injured individual to their optimal function in all aspects of life.
Management/research/administration
Kinesiologists frequently fulfill roles in all above areas, perform research, and manage businesses.
Health education
Kinesiologists working in health education teach people about behaviors that promote wellness. They develop and implement strategies to improve the health of individuals and communities. Community health workers collect data and discuss health concerns with members of specific populations or communities.
Athletic training
Kinesiologists working in athletic training work in cooperation with physicians. Athletic trainers strive to prevent athletes from suffering injuries, diagnose them if they have suffered an injury and apply the appropriate treatment.
Athletic coaches and scouts
Kinesiologists who pursue a career as an athletic coach develop new talent and guide an athlete's progress in a specific sport. They teach amateur or professional athletes the skills they need to succeed at their sport. Many coaches are also involved in scouting. Scouts look for new players and evaluate their skills and likelihood for success at the college, amateur, or professional level.
Physical education teacher
Kinesiologists working as physical education teachers are responsible for teaching fitness, sports and health. They help students stay both mentally and physically fit by teaching them to make healthy choices.
Physical therapy
Kinesiologists working in physical therapy diagnose physical abnormalities, restore mobility to the client, and promote proper function of joints.
History of kinesiology
The Royal Central Institute of Gymnastics (G.C.I.) was founded in 1813 in Stockholm, Sweden, by Pehr Henrik Ling. It was the first physiotherapy school in the world, training hundreds of medical gymnasts who spread the Swedish physical therapy around the entire world.
In 1887, Sweden was the first country in the world to give a national state licence to physiotherapists/physical therapists.
The Swedish medical gymnast and kinesiologist Carl August Georgii, professor at the Royal Gymnastic Central Institute (GCI) in Stockholm, coined the new international word kinesiology in 1854.
The term kinesiology is a literal Greek-based translation of the original Swedish word rörelselära, meaning "movement science". It was the foundation of medical gymnastics, the original physiotherapy and physical therapy, developed over 100 years in Sweden (starting in 1813).
The new medical therapy created in Sweden was originally called rörelselära, and in 1854 was translated into the newly invented international word "kinesiology". This kinesiology consisted of nearly 2,000 physical movements and 50 different types of massage therapy techniques.
They were all used to affect various dysfunctions and even illnesses, not only in the movement apparatus but also in the internal physiology of the body.
Thus, the original classical and traditional kinesiology was not only a system of rehabilitation for the body, or biomechanics as in modern academic kinesiology, but also a therapy for relieving and curing diseases by affecting the autonomic nervous system, organs, and glands in the body.
In 1886, the Swedish Medical Gymnast Nils Posse (1862-1895) introduced the term kinesiology in the U.S. Nils Posse was a graduate of the Royal Gymnastic Central Institute in Stockholm, Sweden and founder of the Posse Gymnasium in Boston, MA. He was teaching at Boston Normal School of Gymnastics BNSG.
The Special Kinesiology of Educational Gymnastics, written by Nils Posse and published in Boston in 1894–1895, was the first book in the world with the word "kinesiology" in its title. Posse was elected posthumously as an Honorary Fellow in Memoriam in the National Academy of Kinesiology.
The National Academy of Kinesiology was formally founded in 1930 in the United States. The academy's dual purpose is to encourage and promote the study and educational applications of the art and science of human movement and physical activity, and to honor, by election to its membership, persons who have directly or indirectly contributed significantly to the study and/or application of the art and science of human movement and physical activity. Membership in the National Academy of Kinesiology is by election, and those elected are known as Fellows. Fellows are elected from around the world. Election into the National Academy of Kinesiology is considered a pinnacle achievement and recognition within the discipline. For further information, see the National Academy of Kinesiology.
Technology in kinesiology
Motion capture technology has application in measuring human movement, and thus kinesiology. Historically, motion capture labs have recorded high fidelity data. While accurate and credible, these systems can come at high capital and operational costs. Modern-day systems have increased accessibility to mocap technology.
Adapted physical activity
Adapted physical activity (APA) is a branch of kinesiology referring to physical activity that is modified or designed to meet the needs of individuals with disabilities. The term originated in physical education and is commonly used in physical education and rehabilitation to refer to physical activities and exercises that have been modified or adapted for individuals with disabilities. These activities are often led by trained professionals, such as adapted physical educators, occupational therapists, or physical therapists.
In 1973 the Fédération Internationale de l'Activité Physique Adaptée (International Federation of Adapted Physical Activity, IFAPA) was formed. It is described as a discipline and profession whose purpose is to facilitate physical activity for people with a wide range of individual differences, emphasizing empowerment, self-determination and access to opportunities.
A common definition of APA is "a cross-disciplinary body of practical and theoretical knowledge directed toward impairments, activity limitations, and participation restrictions in physical activity. It is a service delivery profession and an academic field of study that supports an attitude of acceptance of individual differences, advocates access to active lifestyles and sport, and promotes innovative and cooperative service delivery, supports, and empowerment. Adapted physical activity includes, but is not limited to, physical education, sport, recreation, dance, creative arts, nutrition, medicine, and rehabilitation." This definition aligns with the World Health Organization International Classification of Functioning, Disability and Health whereby disability is seen as the interaction between impairments or conditions with activity limitations, participation restrictions and contextual factors.
Overview
The term APA has evolved over the years, and in some countries it may be known by alternative terms that cover a similar set of constructs, for example sports for disabled people, sports therapy, and psychomotor therapy. APA is considered (i) activities or service delivery, (ii) a profession, and (iii) an academic field of study with a unique body of knowledge that differs from terms such as adapted physical education or para-sport. Principally, APA is an umbrella term that incorporates those terms as sub-specializations (i.e., physical education, para-sports, recreation, and rehabilitation).
APA is proposed to have close links between the field of practice and the field of study with unique theories and growing bodies of practical and scientific knowledge, where APA practitioners are those who provide the services and activities, while APA scholars generate and promote evidence-based research practices among practitioners.
Adaptation to physical activity opportunities is most often provided in the form of appropriately designed and modified equipment (prosthesis, wheelchairs, mono-ski, ball size), task criteria (e.g., modifying skill quality criteria or using a different skill), instructions (e.g., using personal supports, peer tutors, non-verbal instructions, motivational strategies), physical and social environments (e.g., increasing or decreasing court dimensions; segregated vs. inclusive; type of training climate: mastery-oriented, collaborative or competitive social environment; degree of peer and parental support), and rules (e.g., double bounce rule in wheelchair tennis).
In general, the APA presents various sub-specializations such as physical education (e.g., inclusion in physical education, attention to students with special needs, development of new education contents), sports (e.g., development of paralympic sports, activity by sports federations for athletes with disabilities), recreation (e.g., development of the inclusive sport approach and attitudes change programs), and rehabilitation (e.g., physical activity programs in rehabilitation centers, involvement of health-related professionals).
The role of sports and physical activity participation in the population with disabilities has been recognized as a human right in the Convention on the Rights of Persons with Disabilities and declared in other international organization agreements such as:
International Charter of Physical Education, Physical Activity and Sport (UNESCO).
International Conference of Ministers and Senior Officials Responsible for Physical Education and Sport (MINEPS).
Marseille Declaration, Universal Fitness Innovation & Transformation - UFIT Launch October 2015. A Commitment to Inclusion by and for the Global Fitness Industry.
Sustainable Development Goals, Sports and Physical Activity, United Nations (UN).
In this light, APA as a discipline and profession plays an essential role in addressing these needs through a theoretical and practical framework that provides populations with disabilities full access to participation in physical activity.
There are many educational programmes offered around the world that specialise in APA, including disability sports, adapted sports, rehabilitation, adapted physical education and parasport management. In Europe there is the European Diploma of Adapted Physical Activity for bachelor's degrees. At the master's degree level, there is the International Masters in Adapted Physical Activity and the master's degree in Adapted Physical Activity offered by the Lithuanian Sports University. A doctoral programme in adapted physical activity can be studied through the Multi-Institution Mentorship Consortium (MAMC). Furthermore, Oregon State University (USA) offers a Master of Adapted Physical Education in the North American region. In the South American region, San Sebastian University (Chile) offers a Master of Physical Activity and Adapted Sports. The universities of Viña del Mar and UMCE in Chile offer a specialization in adapted physical activity.
International Federation of Adapted Physical Activity
The International Federation of Adapted Physical Activity (IFAPA) is an international scientific organization of higher education scholars, practitioners and students dedicated to promoting APA. IFAPA was founded in 1973 in Quebec, Canada, with its original purpose declared as "to give global focus to professionals who use adapted physical activities for instruction, recreation, remediation, and research". From these beginnings, IFAPA evolved from a small organization into an international body with active regional federations in different world regions.
The current purposes of IFAPA are:
To encourage international cooperation in the field of physical activity to the benefit of individuals of all abilities,
to promote, stimulate and support research in the field of adapted physical activity throughout the world,
and to make scientific knowledge of and practical experiences in adapted physical activity available to all interested persons, organizations and institutions.
IFAPA coordinates national, regional, and international functions (both governmental and nongovernmental) that pertain to sport, dance, aquatics, exercise, fitness, and wellness for individuals of all ages with disabilities or special needs. IFAPA is linked with several other international governing bodies, including the International Paralympic Committee (IPC), Special Olympics International and the International Council of Sport Science and Physical Education (ICSSPE). English is the language used for IFAPA correspondence and conferences. Professor David Legg from Mount Royal University has been President of IFAPA since 2019, when he was elected at the International Symposium of Adapted Physical Activity (ISAPA) hosted by IFAPA Past President Martin Block at the University of Virginia.
The biennial ISAPA scheduled for 2021 was planned to be held at the University of Jyväskylä, Finland. Due to the COVID-19 pandemic it was later announced that it would be held online only, making it the first online ISAPA since the symposium series began in 1977. The 2023 ISAPA was awarded to a multi-site organisation by the Halberg Foundation in New Zealand and Mooven in France.
Regions
Africa - no formal organisation
Asia - Asian society of adapted physical education - ASAPE
Europe - European Federation of Adapted Physical Activity - EUFAPA
Middle East - Middle East Federation of Adapted Physical Activity - MEFAPA
North America - North American Federation of Adapted Physical Activity - NAFAPA
Oceania - no formal organisation
South and Central America - South American Federation of Adapted Physical Activity - SAPA
Research and dissemination in adapted physical activity
Numerous sports science journals publish research papers on adapted sport, while journals specific to APA are fewer. Adapted Physical Activity Quarterly (APAQ) is the only APA-specific journal indexed in the Journal Citation Reports, appearing in both the Sport Sciences and Rehabilitation directories, which is another example of its interdisciplinarity (impact score 2020-2021 = 2.61) (Pérez et al., 2012). Additionally, the European Journal of Adapted Physical Activity (EUJAPA) is another international, multidisciplinary journal introduced to communicate, share and stimulate academic inquiry focusing on APA of persons with disabilities, appearing in the Education directories of Scimago Journal & Country Rank (SJR).
Regarding the dissemination of scientific knowledge generated by the APA, the most relevant international events are described as follows:
International Symposium of Adapted Physical Activity (ISAPA), organized by IFAPA on a biennial basis.
Vista conference, organized by the International Paralympic Committee on a biennial basis.
Paralympic Congress, organized by the International Paralympic Committee every four years.
European Conference on Adapted Physical Activity (EUCAPA), organized by the European Federation of Adapted Physical Activity on a biennial basis.
North American Federation of Adapted Physical Activity (NAFAPA) Conference, organized by NAFAPA on a biennial basis.
South American Adapted Physical Activity Conference, organized by South American Federation of Adapted Physical Activity.
Adapted physical education
Adapted physical education (APE) is a sub-discipline of physical education with a focus on including students with disabilities in the subject. APE refers to physical education for individuals with disabilities that occurs primarily in elementary and secondary schools. According to Dunn and Leitschuh, "adapted physical education programs are those that have the same objectives as the regular physical education program but in which adjustments are made in the regular offerings to meet the needs and abilities of exceptional students". This education can be provided in separate educational settings as well as in general (regular) educational settings. APE aims to educate students for lifelong engagement in physical activity and a healthy lifestyle, offering possibilities to explore movement, games, and sports while supporting personal development.
Goals and objectives of adapted and general physical education might be the same, with some minor differences. For example, learning to push a wheelchair or play wheelchair basketball might be a goal for a child with a spinal cord injury, while running and playing regular basketball is a goal for a child without a disability. In other cases, a child with a disability might focus on fewer objectives or modified objectives within a domain (e.g., physical fitness) compared to peers without disabilities.
Parasport or disability sport
The APA in this field is oriented principally to the parasports movement, which organises sports for and by people with disabilities. Examples of para-sports organizations include sports in the Paralympic Games, Special Olympics and Deaflympics, as well as the Invictus Games, to name a few. Many para-sports have eligibility criteria according to the characteristics of the participants. In the Paralympic Games, this is known as sport classification, a system that provides a framework for determining who can and who cannot participate according to the impact of the impairments on the outcome of the competition.
In the Special Olympics, eligible individuals have to meet the following criteria:
be at least 8 years old
have been identified by an agency or professional as having one of the following conditions: intellectual disabilities, cognitive delays (as measured by formal assessment), or significant learning or vocational problems due to cognitive delay that require specially designed instruction.
Another sporting competition for people with intellectual impairments is the Virtus Games (formerly known as the International Sports Federation for Persons with Intellectual Disability). This is different from the Special Olympics. Eligibility is based on a master list with the following groups:
II 1 Intellectual Disability
II 2 Significant Intellectual Disability
II 3 Autism
To be eligible to compete at the Deaflympics, athletes must have a hearing loss of at least 55 decibels in the better ear.
The Invictus Games were designed to allow sport competitions between wounded, injured or sick servicemen and women (WIS). Therefore, only people in the military sectors can compete in the Invictus games.
Physical medicine and rehabilitation
The results from APA can help the practice of physical medicine and rehabilitation, whereby functional ability and quality of life are improved. Rehabilitation means helping the individual achieve the highest possible level of functioning, independence, participation, and quality of life. APA and sport in rehabilitation for individuals with disabilities are particularly important and are associated with the legacy of the medical rehabilitation specialist Sir Ludwig Guttmann, founder of the International Stoke Mandeville Games Federation, the basis of the present-day Paralympic movement. APA and sports are strongly recommended in rehabilitation programs due to their positive impact and health benefits in people with different disabilities. The APA practitioner provides exercise and training regimens adapted to specific individual needs and works based on the International Classification of Functioning, Disability and Health of the World Health Organization, facilitating a common language with other rehabilitation professionals during the rehabilitation process.
See also
Adapted Physical Education (USA)
Anatomical terms of motion
Assistive technology in sport
Disability
Disabled sports
Exercise physiology
Human musculoskeletal system
Kinanthropometry
Kinesiogenomics
Kinesiotherapy
Mental practice of action
Motor imagery
Movement assessment
Neurology
Parasports
Physical therapy (USA)
Physiological movement
Sports science
References
External links
Ergonomics
Applied sciences
Human physiology
Motor control
Exercise physiology
Department for Business, Energy and Industrial Strategy
The Department for Business, Energy and Industrial Strategy (BEIS) was a ministerial department of the United Kingdom Government from July 2016 to February 2023.
The department was formed during a machinery of government change on 14 July 2016, following Theresa May's appointment as Prime Minister. It was created by a merger between the Department for Business, Innovation, and Skills and the Department of Energy and Climate Change.
On 7 February 2023, under the Rishi Sunak premiership, the department was dissolved. Its functions were split into three new departments: the Department for Business and Trade, the Department for Energy Security and Net Zero, and the Department for Science, Innovation, and Technology. Grant Shapps, the final secretary of state for the old department, became the first Secretary of State for Energy Security and Net Zero.
Responsibilities
The department had responsibility for:
business
industrial strategy
science, research, and innovation
deregulation
energy and clean growth
climate change
Some functions of the former Department for Business, Innovation and Skills, in respect of higher and further education policy, apprenticeships, and skills, were transferred to the Department for Education. May explained in a statement: "The Department for Energy and Climate Change and the remaining functions of the Department for Business, Innovation and Skills have been merged to form a new Department for Business, Energy and Industrial Strategy, bringing together responsibility for business, industrial strategy, science, and innovation with energy and climate change policy. The new department will be responsible for helping to ensure that the economy grows strongly in all parts of the country, based on a robust industrial strategy. It will ensure that the UK has energy supplies that are reliable, affordable, and clean, and it will make the most of the economic opportunities of new technologies and support the UK's global competitiveness more effectively."
Research and innovation partnerships in low and middle-income countries
BEIS spends part of the overseas aid budget on research and innovation through two major initiatives: The Newton Fund and the Global Challenges Research Fund, or GCRF. Both funds aim to leverage the UK's world-class research and innovation capacity to pioneer new ways to support economic development, social welfare, and long-term sustainable and equitable growth in low- and middle-income countries. The Newton Fund builds research and innovation partnerships with partner countries to support their economic development and social welfare and to develop their research and innovation capacity for long-term sustainable growth. The fund is delivered through seven UK delivery partners.
National Security and Investment Act 2021
In August 2022, BEIS blocked the sale of Pulsic Limited in Bristol to a company owned by China's National Integrated Circuit Industry Investment Fund. Pulsic is a chip design software company which makes tools to design and develop circuit layouts for chips.
In November 2022, BEIS ordered Nexperia to sell at least 86 percent of Newport Wafer Fab, the largest chipmaking facility in the UK, which it had acquired in July 2021. Nexperia had itself been acquired in 2018 by the Chinese corporation Wingtech Technology.
Devolution
Some responsibilities extend to England alone due to devolution, while others are reserved or excepted matters that therefore apply to the other countries of the United Kingdom as well.
Reserved and excepted matters are outlined below.
Scotland
Reserved matters:
The Economy Directorates of the Scottish Government handle devolved economic policy.
Northern Ireland
Reserved matters:
Climate change policy
Competition
Consumer protection
Import and export control
Export licensing
Intellectual property
Nuclear energy
Postal services
Product standards, safety and liability
Research councils
Science and research
Telecommunications
Units of measurement
Excepted matter:
Outer space
Nuclear power
The department's main counterpart is:
Department for the Economy (general economic policy)
Ministers
The final roster of ministers in the Department for Business, Energy and Industrial Strategy was:
In October 2016, Archie Norman was appointed as Lead Non-Executive board member for BEIS.
References
Business, Energy and Industrial Strategy
2016 establishments in the United Kingdom
Business in the United Kingdom
Economy ministries
Energy ministries
Innovation ministries
Research ministries
Energy in the United Kingdom
Innovation in the United Kingdom
Ministries established in 2016
2023 disestablishments in the United Kingdom
Government agencies disestablished in the 2020s
Oxidative phosphorylation
Oxidative phosphorylation, also known as electron transport-linked phosphorylation or terminal oxidation, is the metabolic pathway in which cells use enzymes to oxidize nutrients, thereby releasing chemical energy in order to produce adenosine triphosphate (ATP). In eukaryotes, this takes place inside mitochondria. Almost all aerobic organisms carry out oxidative phosphorylation. This pathway is so pervasive because it releases more energy than alternative fermentation processes such as anaerobic glycolysis.
The energy stored in the chemical bonds of glucose is released by the cell in the citric acid cycle, producing carbon dioxide and the energetic electron donors NADH and FADH2. Oxidative phosphorylation uses these molecules and O2 to produce ATP, which is used throughout the cell whenever energy is needed. During oxidative phosphorylation, electrons are transferred from the electron donors to a series of electron acceptors in a series of redox reactions ending in oxygen, whose reaction releases half of the total energy.
In eukaryotes, these redox reactions are catalyzed by a series of protein complexes within the inner membrane of the cell's mitochondria, whereas, in prokaryotes, these proteins are located in the cell's plasma membrane. These linked sets of proteins are called the electron transport chain. In eukaryotes, five main protein complexes are involved, whereas in prokaryotes many different enzymes are present, using a variety of electron donors and acceptors.
The energy transferred by electrons flowing through this electron transport chain is used to transport protons across the inner mitochondrial membrane, in a process called electron transport. This generates potential energy in the form of a pH gradient and the resulting electrical potential across this membrane. This store of energy is tapped when protons flow back across the membrane and down the potential energy gradient, through a large enzyme called ATP synthase in a process called chemiosmosis. The ATP synthase uses the energy to transform adenosine diphosphate (ADP) into adenosine triphosphate, in a phosphorylation reaction. The reaction is driven by the proton flow, which forces the rotation of a part of the enzyme. The ATP synthase is a rotary mechanical motor.
Although oxidative phosphorylation is a vital part of metabolism, it produces reactive oxygen species such as superoxide and hydrogen peroxide, which lead to propagation of free radicals, damaging cells and contributing to disease and, possibly, aging and senescence. The enzymes carrying out this metabolic pathway are also the target of many drugs and poisons that inhibit their activities.
Chemiosmosis
Oxidative phosphorylation works by using energy-releasing chemical reactions to drive energy-requiring reactions. The two sets of reactions are said to be coupled. This means one cannot occur without the other. The chain of redox reactions driving the flow of electrons through the electron transport chain, from electron donors such as NADH to electron acceptors such as oxygen and hydrogen (protons), is an exergonic process – it releases energy, whereas the synthesis of ATP is an endergonic process, which requires an input of energy. Both the electron transport chain and the ATP synthase are embedded in a membrane, and energy is transferred from the electron transport chain to the ATP synthase by movements of protons across this membrane, in a process called chemiosmosis. A current of protons is driven from the negative N-side of the membrane to the positive P-side through the proton-pumping enzymes of the electron transport chain. The movement of protons creates an electrochemical gradient across the membrane, which is called the proton-motive force. It has two components: a difference in proton concentration (a H+ gradient, ΔpH) and a difference in electric potential, with the N-side having a negative charge.
ATP synthase releases this stored energy by completing the circuit and allowing protons to flow down the electrochemical gradient, back to the N-side of the membrane. The electrochemical gradient drives the rotation of part of the enzyme's structure and couples this motion to the synthesis of ATP.
The two components of the proton-motive force are thermodynamically equivalent: in mitochondria, the largest part of the energy is provided by the membrane potential; in alkaliphilic bacteria the electrical energy even has to compensate for a counteracting, inverted pH difference. Conversely, chloroplasts operate mainly on ΔpH. However, they also require a small membrane potential for the kinetics of ATP synthesis. In the case of the fusobacterium Propionigenium modestum, it drives the counter-rotation of subunits a and c of the FO motor of ATP synthase.
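To make these two contributions concrete, here is a minimal Python sketch (not part of the original text) that computes the free energy released per proton re-entering the matrix and the equivalent proton-motive force. The membrane potential of about 160 mV and pH difference of about 0.8 units are assumed, illustrative textbook-style figures rather than values quoted in this article.

```python
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def proton_motive_force(delta_psi_volts, delta_pH, temperature=310.0):
    """Free energy (kJ/mol) released when one proton moves from the P-side
    (intermembrane space) to the N-side (matrix), and the equivalent
    proton-motive force in millivolts.

    delta_psi_volts: psi(matrix) - psi(intermembrane space); negative in mitochondria
    delta_pH:        pH(matrix) - pH(intermembrane space); positive when the matrix is more alkaline
    """
    dG_electrical = F * delta_psi_volts                 # a positive charge moving to the negative side
    dG_chemical = -2.303 * R * temperature * delta_pH   # moving down the H+ concentration gradient
    dG_total = dG_electrical + dG_chemical              # J/mol, negative = energy released
    pmf_millivolts = -dG_total / F * 1000.0             # the same energy expressed as a voltage
    return dG_total / 1000.0, pmf_millivolts

# Assumed illustrative values: 160 mV (matrix negative) and 0.8 pH units at 37 degC
dG, pmf = proton_motive_force(delta_psi_volts=-0.160, delta_pH=0.8)
print(f"~{dG:.0f} kJ/mol per proton, proton-motive force ~{pmf:.0f} mV")
# Roughly -20 kJ/mol per proton (about 210 mV), with the electrical term
# contributing roughly three quarters of the total in this example.
```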
The amount of energy released by oxidative phosphorylation is high, compared with the amount produced by anaerobic fermentation. Glycolysis produces only 2 ATP molecules, but somewhere between 30 and 36 ATPs are produced by the oxidative phosphorylation of the 10 NADH and 2 succinate molecules made by converting one molecule of glucose to carbon dioxide and water, while each cycle of beta oxidation of a fatty acid yields about 14 ATPs. These ATP yields are theoretical maximum values; in practice, some protons leak across the membrane, lowering the yield of ATP.
Electron and proton transfer molecules
The electron transport chain carries both protons and electrons, passing electrons from donors to acceptors, and transporting protons across a membrane. These processes use both soluble and protein-bound transfer molecules. In the mitochondria, electrons are transferred within the intermembrane space by the water-soluble electron transfer protein cytochrome c. This carries only electrons, and these are transferred by the reduction and oxidation of an iron atom that the protein holds within a heme group in its structure. Cytochrome c is also found in some bacteria, where it is located within the periplasmic space.
Within the inner mitochondrial membrane, the lipid-soluble electron carrier coenzyme Q10 (Q) carries both electrons and protons by a redox cycle. This small benzoquinone molecule is very hydrophobic, so it diffuses freely within the membrane. When Q accepts two electrons and two protons, it becomes reduced to the ubiquinol form (QH2); when QH2 releases two electrons and two protons, it becomes oxidized back to the ubiquinone (Q) form. As a result, if two enzymes are arranged so that Q is reduced on one side of the membrane and QH2 oxidized on the other, ubiquinone will couple these reactions and shuttle protons across the membrane. Some bacterial electron transport chains use different quinones, such as menaquinone, in addition to ubiquinone.
Within proteins, electrons are transferred between flavin cofactors, iron–sulfur clusters and cytochromes. There are several types of iron–sulfur cluster. The simplest kind found in the electron transfer chain consists of two iron atoms joined by two atoms of inorganic sulfur; these are called [2Fe–2S] clusters. The second kind, called [4Fe–4S], contains a cube of four iron atoms and four sulfur atoms. Each iron atom in these clusters is coordinated by an additional amino acid, usually by the sulfur atom of cysteine. Metal ion cofactors undergo redox reactions without binding or releasing protons, so in the electron transport chain they serve solely to transport electrons through proteins. Electrons move quite long distances through proteins by hopping along chains of these cofactors. This occurs by quantum tunnelling, which is rapid over distances of less than 1.4 nm.
Eukaryotic electron transport chains
Many catabolic biochemical processes, such as glycolysis, the citric acid cycle, and beta oxidation, produce the reduced coenzyme NADH. This coenzyme contains electrons that have a high transfer potential; in other words, they will release a large amount of energy upon oxidation. However, the cell does not release this energy all at once, as this would be an uncontrollable reaction. Instead, the electrons are removed from NADH and passed to oxygen through a series of enzymes that each release a small amount of the energy. This set of enzymes, consisting of complexes I through IV, is called the electron transport chain and is found in the inner membrane of the mitochondrion. Succinate is also oxidized by the electron transport chain, but feeds into the pathway at a different point.
In eukaryotes, the enzymes in this electron transport system use the energy released by the oxidation of NADH with O2 to pump protons across the inner membrane of the mitochondrion. This causes protons to build up in the intermembrane space, and generates an electrochemical gradient across the membrane. The energy stored in this potential is then used by ATP synthase to produce ATP. Oxidative phosphorylation in the eukaryotic mitochondrion is the best-understood example of this process. The mitochondrion is present in almost all eukaryotes, with the exception of anaerobic protozoa such as Trichomonas vaginalis that instead reduce protons to hydrogen in a remnant mitochondrion called a hydrogenosome.
NADH-coenzyme Q oxidoreductase (complex I)
NADH-coenzyme Q oxidoreductase, also known as NADH dehydrogenase or complex I, is the first protein in the electron transport chain. Complex I is a giant enzyme with the mammalian complex I having 46 subunits and a molecular mass of about 1,000 kilodaltons (kDa). The structure is known in detail only from a bacterium; in most organisms the complex resembles a boot with a large "ball" poking out from the membrane into the mitochondrion. The genes that encode the individual proteins are contained in both the cell nucleus and the mitochondrial genome, as is the case for many enzymes present in the mitochondrion.
The reaction that is catalyzed by this enzyme is the two electron oxidation of NADH by coenzyme Q10 or ubiquinone (represented as Q in the equation below), a lipid-soluble quinone that is found in the mitochondrion membrane:
The start of the reaction, and indeed of the entire electron chain, is the binding of a NADH molecule to complex I and the donation of two electrons. The electrons enter complex I via a prosthetic group attached to the complex, flavin mononucleotide (FMN). The addition of electrons to FMN converts it to its reduced form, FMNH2. The electrons are then transferred through a series of iron–sulfur clusters: the second kind of prosthetic group present in the complex. There are both [2Fe–2S] and [4Fe–4S] iron–sulfur clusters in complex I.
As the electrons pass through this complex, four protons are pumped from the matrix into the intermembrane space. Exactly how this occurs is unclear, but it seems to involve conformational changes in complex I that cause the protein to bind protons on the N-side of the membrane and release them on the P-side of the membrane. Finally, the electrons are transferred from the chain of iron–sulfur clusters to a ubiquinone molecule in the membrane. Reduction of ubiquinone also contributes to the generation of a proton gradient, as two protons are taken up from the matrix as it is reduced to ubiquinol (QH2).
Succinate-Q oxidoreductase (complex II)
Succinate-Q oxidoreductase, also known as complex II or succinate dehydrogenase, is a second entry point to the electron transport chain. It is unusual because it is the only enzyme that is part of both the citric acid cycle and the electron transport chain. Complex II consists of four protein subunits and contains a bound flavin adenine dinucleotide (FAD) cofactor, iron–sulfur clusters, and a heme group that does not participate in electron transfer to coenzyme Q, but is believed to be important in decreasing production of reactive oxygen species. It oxidizes succinate to fumarate and reduces ubiquinone. As this reaction releases less energy than the oxidation of NADH, complex II does not transport protons across the membrane and does not contribute to the proton gradient.
In some eukaryotes, such as the parasitic worm Ascaris suum, an enzyme similar to complex II, fumarate reductase (menaquinol:fumarate oxidoreductase, or QFR), operates in reverse to oxidize ubiquinol and reduce fumarate. This allows the worm to survive in the anaerobic environment of the large intestine, carrying out anaerobic oxidative phosphorylation with fumarate as the electron acceptor. Another unconventional function of complex II is seen in the malaria parasite Plasmodium falciparum. Here, the reversed action of complex II as an oxidase is important in regenerating ubiquinol, which the parasite uses in an unusual form of pyrimidine biosynthesis.
Electron transfer flavoprotein-Q oxidoreductase
Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-Q oxidoreductase), also known as electron transferring-flavoprotein dehydrogenase, is a third entry point to the electron transport chain. It is an enzyme that accepts electrons from electron-transferring flavoprotein in the mitochondrial matrix, and uses these electrons to reduce ubiquinone. This enzyme contains a flavin and a [4Fe–4S] cluster, but, unlike the other respiratory complexes, it attaches to the surface of the membrane and does not cross the lipid bilayer.
In mammals, this metabolic pathway is important in beta oxidation of fatty acids and catabolism of amino acids and choline, as it accepts electrons from multiple acetyl-CoA dehydrogenases. In plants, ETF-Q oxidoreductase is also important in the metabolic responses that allow survival in extended periods of darkness.
Q-cytochrome c oxidoreductase (complex III)
Q-cytochrome c oxidoreductase is also known as cytochrome c reductase, cytochrome bc1 complex, or simply complex III. In mammals, this enzyme is a dimer, with each subunit complex containing 11 protein subunits, a [2Fe–2S] iron–sulfur cluster and three cytochromes: one cytochrome c1 and two b cytochromes. A cytochrome is a kind of electron-transferring protein that contains at least one heme group. The iron atoms inside complex III's heme groups alternate between a reduced ferrous (+2) and oxidized ferric (+3) state as the electrons are transferred through the protein.
The reaction catalyzed by complex III is the oxidation of one molecule of ubiquinol and the reduction of two molecules of cytochrome c, a heme protein loosely associated with the mitochondrion. Unlike coenzyme Q, which carries two electrons, cytochrome c carries only one electron.
As only one of the electrons can be transferred from the QH2 donor to a cytochrome c acceptor at a time, the reaction mechanism of complex III is more elaborate than those of the other respiratory complexes, and occurs in two steps called the Q cycle. In the first step, the enzyme binds three substrates: first QH2, which is then oxidized, with one electron being passed to the second substrate, cytochrome c. The two protons released from QH2 pass into the intermembrane space. The third substrate is Q, which accepts the second electron from the QH2 and is reduced to Q•−, the ubisemiquinone free radical. The first two substrates are released, but this ubisemiquinone intermediate remains bound. In the second step, a second molecule of QH2 is bound and again passes its first electron to a cytochrome c acceptor. The second electron is passed to the bound ubisemiquinone, reducing it to QH2 as it gains two protons from the mitochondrial matrix. This QH2 is then released from the enzyme.
As coenzyme Q is reduced to ubiquinol on the inner side of the membrane and oxidized to ubiquinone on the other, a net transfer of protons across the membrane occurs, adding to the proton gradient. The rather complex two-step mechanism by which this occurs is important, as it increases the efficiency of proton transfer. If, instead of the Q cycle, one molecule of QH2 were used to directly reduce two molecules of cytochrome c, the efficiency would be halved, with only one proton transferred per cytochrome c reduced.
Cytochrome c oxidase (complex IV)
Cytochrome c oxidase, also known as complex IV, is the final protein complex in the electron transport chain. The mammalian enzyme has an extremely complicated structure and contains 13 subunits, two heme groups, as well as multiple metal ion cofactors – in all, three atoms of copper, one of magnesium and one of zinc.
This enzyme mediates the final reaction in the electron transport chain and transfers electrons to oxygen and hydrogen (protons), while pumping protons across the membrane. The final electron acceptor oxygen is reduced to water in this step. Both the direct pumping of protons and the consumption of matrix protons in the reduction of oxygen contribute to the proton gradient. The reaction catalyzed is the oxidation of cytochrome c and the reduction of oxygen:
Alternative reductases and oxidases
Many eukaryotic organisms have electron transport chains that differ from the much-studied mammalian enzymes described above. For example, plants have alternative NADH oxidases, which oxidize NADH in the cytosol rather than in the mitochondrial matrix, and pass these electrons to the ubiquinone pool. These enzymes do not transport protons, and, therefore, reduce ubiquinone without altering the electrochemical gradient across the inner membrane.
Another example of a divergent electron transport chain is the alternative oxidase, which is found in plants, as well as some fungi, protists, and possibly some animals. This enzyme transfers electrons directly from ubiquinol to oxygen.
The electron transport pathways produced by these alternative NADH and ubiquinone oxidases have lower ATP yields than the full pathway. The advantages produced by a shortened pathway are not entirely clear. However, the alternative oxidase is produced in response to stresses such as cold, reactive oxygen species, and infection by pathogens, as well as other factors that inhibit the full electron transport chain. Alternative pathways might, therefore, enhance an organism's resistance to injury, by reducing oxidative stress.
Organization of complexes
The original model for how the respiratory chain complexes are organized was that they diffuse freely and independently in the mitochondrial membrane. However, recent data suggest that the complexes might form higher-order structures called supercomplexes or "respirasomes". In this model, the various complexes exist as organized sets of interacting enzymes. These associations might allow channeling of substrates between the various enzyme complexes, increasing the rate and efficiency of electron transfer. Within such mammalian supercomplexes, some components would be present in higher amounts than others, with some data suggesting a ratio between complexes I/II/III/IV and the ATP synthase of approximately 1:1:3:7:4. However, the debate over this supercomplex hypothesis is not completely resolved, as some data do not appear to fit with this model.
Prokaryotic electron transport chains
In contrast to the general similarity in structure and function of the electron transport chains in eukaryotes, bacteria and archaea possess a large variety of electron-transfer enzymes. These use an equally wide set of chemicals as substrates. In common with eukaryotes, prokaryotic electron transport uses the energy released from the oxidation of a substrate to pump ions across a membrane and generate an electrochemical gradient. Among the bacteria, oxidative phosphorylation in Escherichia coli is understood in most detail, while archaeal systems are at present poorly understood.
The main difference between eukaryotic and prokaryotic oxidative phosphorylation is that bacteria and archaea use many different substances to donate or accept electrons. This allows prokaryotes to grow under a wide variety of environmental conditions. In E. coli, for example, oxidative phosphorylation can be driven by a large number of pairs of reducing agents and oxidizing agents, which are listed below. The midpoint potential of a chemical measures how much energy is released when it is oxidized or reduced, with reducing agents having negative potentials and oxidizing agents positive potentials.
As shown above, E. coli can grow with reducing agents such as formate, hydrogen, or lactate as electron donors, and nitrate, DMSO, or oxygen as acceptors. The larger the difference in midpoint potential between an oxidizing and reducing agent, the more energy is released when they react. Out of these compounds, the succinate/fumarate pair is unusual, as its midpoint potential is close to zero. Succinate can therefore be oxidized to fumarate if a strong oxidizing agent such as oxygen is available, or fumarate can be reduced to succinate using a strong reducing agent such as formate. These alternative reactions are catalyzed by succinate dehydrogenase and fumarate reductase, respectively.
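As an illustration of this relationship, the short sketch below applies ΔG°′ = −nFΔE°′ to a few donor/acceptor combinations. The midpoint potentials are approximate textbook values assumed here for the example; they are not taken from the table referred to above.

```python
F = 96.485  # Faraday constant in kJ/(V*mol of electrons)

# Approximate standard midpoint potentials at pH 7 (assumed textbook values, volts)
E_midpoint = {
    "NADH/NAD+":          -0.32,
    "formate/CO2":        -0.43,
    "succinate/fumarate":  0.03,
    "O2/H2O":              0.82,
}

def delta_G(donor, acceptor, n_electrons=2):
    """Standard free energy change (kJ/mol) for n electrons passing from the
    donor couple to the acceptor couple: dG = -n * F * (E_acceptor - E_donor)."""
    dE = E_midpoint[acceptor] - E_midpoint[donor]
    return -n_electrons * F * dE

for donor, acceptor in [("NADH/NAD+", "O2/H2O"),
                        ("succinate/fumarate", "O2/H2O"),
                        ("formate/CO2", "succinate/fumarate")]:
    print(f"{donor:>20s} -> {acceptor:<20s} dG ~ {delta_G(donor, acceptor):7.1f} kJ/mol")
# NADH -> O2 releases roughly -220 kJ/mol, succinate -> O2 about -152 kJ/mol, and
# formate -> fumarate (anaerobic respiration) about -89 kJ/mol: the larger the gap
# in midpoint potential, the more energy is available to pump protons.
```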
Some prokaryotes use redox pairs that have only a small difference in midpoint potential. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, donating the electrons to oxygen. The small amount of energy released in this reaction is enough to pump protons and generate ATP, but not enough to produce NADH or NADPH directly for use in anabolism. This problem is solved by using a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, causing complex I to generate NADH.
Prokaryotes control their use of these electron donors and acceptors by varying which enzymes are produced, in response to environmental conditions. This flexibility is possible because different oxidases and reductases use the same ubiquinone pool. This allows many combinations of enzymes to function together, linked by the common ubiquinol intermediate. These respiratory chains therefore have a modular design, with easily interchangeable sets of enzyme systems.
In addition to this metabolic diversity, prokaryotes also possess a range of isozymes – different enzymes that catalyze the same reaction. For example, in E. coli, there are two different types of ubiquinol oxidase using oxygen as an electron acceptor. Under highly aerobic conditions, the cell uses an oxidase with a low affinity for oxygen that can transport two protons per electron. However, if levels of oxygen fall, the cell switches to an oxidase that transfers only one proton per electron, but has a high affinity for oxygen.
ATP synthase (complex V)
ATP synthase, also called complex V, is the final enzyme in the oxidative phosphorylation pathway. This enzyme is found in all forms of life and functions in the same way in both prokaryotes and eukaryotes. The enzyme uses the energy stored in a proton gradient across a membrane to drive the synthesis of ATP from ADP and phosphate (Pi). Estimates of the number of protons required to synthesize one ATP have ranged from three to four, with some suggesting cells can vary this ratio, to suit different conditions.
This phosphorylation reaction is an equilibrium, which can be shifted by altering the proton-motive force. In the absence of a proton-motive force, the ATP synthase reaction will run from right to left, hydrolyzing ATP and pumping protons out of the matrix across the membrane. However, when the proton-motive force is high, the reaction is forced to run in the opposite direction; it proceeds from left to right, allowing protons to flow down their concentration gradient and turning ADP into ATP. Indeed, in the closely related vacuolar type H+-ATPases, the hydrolysis reaction is used to acidify cellular compartments, by pumping protons and hydrolysing ATP.
ATP synthase is a massive protein complex with a mushroom-like shape. The mammalian enzyme complex contains 16 subunits and has a mass of approximately 600 kilodaltons. The portion embedded within the membrane is called FO and contains a ring of c subunits and the proton channel. The stalk and the ball-shaped headpiece are called F1 and are the site of ATP synthesis. The ball-shaped complex at the end of the F1 portion contains six proteins of two different kinds (three α subunits and three β subunits), whereas the "stalk" consists of one protein: the γ subunit, with the tip of the stalk extending into the ball of α and β subunits. Both the α and β subunits bind nucleotides, but only the β subunits catalyze the ATP synthesis reaction. Reaching along the side of the F1 portion and back into the membrane is a long rod-like subunit that anchors the α and β subunits into the base of the enzyme.
As protons cross the membrane through the channel in the base of ATP synthase, the FO proton-driven motor rotates. Rotation might be caused by changes in the ionization of amino acids in the ring of c subunits causing electrostatic interactions that propel the ring of c subunits past the proton channel. This rotating ring in turn drives the rotation of the central axle (the γ subunit stalk) within the α and β subunits. The α and β subunits are prevented from rotating themselves by the side-arm, which acts as a stator. This movement of the tip of the γ subunit within the ball of α and β subunits provides the energy for the active sites in the β subunits to undergo a cycle of movements that produces and then releases ATP.
This ATP synthesis reaction is called the binding change mechanism and involves the active site of a β subunit cycling between three states. In the "open" state, ADP and phosphate enter the active site (shown in brown in the diagram). The protein then closes up around the molecules and binds them loosely – the "loose" state (shown in red). The enzyme then changes shape again and forces these molecules together, with the active site in the resulting "tight" state (shown in pink) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state, releasing ATP and binding more ADP and phosphate, ready for the next cycle.
In some bacteria and archaea, ATP synthesis is driven by the movement of sodium ions through the cell membrane, rather than the movement of protons. Archaea such as Methanococcus also contain the A1Ao synthase, a form of the enzyme that contains additional proteins with little similarity in sequence to other bacterial and eukaryotic ATP synthase subunits. It is possible that, in some species, the A1Ao form of the enzyme is a specialized sodium-driven ATP synthase, but this might not be true in all cases.
Oxidative phosphorylation - energetics
The transport of electrons from the redox pair NAD+/NADH to the final redox pair 1/2 O2/H2O can be summarized as
1/2 O2 + NADH + H+ → H2O + NAD+
The potential difference between these two redox pairs is 1.14 volts, which is equivalent to -52 kcal/mol or -2600 kJ per 6 mol of O2.
When one NADH is oxidized through the electron transfer chain, three ATPs are produced, which is equivalent to 7.3 kcal/mol x 3 = 21.9 kcal/mol.
The conservation of the energy can be calculated by the following formula
Efficiency = (21.9 x 100%) / 52 = 42%
So we can conclude that when NADH is oxidized, about 42% of energy is conserved in the form of three ATPs and the remaining (58%) energy is lost as heat (unless the chemical energy of ATP under physiological conditions was underestimated).
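The same arithmetic can be written out explicitly. The sketch below simply reproduces the numbers used above (a 1.14 V potential difference, two electrons per NADH, about 7.3 kcal/mol per ATP and three ATP per NADH); the Faraday constant expressed in kcal/(V·mol) is the only added value.

```python
FARADAY_KCAL = 23.06        # Faraday constant, kcal/(V*mol of electrons)

delta_E = 1.14              # volts, between the NAD+/NADH and 1/2 O2/H2O redox pairs
n_electrons = 2
energy_released = n_electrons * FARADAY_KCAL * delta_E    # ~52.6 kcal per mol NADH oxidized

atp_per_nadh = 3
energy_per_atp = 7.3                                      # kcal/mol
energy_conserved = atp_per_nadh * energy_per_atp          # 21.9 kcal/mol

efficiency = energy_conserved / energy_released
print(f"released {energy_released:.1f} kcal/mol, conserved {energy_conserved:.1f} kcal/mol "
      f"({efficiency:.0%} efficiency)")
# About 42% of the released energy is conserved as ATP; the rest is dissipated as heat.
```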
Reactive oxygen species
Molecular oxygen is a good terminal electron acceptor because it is a strong oxidizing agent. The reduction of oxygen does involve potentially harmful intermediates. Although the transfer of four electrons and four protons reduces oxygen to water, which is harmless, transfer of one or two electrons produces superoxide or peroxide anions, which are dangerously reactive.
These reactive oxygen species and their reaction products, such as the hydroxyl radical, are very harmful to cells, as they oxidize proteins and cause mutations in DNA. This cellular damage may contribute to disease and is proposed as one cause of aging.
The cytochrome c oxidase complex is highly efficient at reducing oxygen to water, and it releases very few partly reduced intermediates; however small amounts of superoxide anion and peroxide are produced by the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, as a highly reactive ubisemiquinone free radical is formed as an intermediate in the Q cycle. This unstable species can lead to electron "leakage" when electrons transfer directly to oxygen, forming superoxide. As the production of reactive oxygen species by these proton-pumping complexes is greatest at high membrane potentials, it has been proposed that mitochondria regulate their activity to maintain the membrane potential within a narrow range that balances ATP production against oxidant generation. For instance, oxidants can activate uncoupling proteins that reduce membrane potential.
To counteract these reactive oxygen species, cells contain numerous antioxidant systems, including antioxidant vitamins such as vitamin C and vitamin E, and antioxidant enzymes such as superoxide dismutase, catalase, and peroxidases, which detoxify the reactive species, limiting damage to the cell.
Oxidative phosphorylation in hypoxic/anoxic conditions
As oxygen is fundamental for oxidative phosphorylation, a shortage in O2 level can alter ATP production rates. Under anoxic conditions, ATP synthase will run in reverse, forcing protons from the matrix back into the intermembrane space and using up ATP in the process. The proton-motive force and ATP production can be maintained by intracellular acidosis. Cytosolic protons that have accumulated with ATP hydrolysis and lactic acidosis can freely diffuse across the mitochondrial outer membrane and acidify the intermembrane space, hence directly contributing to the proton-motive force and ATP production.
Inhibitors
There are several well-known drugs and toxins that inhibit oxidative phosphorylation. Although any one of these toxins inhibits only one enzyme in the electron transport chain, inhibition of any step in this process will halt the rest of the process. For example, if oligomycin inhibits ATP synthase, protons cannot pass back into the mitochondrion. As a result, the proton pumps are unable to operate, as the gradient becomes too strong for them to overcome. NADH is then no longer oxidized and the citric acid cycle ceases to operate because the concentration of NAD+ falls below the concentration that these enzymes can use.
Many site-specific inhibitors of the electron transport chain have contributed to the present knowledge of mitochondrial respiration. Synthesis of ATP is also dependent on the electron transport chain, so all site-specific inhibitors also inhibit ATP formation. The fish poison rotenone, the barbiturate drug amytal, and the antibiotic piericidin A inhibit the transfer of electrons from NADH to coenzyme Q (complex I).
Carbon monoxide, cyanide, hydrogen sulphide and azide effectively inhibit cytochrome c oxidase. Carbon monoxide reacts with the reduced form of the cytochrome, while cyanide and azide react with the oxidised form. The antibiotic antimycin A and British anti-Lewisite, an antidote used against chemical weapons, are two important inhibitors of the site between cytochromes b and c1.
Not all inhibitors of oxidative phosphorylation are toxins. In brown adipose tissue, regulated proton channels called uncoupling proteins can uncouple respiration from ATP synthesis. This rapid respiration produces heat, and is particularly important as a way of maintaining body temperature for hibernating animals, although these proteins may also have a more general function in cells' responses to stress.
History
The field of oxidative phosphorylation began with the report in 1906 by Arthur Harden of a vital role for phosphate in cellular fermentation, but initially only sugar phosphates were known to be involved. However, in the early 1940s, the link between the oxidation of sugars and the generation of ATP was firmly established by Herman Kalckar, confirming the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that the coenzyme NADH linked metabolic pathways such as the citric acid cycle and the synthesis of ATP. The term oxidative phosphorylation was coined by in 1939.
For another twenty years, the mechanism by which ATP is generated remained mysterious, with scientists searching for an elusive "high-energy intermediate" that would link oxidation and phosphorylation reactions. This puzzle was solved by Peter D. Mitchell with the publication of the chemiosmotic theory in 1961. At first, this proposal was highly controversial, but it was slowly accepted and Mitchell was awarded a Nobel prize in 1978. Subsequent research concentrated on purifying and characterizing the enzymes involved, with major contributions being made by David E. Green on the complexes of the electron-transport chain, as well as Efraim Racker on the ATP synthase. A critical step towards solving the mechanism of the ATP synthase was provided by Paul D. Boyer, by his development in 1973 of the "binding change" mechanism, followed by his radical proposal of rotational catalysis in 1982. More recent work has included structural studies on the enzymes involved in oxidative phosphorylation by John E. Walker, with Walker and Boyer being awarded a Nobel Prize in 1997.
See also
Respirometry
TIM/TOM Complex
Notes
References
Further reading
Introductory
Advanced
General resources
Animated diagrams illustrating oxidative phosphorylation Wiley and Co Concepts in Biochemistry
On-line biophysics lectures Antony Crofts, University of Illinois at Urbana–Champaign
ATP Synthase Graham Johnson
Structural resources
PDB molecule of the month:
ATP synthase
Cytochrome c
Cytochrome c oxidase
Interactive molecular models at Universidade Fernando Pessoa:
NADH dehydrogenase
succinate dehydrogenase
Coenzyme Q - cytochrome c reductase
cytochrome c oxidase
Cellular respiration
Integral membrane proteins
Metabolism
Redox
Equivalent potential temperature
Equivalent potential temperature, commonly referred to as theta-e (θe), is a quantity that is conserved during changes to an air parcel's pressure (that is, during vertical motions in the atmosphere), even if water vapor condenses during that pressure change. It is therefore more conserved than the ordinary potential temperature, which remains constant only for unsaturated vertical motions (pressure changes).
θe is the temperature a parcel of air would reach if all the water vapor in the parcel were to condense, releasing its latent heat, and the parcel was brought adiabatically to a standard reference pressure, usually 1000 hPa (1000 mbar), which is roughly equal to atmospheric pressure at sea level.
Its use in estimating atmospheric stability
Stability of incompressible fluid
Like a ball balanced on top of a hill, denser fluid lying above less dense fluid would be dynamically unstable: overturning motions (convection) can lower the center of gravity, and thus will occur spontaneously, rapidly producing a stable stratification (see also stratification (water)) which is thus the observed condition almost all the time. The condition for stability of an incompressible fluid is that density decreases monotonically with height.
Stability of compressible air: potential temperature
If a fluid is compressible like air, the criterion for dynamic stability instead involves potential density, the density of the fluid at a fixed reference pressure. For an ideal gas (see gas laws), the stability criterion for an air column is that potential temperature increases monotonically with height.
To understand this, consider dry convection in the atmosphere, where the vertical variation in pressure is substantial and adiabatic temperature change is important: As a parcel of air moves upward, the ambient pressure drops, causing the parcel to expand. Some of the internal energy of the parcel is used up in doing the work required to expand against the atmospheric pressure, so the temperature of the parcel drops, even though it has not lost any heat. Conversely, a sinking parcel is compressed and becomes warmer even though no heat is added.
Air at the top of a mountain is usually colder than the air in the valley below, but the arrangement is not unstable: if a parcel of air from the valley were somehow lifted up to the top of the mountain, when it arrived it would be even colder than the air already there, due to adiabatic cooling; it would be heavier than the ambient air, and would sink back toward its original position. Similarly, if a parcel of cold mountain-top air were to make the trip down to the valley, it would arrive warmer and lighter than the valley air, and would float back up the mountain.
So cool air lying on top of warm air can be stable, as long as the temperature decrease with height is less than the adiabatic lapse rate; the dynamically important quantity is not the temperature, but the potential temperature—the temperature the air would have if it were brought adiabatically to a reference pressure. The air around the mountain is stable because the air at the top, due to its lower pressure, has a higher potential temperature than the warmer air below.
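For reference, the potential temperature invoked here is conventionally defined by the relation below (a standard textbook form added for illustration, not text from this article), where p0 is a reference pressure, usually 1000 hPa, Rd is the specific gas constant of dry air, and cpd its specific heat at constant pressure:

```latex
\theta = T \left(\frac{p_0}{p}\right)^{R_d / c_{pd}},
\qquad \frac{R_d}{c_{pd}} \approx 0.2854
```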
Effects of water condensation: equivalent potential temperature
A rising parcel of air containing water vapor, if it rises far enough, reaches its lifted condensation level: it becomes saturated with water vapor (see Clausius–Clapeyron relation). If the parcel of air continues to rise, water vapor condenses and releases its latent heat to the surrounding air, partially offsetting the adiabatic cooling. A saturated parcel of air therefore cools less than a dry one would as it rises (its temperature changes with height at the moist adiabatic lapse rate, which is smaller than the dry adiabatic lapse rate). Such a saturated parcel of air can achieve buoyancy, and thus accelerate further upward, a runaway condition (instability) even if potential temperature increases with height. The sufficient condition for an air column to be absolutely stable, even with respect to saturated convective motions, is that the equivalent potential temperature must increase monotonically with height.
Formula
The exact definition of the equivalent potential temperature is expressed in terms of the following quantities (a commonly cited closed form is sketched after this list):
T is the temperature [K] of the air at pressure p,
p0 is a reference pressure, taken as 1000 hPa,
p is the pressure at the point,
Rd and Rv are the specific gas constants of dry air and of water vapour, respectively,
cpd and cw are the specific heat capacities of dry air and of liquid water, respectively,
rt and rv are the total water and water vapour mixing ratios, respectively,
H is the relative humidity,
Lv is the latent heat of vapourisation of water.
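With this notation, one closed form that is commonly quoted for the exact definition is reproduced below. This is a reconstruction from standard formulations rather than text preserved in this article, with pd = p − e denoting the partial pressure of dry air (e being the water vapour pressure), so the exact grouping of terms should be checked against a primary source:

```latex
\theta_e = T \left(\frac{p_0}{p_d}\right)^{\frac{R_d}{c_{pd} + r_t c_w}}
\, H^{\frac{-\,r_v R_v}{c_{pd} + r_t c_w}}
\, \exp\!\left[\frac{L_v\, r_v}{\left(c_{pd} + r_t c_w\right) T}\right]
```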
A number of approximate formulations are used for calculating equivalent potential temperature, since it is not easy to compute integrations along the motion of the parcel. Bolton (1980) gives a review of such procedures with estimates of their error. His best approximation formula, used when accuracy is needed, is written in terms of the following quantities:
θL is the (dry) potential temperature [K] at the lifted condensation level (LCL),
TL is the (approximated) temperature [K] at the LCL,
Td is the dew point temperature at pressure p,
e is the water vapor pressure (so that p − e is the partial pressure of dry air),
κ is the ratio of the specific gas constant to the specific heat of dry air at constant pressure (0.2854),
r is the mixing ratio of water vapor mass per mass [kg/kg] (sometimes the value is given in [g/kg], in which case it should be divided by 1000).
A somewhat more theoretical formula, like that in Holton (1972), is commonly used in the literature when a theoretical explanation is important; it involves:
rs, the saturation mixing ratio of water at the temperature TL, the temperature at the saturation level of the air,
Lv, the latent heat of evaporation at temperature TL (2406 kJ/kg at 40 °C to 2501 kJ/kg at 0 °C), and
cpd, the specific heat of dry air at constant pressure (1005.7 J/(kg·K)).
An approximate version of this formula is sketched below.
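The compact approximate form of this theoretical expression that is widely reproduced in textbooks (reconstructed here from standard sources, not preserved from this article) is:

```latex
\theta_e \approx \theta \exp\!\left(\frac{L_v\, r_s}{c_{pd}\, T_L}\right)
```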
A further simplified formula (used, for example, in Stull 1988, §13.1, p. 546) is preferred when it is desirable to avoid computing the temperature at the saturation level; it uses:
Te, the equivalent temperature,
Rd, the specific gas constant for air (287.04 J/(kg·K)).
A short numerical sketch of this simplified route follows.
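The following Python sketch illustrates this simplified route: it first forms an equivalent temperature by adding a latent-heat term to the actual temperature, then rescales adiabatically to the reference pressure. It uses the constants quoted above (cpd = 1005.7 J/(kg·K), Rd = 287.04 J/(kg·K)) plus an assumed Lv ≈ 2.5 × 10⁶ J/kg; the precise form of the simplification varies between textbooks, so treat this as an illustrative approximation rather than the exact formula from Stull.

```python
C_PD = 1005.7   # specific heat of dry air at constant pressure, J/(kg K)
R_D = 287.04    # specific gas constant of dry air, J/(kg K)
L_V = 2.5e6     # latent heat of vaporisation, J/kg (assumed, near 0 degrees C)
P0 = 1000.0     # reference pressure, hPa


def equivalent_potential_temperature(T_kelvin, p_hpa, r_kg_per_kg):
    """Approximate theta-e via an equivalent temperature (illustrative only)."""
    # Equivalent temperature: add the warming that would result if all water
    # vapour (mixing ratio r) condensed and released its latent heat.
    T_e = T_kelvin + (L_V / C_PD) * r_kg_per_kg
    # Bring the parcel adiabatically to the reference pressure (Poisson relation).
    return T_e * (P0 / p_hpa) ** (R_D / C_PD)


# Example: 25 degC air at 900 hPa carrying 10 g/kg of water vapour.
print(round(equivalent_potential_temperature(298.15, 900.0, 0.010), 1))
```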
Usage
This quantity is used on the synoptic scale for the characterisation of air masses. For instance, in a study of the North American Ice Storm of 1998, professors Gyakum (McGill University, Montreal) and Roebber (University of Wisconsin–Milwaukee) demonstrated that the air masses involved originated in the high Arctic at a level of 300 to 400 hPa the previous week, descended toward the surface as they moved to the Tropics, then moved back up along the Mississippi Valley toward the St. Lawrence Valley. The back trajectories were evaluated using surfaces of constant equivalent potential temperature.
On the mesoscale, equivalent potential temperature is also a useful measure of the static stability of the saturated atmosphere. Under normal, stably stratified conditions, the potential temperature increases with height (∂θ/∂z > 0) and vertical motions are suppressed. If the equivalent potential temperature decreases with height (∂θe/∂z < 0), the atmosphere is unstable to saturated vertical motions, and moist convection is likely. Situations in which the equivalent potential temperature decreases with height, indicating instability in saturated air, are quite common.
See also
Meteorology
Moist static energy
Potential temperature
Weather forecasting
Bibliography
M K Yau and R.R. Rogers, Short Course in Cloud Physics, Third Edition, published by Butterworth-Heinemann, January 1, 1989, 304 pages.
References
Atmospheric thermodynamics
Equivalent units
Élan vital
Élan vital is a term coined by French philosopher Henri Bergson in his 1907 book Creative Evolution, in which he addresses the question of self-organisation and spontaneous morphogenesis of things in an increasingly complex manner. Élan vital was translated in the English edition as "vital impetus", but is usually translated by his detractors as "vital force". It is a hypothetical explanation for evolution and development of organisms, which Bergson linked closely with consciousness – the intuitive perception of experience and the flow of inner time.
Precursors
Distant anticipations of Bergson can be found in the work of the pre-Christian Stoic philosopher Posidonius, who postulated a "vital force" emanated by the sun to all living creatures on the Earth's surface, and in that of Zeno of Elea. The concept of élan vital is also similar to Baruch Spinoza's concept of conatus as well as Arthur Schopenhauer's concept of the will-to-live and the Sanskrit āyus or "life principle".
Influence
The French philosopher Gilles Deleuze attempted to recover the novelty of Bergson's idea in his book Bergsonism, though Deleuze substantially reworked the term. No longer considered a mystical, elusive force acting on brute matter, as it was in the vitalist debates of the late 19th century, élan vital in Deleuze's hands denotes an internal force, a substance in which the distinction between organic and inorganic matter is indiscernible, and the emergence of life undecidable.
In 1912 Beatrice M. Hinkle wrote that Carl Gustav Jung's conception of libido was similar to Bergson's élan vital.
The notion of élan vital had considerable influence on the psychiatrist and phenomenologist Eugène Minkowski and his own concept of a personal élan – the element which keeps us in touch with a feeling of life.
Criticism
According to R. F. Weir, the consensus among geneticists is that they see no "life force" other than the organisational matrix contained in the genes themselves.
The British secular humanist biologist Julian Huxley dryly remarked that Bergson's élan vital is no better an explanation of life than is explaining the operation of a railway engine by its élan locomotif ("locomotive driving force"). The same alleged epistemological fallacy is parodied in Molière's Le Malade imaginaire, where a quack "answers" the question of "Why does opium cause sleep?" with "Because of its soporific power". However, Huxley used the term élan vital in a more metaphorical sense:
The author and popular Christian theologian C. S. Lewis rejected Bergson's concept in his essay The Weight of Glory stating "...even if all the happiness they promised could come to man on earth, yet still each generation would lose it by death, including the last generation of all, and the whole story would be nothing, not even a story, for ever and ever. Hence all the nonsense that Mr. Shaw puts into the final speech of Lilith, and Bergson's remark that the élan vital is capable of surmounting all obstacles, perhaps even death—as if we could believe that any social or biological development on this planet will delay the senility of the sun or reverse the second law of thermodynamics."
See also
Conatus
Emergence
Joie de vivre
Hylozoism
Orthogenesis
Parable of the Invisible Gardener
Vis viva
References
Henri Bergson
Vitalism
Concepts in metaphysics
Minimum total potential energy principle
The minimum total potential energy principle is a fundamental concept used in physics and engineering. It dictates that at low temperatures a structure or body shall deform or displace to a position that (locally) minimizes the total potential energy, with the lost potential energy being converted into kinetic energy (specifically heat).
Some examples
A free proton and free electron will tend to combine to form the lowest energy state (the ground state) of a hydrogen atom, the most stable configuration. This is because that state's energy is 13.6 electron volts (eV) lower than when the two particles are separated by an infinite distance. The dissipation in this system takes the form of spontaneous emission of electromagnetic radiation, which increases the entropy of the surroundings.
A rolling ball will end up stationary at the bottom of a hill, the point of minimum potential energy. The reason is that as it rolls downward under the influence of gravity, friction produced by its motion transfers energy in the form of heat to the surroundings, with an attendant increase in entropy.
A protein folds into the state of lowest potential energy. In this case, the dissipation takes the form of vibration of atoms within or adjacent to the protein.
Structural mechanics
The total potential energy, Π, is the sum of the elastic strain energy, U, stored in the deformed body and the potential energy, V, associated with the applied forces:
Π = U + V
This energy is at a stationary position when an infinitesimal variation from that position involves no change in energy:
δΠ = δ(U + V) = 0
The principle of minimum total potential energy may be derived as a special case of the virtual work principle for elastic systems subject to conservative forces.
The equality between external and internal virtual work (due to virtual displacements) is expressed in terms of the following quantities (the equation itself is sketched after this list):
δu = vector of virtual displacements
T = vector of distributed forces acting on the part St of the surface
f = vector of body forces
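A standard statement of this equality, using the notation above, is sketched below. Here Ω denotes the body's volume, St the loaded part of its surface, and δε and σ the virtual strain and stress vectors; these extra symbols are assumptions introduced for illustration, not notation preserved from the article:

```latex
\int_{S_t} \delta\mathbf{u}^{T}\,\mathbf{T}\; dS
\;+\; \int_{\Omega} \delta\mathbf{u}^{T}\,\mathbf{f}\; d\Omega
\;=\; \int_{\Omega} \delta\boldsymbol{\varepsilon}^{T}\,\boldsymbol{\sigma}\; d\Omega
```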
In the special case of elastic bodies, the right-hand side of this equality (the internal virtual work) can be taken to be the change, δU, of the elastic strain energy due to infinitesimal variations of the real displacements.
In addition, when the external forces are conservative forces, the left-hand side can be seen as the change in the potential energy function V of the forces. The function V is defined so that its variation is the negative of the external virtual work,
δV = −(external virtual work),
where the minus sign implies a loss of potential energy as the force is displaced in its direction. With these two subsidiary conditions, the virtual work equality becomes
δU + δV = δΠ = 0.
This leads to the stationarity condition above, as desired. This variational form is often used as the basis for developing the finite element method in structural mechanics.
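As a toy illustration of the principle (added here, not part of the original article), consider a linear spring of stiffness k loaded by a constant conservative force F. The total potential energy is Π(x) = ½kx² − Fx, and minimizing it numerically recovers the familiar equilibrium displacement x = F/k:

```python
from scipy.optimize import minimize_scalar

K = 200.0  # assumed spring stiffness, N/m
F = 10.0   # assumed applied conservative force, N


def total_potential_energy(x):
    """Pi(x) = elastic strain energy of the spring plus the potential of the load."""
    strain_energy = 0.5 * K * x ** 2   # U
    load_potential = -F * x            # V: potential energy lost as the force advances
    return strain_energy + load_potential


result = minimize_scalar(total_potential_energy)
print(result.x, F / K)  # both approximately 0.05 m
```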
References
Thermodynamics
Solid mechanics
Tensegrity
Tensegrity, tensional integrity or floating compression is a structural principle based on a system of isolated components under compression inside a network of continuous tension, and arranged in such a way that the compressed members (usually bars or struts) do not touch each other while the prestressed tensioned members (usually cables or tendons) delineate the system spatially.
Tensegrity structures are found both in nature and in human-made objects: in the human body, the bones are held in compression while the connective tissues are held in tension, and the same principles have been applied to furniture and architectural design and beyond.
The term was coined by Buckminster Fuller in the 1960s as a portmanteau of "tensional integrity".
Core Concept
Tensegrity is characterized by several foundational principles that define its unique properties:
Continuous Tension: Fundamental to tensegrity, the tension elements—typically cables or tendons—form a continuous network that encases the entire structure. This allows for the even distribution of mechanical stresses and maintains the structural form, contributing to the overall stability and flexibility of the system.
Discontinuous Compression: The compression components, such as struts or rods, are distinct in that they do not make direct contact with each other but are instead suspended within the tension network. This eliminates the need for rigid connections, enhancing the structural efficiency and resilience of the system.
Pre-stressed: A key aspect of tensegrity structures is their pre-stressed state, in which tension elements are tightened during the assembly process. Pre-stressing contributes significantly to the structural stiffness and stability, ensuring that all elements are either in tension or compression at all times.
Self-equilibration: Tensegrity structures are self-equilibrating and so automatically distribute internal stresses across the structure. This allows them to adapt to varying loads without losing structural integrity.
Minimalism and Efficiency: Tensegrity systems employ a minimalist design philosophy, utilizing the minimum amount of materials to achieve maximum structural strength.
Scalability and Modularity: The design principles of tensegrity allow for scalability and modular construction, enabling tensegrity structures to be easily adapted or expanded in size and complexity according to specific requirements.
Because of these patterns, no structural member experiences a bending moment and there are no shear stresses within the system. This can produce exceptionally strong and rigid structures for their mass and for the cross section of the components.
These principles collectively enable tensegrity structures to achieve a balance of strength, resilience, and flexibility, making the concept widely applicable across disciplines including architecture, robotics, and biomechanics.
Early Example
A conceptual building block of tensegrity is seen in the 1951 Skylon. Six cables, three at each end, hold the tower in position. The three cables connected to the bottom "define" its location. The other three cables are simply keeping it vertical.
A three-rod tensegrity structure (shown above in a spinning drawing of a T3-Prism) builds on this simpler structure: the ends of each green rod look like the top and bottom of the Skylon. As long as the angle between any two cables is smaller than 180°, the position of the rod is well defined. While three cables are the minimum required for stability, additional cables can be attached to each node for aesthetic purposes and for redundancy. For example, Snelson's Needle Tower uses a repeated pattern built using nodes that are connected to 5 cables each.
Eleanor Heartney points out visual transparency as an important aesthetic quality of these structures. Korkmaz et al. has argued that lightweight tensegrity structures are suitable for adaptive architecture.
Applications
Architecture
Tensegrities saw increased application in architecture beginning in the 1960s, when Maciej Gintowt and Maciej Krasiński designed the Spodek arena complex (in Katowice, Poland) as one of the first major structures to employ the principle of tensegrity. The roof uses an inclined surface held in check by a system of cables holding up its circumference. Tensegrity principles were also used in David Geiger's Seoul Olympic Gymnastics Arena (for the 1988 Summer Olympics) and the Georgia Dome (for the 1996 Summer Olympics). Tropicana Field, home of the Tampa Bay Rays major league baseball team, also has a dome roof supported by a large tensegrity structure.
On 4 October 2009, the Kurilpa Bridge opened across the Brisbane River in Queensland, Australia. A multiple-mast, cable-stay structure based on the principles of tensegrity, it is currently the world's largest tensegrity bridge.
Robotics
Since the early 2000s, tensegrities have also attracted the interest of roboticists due to their potential for designing lightweight and resilient robots. Numerous studies have investigated tensegrity rovers, bio-mimicking robots, and modular soft robots. The best-known tensegrity robot is the Super Ball Bot, a rover for space exploration using a 6-bar tensegrity structure, currently under development at NASA Ames.
Anatomy
Biotensegrity, a term coined by Stephen Levin, is an extended theoretical application of tensegrity principles to biological structures. Biological structures such as muscles, bones, fascia, ligaments and tendons, or rigid and elastic cell membranes, are made strong by the unison of tensioned and compressed parts. The musculoskeletal system consists of a continuous network of muscles and connective tissues held in tension, while the bones provide discontinuous compressive support, and the nervous system maintains tension in vivo through electrical stimulus. Levin claims that the human spine is also a tensegrity structure, although there is no support for this theory from a structural perspective.
Biochemistry
Donald E. Ingber has developed a theory of tensegrity to describe numerous phenomena observed in molecular biology. For instance, the expressed shapes of cells, whether it be their reactions to applied pressure, interactions with substrates, etc., all can be mathematically modelled by representing the cell's cytoskeleton as a tensegrity. Furthermore, geometric patterns found throughout nature (the helix of DNA, the geodesic dome of a volvox, Buckminsterfullerene, and more) may also be understood based on applying the principles of tensegrity to the spontaneous self-assembly of compounds, proteins, and even organs. This view is supported by how the tension-compression interactions of tensegrity minimize material needed to maintain stability and achieve structural resiliency, although the comparison with inert materials within a biological framework has no widely accepted premise within physiological science. Therefore, natural selection pressures would likely favor biological systems organized in a tensegrity manner.
As Ingber explains:
In embryology, Richard Gordon proposed that embryonic differentiation waves are propagated by an 'organelle of differentiation' where the cytoskeleton is assembled in a bistable tensegrity structure at the apical end of cells called the 'cell state splitter'.
Origins and art history
The origins of tensegrity are controversial. Many traditional structures, such as skin-on-frame kayaks and shōji, use tension and compression elements in a similar fashion.
Russian artist Viatcheslav Koleichuk claimed that the idea of tensegrity was invented first by Kārlis Johansons (known in Russian as Karl Ioganson), a Soviet avant-garde artist of Latvian descent, who contributed some works to the main exhibition of Russian constructivism in 1921. Koleichuk's claim was backed up by Maria Gough for one of the works at the 1921 constructivist exhibition. Snelson has acknowledged the constructivists as an influence on his work. French engineer David Georges Emmerich has also noted how Kārlis Johansons's work (and industrial design ideas) seemed to foresee tensegrity concepts.
Scientific papers reproducing images of the first "Simplex" structures (made with three bars and nine tendons) developed by Ioganson lend support to this account.
In 1948, artist Kenneth Snelson produced his innovative "X-Piece" after artistic explorations at Black Mountain College (where Buckminster Fuller was lecturing) and elsewhere. Some years later, the term "tensegrity" was coined by Fuller, who is best known for his geodesic domes. Throughout his career, Fuller had experimented with incorporating tensile components in his work, such as in the framing of his dymaxion houses.
Snelson's 1948 innovation spurred Fuller to immediately commission a mast from Snelson. In 1949, Fuller developed a tensegrity-icosahedron based on the technology, and he and his students quickly developed further structures and applied the technology to building domes. After a hiatus, Snelson also went on to produce a plethora of sculptures based on tensegrity concepts. His main body of work began in 1959 when a pivotal exhibition at the Museum of Modern Art took place. At the MOMA exhibition, Fuller had shown the mast and some of his other work. At this exhibition, Snelson, after a discussion with Fuller and the exhibition organizers regarding credit for the mast, also displayed some work in a vitrine.
Snelson's best-known piece is his 26.5-meter-high (87 ft) Needle Tower of 1968.
Mathematics of Tensegrity
The loading of at least some tensegrity structures causes an auxetic response and negative Poisson ratio, e.g. the T3-prism and 6-strut tensegrity icosahedron.
Tensegrity prisms
The three-rod tensegrity structure (3-way prism) has the property that, for a given (common) length of compression member "rod" (there are three total) and a given (common) length of tension cable "tendon" (six total) connecting the rod ends together, there is a particular value for the (common) length of the tendon connecting the rod tops with the neighboring rod bottoms that causes the structure to hold a stable shape. For such a structure, it is straightforward to prove that the triangle formed by the rod tops and that formed by the rod bottoms are rotated with respect to each other by an angle of 5π/6 (radians).
The stability ("prestressability") of several 2-stage tensegrity structures are analyzed by Sultan, et al.
The T3-prism (also known as Triplex) can be obtained through form-finding of a straight triangular prism. Its self-equilibrium state is reached when the base triangles lie in parallel planes separated by an angle of twist of π/6. In its unique self-stress state, the first three force values are negative and correspond to the inner components (the struts) in compression, while the rest are positive and correspond to the cables in tension; a numerical sketch of the prism geometry is given below.
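The following Python sketch (added for illustration, not part of the original article) builds node coordinates for a three-strut prism with the 5π/6 relative rotation quoted above, using an assumed triangle radius and height, and prints the resulting strut and tendon lengths. It only illustrates the geometry and an assumed member connectivity; it does not perform the self-stress (force-density) analysis.

```python
import math

R = 1.0                   # assumed circumscribed radius of the top and bottom triangles
H = 1.0                   # assumed height between the two triangles
TWIST = 5 * math.pi / 6   # relative rotation between top and bottom triangles


def ring(radius, z, offset):
    """Three nodes equally spaced on a circle of the given radius at height z."""
    return [
        (radius * math.cos(offset + 2 * math.pi * k / 3),
         radius * math.sin(offset + 2 * math.pi * k / 3),
         z)
        for k in range(3)
    ]


bottom = ring(R, 0.0, 0.0)
top = ring(R, H, TWIST)

# Assumed connectivity: struts join bottom node k to top node k; "saddle" tendons
# join each rod top to a neighbouring rod bottom; the remaining tendons are the
# edges of the top and bottom triangles.
strut_lengths = [math.dist(bottom[k], top[k]) for k in range(3)]
saddle_lengths = [math.dist(top[k], bottom[(k + 1) % 3]) for k in range(3)]
ring_length = math.dist(bottom[0], bottom[1])

print("struts :", [round(v, 3) for v in strut_lengths])
print("saddles:", [round(v, 3) for v in saddle_lengths])
print("ring   :", round(ring_length, 3))
```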
Tensegrity icosahedra
The tensegrity icosahedron, first studied by Snelson in 1949, has struts and tendons along the edges of a polyhedron called Jessen's icosahedron. It is a stable construction, albeit with infinitesimal mobility. To see this, consider a cube of side length 2d, centered at the origin. Place a strut of length 2l in the plane of each cube face, such that each strut is parallel to one edge of the face and is centered on the face. Moreover, each strut should be parallel to the strut on the opposite face of the cube, but orthogonal to all other struts. If the Cartesian coordinates of one strut's endpoints are (0, d, l) and (0, d, −l), those of its parallel strut will be, respectively, (0, −d, −l) and (0, −d, l). The coordinates of the other strut ends (vertices) are obtained by cyclically permuting the coordinates, e.g., (0, d, l) → (l, 0, d) → (d, l, 0) (rotational symmetry about the main diagonal of the cube).
The distance s between any two neighboring vertices, such as (0, d, l) and (d, l, 0), is
s² = (d − l)² + d² + l² = 2(d − l/2)² + 3l²/2.
Imagine this figure built from struts of given length 2l and tendons (connecting neighboring vertices) of given length s, with s² > 3l²/2. The relation above tells us there are two possible values for d: one realized by pushing the struts together, the other by pulling them apart. In the particular case s² = 3l²/2 the two extremes coincide and d = l/2, therefore the figure is the stable tensegrity icosahedron. This choice of parameters gives the vertices the positions of Jessen's icosahedron; they are different from the regular icosahedron, for which the ratio of l and d would be the golden ratio, rather than 2. However both sets of coordinates lie along a continuous family of positions ranging from the cuboctahedron to the octahedron (as limit cases), which are linked by a helical contractive/expansive transformation. This kinematics of the cuboctahedron is the geometry of motion of the tensegrity icosahedron. It was first described by H. S. M. Coxeter and later called the "jitterbug transformation" by Buckminster Fuller.
Since the tensegrity icosahedron represents an extremal point of the above relation, it has infinitesimal mobility: a small change in the length s of the tendon (e.g. by stretching the tendons) results in a much larger change of the distance 2d of the struts.
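As a quick numerical check of the relation above (an added illustration, not part of the original article), the following sketch scans the strut offset d for a fixed strut half-length l and confirms that the tendon length s is extremal at d = l/2, where s = l·sqrt(3/2):

```python
import math

l = 1.0  # half the strut length (struts have length 2*l)


def tendon_length(d):
    """Distance between the neighbouring vertices (0, d, l) and (d, l, 0)."""
    return math.sqrt((d - l) ** 2 + d ** 2 + l ** 2)


# Scan d over [0, 2] and find where the tendon length is smallest.
samples = [(d, tendon_length(d)) for d in [i / 1000 for i in range(0, 2001)]]
d_min, s_min = min(samples, key=lambda pair: pair[1])

print(round(d_min, 3), round(s_min, 4), round(math.sqrt(1.5) * l, 4))
# d_min ~ l/2 = 0.5 and s_min ~ sqrt(3/2)*l, matching the stable configuration.
```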
Patents
, "Tensile-Integrity Structures," 13 November 1962, Buckminster Fuller.
French Patent No. 1,377,290, "Construction de Reseaux Autotendants", 28 September 1964, David Georges Emmerich.
French Patent No. 1,377,291, "Structures Linéaires Autotendants", 28 September 1964, David Georges Emmerich.
, "Suspension Building" (also called aspension), 7 July 1964, Buckminster Fuller.
, "Continuous Tension, Discontinuous Compression Structure," 16 February 1965, Kenneth Snelson.
, "Non-symmetrical Tension-Integrity Structures," 18 February 1975, Buckminster Fuller.
Basic tensegrity structures
Tensegrity structures
See also
Cloud Nine, giant sky-floating tensegrity spheres named by Buckminster Fuller
Kinematics of the cuboctahedron, the geometry of the motion of the tensegrity icosahedron
Notes
References
Bibliography
Online
A good overview on the scope of tensegrity from Fuller's point of view, and an interesting overview of early structures with careful attributions most of the time.
2003 reprint . This is a good starting place for learning about the mathematics of tensegrity and building models.
Further reading
Edmondson, Amy (2007). "A Fuller Explanation", Emergent World LLC
They present the remarkable result that any linear transformation of a tensegrity is also a tensegrity.
Vilnay, Oren (1990). Cable Nets and Tensegric Shells: Analysis and Design Applications, New York: Ellis Horwood Ltd.
Wilken, Timothy (2001). Seeking the Gift Tensegrity, TrustMark
External links
Scientific Publications in the Field of Tensegrity by Swiss Federal Institute of Technology (EPFL), Applied Computing and Mechanics Laboratory (IMAC)
Stephen Levin's Biotensegrity site Several papers on the tensegrity mechanics of biologic structures from viruses to vertebrates by an Orthopedic Surgeon.
Buckminster Fuller
Tensile architecture
Joule
The joule (symbol: J) is the unit of energy in the International System of Units (SI). It is equal to the amount of work done when a force of one newton displaces a mass through a distance of one metre in the direction of that force. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule (1818–1889).
Definition
In terms of SI base units and in terms of SI derived units with special names, the joule is defined as
1 J = 1 kg⋅m²⋅s⁻² = 1 N⋅m = 1 Pa⋅m³ = 1 W⋅s = 1 C⋅V
One joule is also equivalent to any of the following:
The work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb-volt (C⋅V). This relationship can be used to define the volt.
The work required to produce one watt of power for one second, or one watt-second (W⋅s) (compare kilowatt-hour, which is 3.6 megajoules). This relationship can be used to define the watt.
History
The cgs system had been declared official in 1881, at the first International Electrical Congress.
The erg was adopted as its unit of energy in 1882. Wilhelm Siemens, in his inauguration speech as chairman of the British Association for the Advancement of Science (23 August 1882), first proposed the joule as a unit of heat, to be derived from the electromagnetic units ampere and ohm, equivalent in cgs units to 10⁷ erg.
The naming of the unit in honour of James Prescott Joule (1818–1889), at the time retired but still living (aged 63), followed the recommendation of Siemens:
"Such a heat unit, if found acceptable, might with great propriety, I think, be called the Joule, after the man who has done so much to develop the dynamical theory of heat."
At the second International Electrical Congress, on 31 August 1889, the joule was officially adopted alongside the watt and the quadrant (later renamed to henry).
Joule died in the same year, on 11 October 1889.
At the fourth congress (1893), the "international ampere" and "international ohm" were defined, with slight changes in the specifications for their measurement, with the "international joule" being the unit derived from them.
In 1935, the International Electrotechnical Commission (as the successor organisation of the International Electrical Congress) adopted the "Giorgi system", which by virtue of assuming a defined value for the magnetic constant also implied a redefinition of the joule. The Giorgi system was approved by the International Committee for Weights and Measures in 1946. The joule was now no longer defined based on electromagnetic unit, but instead as the unit of work performed by one unit of force (at the time not yet named newton) over the distance of 1 metre. The joule was explicitly intended as the unit of energy to be used in both electromagnetic and mechanical contexts. The ratification of the definition at the ninth General Conference on Weights and Measures, in 1948, added the specification that the joule was also to be preferred as the unit of heat in the context of calorimetry, thereby officially deprecating the use of the calorie. This is the definition declared in the modern International System of Units in 1960.
The definition of the joule as J = kg⋅m²⋅s⁻² has remained unchanged since 1946, but the joule as a derived unit has inherited changes in the definitions of the second (in 1960 and 1967), the metre (in 1983) and the kilogram (in 2019).
Practical examples
One joule represents (approximately):
The typical energy released as heat by a person at rest every 1/60 of a second (a rate of about 60 W, or roughly 5 MJ per day of basal metabolism).
The amount of electricity required to run a 1 W device for 1 s.
The energy required to accelerate a 1 kg mass at 1 m/s² through a distance of 1 m.
The kinetic energy of a 2 kg mass travelling at 1 m/s, or of a 1 kg mass travelling at about 1.4 m/s.
The energy required to lift an apple up 1 m, assuming the apple has a mass of 101.97 g.
The heat required to raise the temperature of 0.239 g of water from 0 °C to 1 °C.
The kinetic energy of a human moving very slowly.
The kinetic energy of a 57 g tennis ball moving at about 6 m/s (22 km/h).
The food energy in slightly more than half of an ordinary-sized sugar crystal.
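A few of these equivalences can be checked directly from the defining formulas for work, gravitational potential energy, kinetic energy and specific heat. The short sketch below is an added illustration (not part of the original text) that assumes g ≈ 9.81 m/s² and a specific heat of water of about 4184 J/(kg·K); each result comes out close to one joule.

```python
g = 9.81          # assumed gravitational acceleration, m/s^2
c_water = 4184.0  # assumed specific heat of liquid water, J/(kg K)

work = 1.0 * 1.0                    # W = F * d: a 1 N force acting over 1 m
apple = 0.10197 * g * 1.0           # E = m * g * h: lifting a 101.97 g apple by 1 m
kinetic = 0.5 * 2.0 * 1.0 ** 2      # E = (1/2) m v^2: a 2 kg mass at 1 m/s
heating = 0.000239 * c_water * 1.0  # Q = m * c * dT: 0.239 g of water warmed by 1 degC

for name, value in [("work", work), ("apple", apple),
                    ("kinetic", kinetic), ("heating", heating)]:
    print(f"{name}: {value:.3f} J")
```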
Multiples
The minimal energy needed to change a bit of data in computation at around room temperature – approximately 2.9 zJ, the thermal energy kT ln 2 at about 300 K – is given by the Landauer limit.
Roughly 160 nJ (the energy equivalent of 1 TeV) is about the kinetic energy of a flying mosquito.
The Large Hadron Collider (LHC) produces collisions of the microjoule order (7 TeV) per particle.
Nutritional food labels in most countries express energy in kilojoules (kJ). One square metre of the Earth receives about 1.4 kJ of solar radiation every second in full daylight. A human in a sprint has approximately 3 kJ of kinetic energy, while a cheetah in a 122 km/h (76 mph) sprint has approximately 20 kJ.
The megajoule is approximately the kinetic energy of a one-megagram (tonne) vehicle moving at 161 km/h (100 mph). The energy required to heat one kilogram of liquid water at constant pressure from 0 °C to 100 °C is approximately 0.42 MJ.
A gigajoule is about the chemical energy released by combusting roughly 22 kg of petroleum (taking an energy density of about 46 MJ/kg). 2 GJ is about the Planck energy unit.
The terajoule is about 0.278 GWh, a quantity often used in energy tables. About 63 TJ of energy was released by Little Boy. The International Space Station, with a mass of approximately 450 tonnes and an orbital velocity of 7.7 km/s, has a kinetic energy of roughly 13 TJ. In 2017, Hurricane Irma was estimated to have a peak wind energy in the terajoule range.
About 210 PJ is equivalent to 50 megatons of TNT, which is the amount of energy released by the Tsar Bomba, the largest man-made explosion ever.
The 2011 Tōhoku earthquake and tsunami in Japan released on the order of an exajoule of energy according to its rating of 9.0 on the moment magnitude scale. Yearly U.S. energy consumption amounts to roughly 100 EJ, and world final energy consumption in 2021 amounted to several hundred exajoules. One petawatt-hour of electricity, or any other form of energy, is 3.6 EJ.
The zettajoule is somewhat more than the amount of energy required to heat the Baltic Sea by 1 °C, assuming properties similar to those of pure water. Human annual world energy consumption is approximately 0.6 ZJ. The energy required to raise the temperature of Earth's atmosphere by 1 °C is approximately 5 ZJ.
The yottajoule is a little less than the amount of energy required to heat the Indian Ocean by 1 °C, assuming properties similar to those of pure water. The thermal output of the Sun is approximately 400 YJ per second.
Conversions
1 joule is equal to (approximately, unless otherwise stated):
1×10⁷ erg (exactly)
0.7376 ft⋅lbf (foot-pound)
23.7 ft⋅pdl (foot-poundal)
Units with exact equivalents in joules include:
1 thermochemical calorie = 4.184 J
1 International Table calorie = 4.1868 J
1 W⋅h = 3600 J
1 kW⋅h = 3.6×10⁶ J (3.6 MJ)
1 W⋅s = 1 J
1 ton TNT = 4.184 GJ
1 foe = 10⁴⁴ J
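These exact equivalences can be collected into a small lookup table; the Python sketch below (an added illustration, not part of the original text) converts a value expressed in any of the tabulated units to joules.

```python
# Exactly defined equivalents of one unit, expressed in joules.
TO_JOULES = {
    "cal_th": 4.184,      # thermochemical calorie
    "cal_IT": 4.1868,     # International Table calorie
    "W_h": 3600.0,        # watt-hour
    "kW_h": 3.6e6,        # kilowatt-hour
    "W_s": 1.0,           # watt-second
    "ton_TNT": 4.184e9,   # ton of TNT equivalent
    "foe": 1e44,          # foe (10**51 erg)
}


def to_joules(value, unit):
    """Convert a value in one of the tabulated units to joules."""
    return value * TO_JOULES[unit]


print(to_joules(1.0, "kW_h"))      # 3600000.0 J
print(to_joules(500.0, "cal_th"))  # 2092.0 J
```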
Newton-metre and torque
In mechanics, the concept of force (acting in some direction) has a close analogue in the concept of torque (acting about some angle).
A result of this similarity is that the SI unit for torque is the newton-metre, which works out algebraically to have the same dimensions as the joule, but they are not interchangeable. The General Conference on Weights and Measures has given the unit of energy the name joule, but has not given the unit of torque any special name, hence it is simply the newton-metre (N⋅m) – a compound name derived from its constituent parts. The use of newton-metres for torque but joules for energy is helpful to avoid misunderstandings and miscommunication.
The distinction may be seen also in the fact that energy is a scalar quantity – the dot product of a force vector and a displacement vector. By contrast, torque is a vector – the cross product of a force vector and a distance vector. Torque and energy are related to one another by the equation
E = τθ,
where E is energy, τ is (the vector magnitude of) torque, and θ is the angle swept (in radians). Since plane angles are dimensionless, it follows that torque and energy have the same dimensions.
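As a small worked example of this relation (added for illustration): a constant torque of 5 N⋅m applied through two full revolutions performs

```latex
E = \tau\theta = 5\ \text{N·m} \times (2 \times 2\pi\ \text{rad}) \approx 62.8\ \text{J},
```

whereas the same 5 N⋅m figure, quoted purely as a torque, says nothing about energy until the swept angle is specified.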
Watt-second
A watt-second (symbol W s or W⋅s) is a derived unit of energy equivalent to the joule. The watt-second is the energy equivalent to the power of one watt sustained for one second. While the watt-second is equivalent to the joule in both units and meaning, there are some contexts in which the term "watt-second" is used instead of "joule", such as in the rating of photographic electronic flash units.
References
External links
James Prescott Joule
SI derived units
Units of energy